
Written by Daniela Atanasovska

Posted on: August 19, 2024

Understanding the EU AI Act Literacy Requirement: Getting Ready for the EU AI Act in Its First 6 Months

The AI Act, a pioneering regulation aimed at governing the use of AI within the European Union, will be fully applicable by 2 August 2026. However, specific provisions will come into force earlier, with significant implications for AI literacy and the operation of generative and high-risk AI models.

According to the European Commission’s explanation, the Act will be implemented in a staged manner, with specific provisions coming into effect at different times.

This phased approach will ensure:

Smooth transition: Businesses and organisations have time to adapt their AI practices to comply with the new regulations.
Focus on priorities: Initial focus is on AI literacy and governance frameworks to lay the groundwork for responsible AI development.
Gradual implementation: As the Act comes into full force, higher risk aspects of AI are addressed progressively.

Key dates and provisions for AI Act implementation are:

2 February 2025: AI literacy provisions take effect for organisations that develop or deploy AI systems.
2 August 2025: Governance rules for general-purpose (generative) AI models begin to apply.
2 August 2026: Obligations for high-risk AI systems apply and the Act becomes fully applicable.

What is AI Literacy?

AI literacy refers to the understanding and knowledge required to use and manage AI technologies effectively. This includes understanding how AI systems work, their potential impacts, and the ethical considerations involved. As AI becomes more integrated into various sectors, enhancing AI literacy is essential for both individuals and organisations to navigate the complexities of AI responsibly.

Why is AI Literacy Important?

Imagine this: Your company develops a cutting-edge AI model for content creation. It’s powerful, efficient, and generates fantastic results. But what if your employees, from developers to marketing teams, don’t understand the potential biases inherent in AI systems?

To recognise these potential biases, it’s essential for them to understand the fundamentals of AI literacy. AI literacy empowers your workforce to:

Understand AI technologies and their potential impacts.
Identify and mitigate risks associated with AI systems.
Develop ethical AI practices.
Comply with regulatory requirements.

Preparing for AI Literacy Provisions

By 2 February 2025, each organisation that develops or deploys an AI system or model must ensure that it is aligned with the AI literacy provisions of the AI Act. This involves:

Assessing the current level of AI understanding across your workforce.
Rolling out training programmes that cover how AI systems work, their potential impacts, and the ethical considerations involved.
Documenting these training and awareness efforts as part of your governance framework.

The Benefits of Proactive Preparation

By prioritising AI literacy, you’re not just complying with regulations; you’re investing in a future-proof workforce. Your company will reap the benefits of:

Reduced risk of compliance failures and AI-related incidents.
Greater innovation, as teams that understand AI can use it with confidence.
A competitive edge in the evolving AI landscape.


Governance of General-Purpose AI Models

Generative AI models, which create new content based on existing data, will be subject to governance rules starting 2 August 2025.

Operators of these models should prioritise risk assessment and establish strong governance frameworks. Key steps include:

Risk Assessment Procedures

Identify Risks: Map out potential risks, such as technical failures, biases, operational misuse, data breaches, and societal impacts like ethical concerns.

Evaluate Impact and Likelihood: Assess the impact and likelihood of each risk to prioritize them effectively.

Mitigation Strategies: Develop strategies to address these risks, including technical improvements, security measures, and organisational actions like staff training.

Continuous Monitoring: Set up continuous monitoring to detect and respond to emerging risks, supported by regular audits.
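To make these steps concrete, here is a minimal sketch of a risk register in Python, assuming a simple 1-to-5 scale for impact and likelihood. The risk names, scores, and mitigations are illustrative examples, not requirements taken from the Act.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    impact: int       # 1 (negligible) to 5 (severe) -- assumed scale
    likelihood: int   # 1 (rare) to 5 (almost certain) -- assumed scale
    mitigation: str

    @property
    def score(self) -> int:
        # Simple heuristic: prioritise risks by impact x likelihood.
        return self.impact * self.likelihood

# Hypothetical entries for a generative AI deployment.
register = [
    Risk("Training-data bias", impact=4, likelihood=3, mitigation="Bias audit and re-sampling"),
    Risk("Data breach", impact=5, likelihood=2, mitigation="Encryption and access controls"),
    Risk("Operational misuse", impact=3, likelihood=3, mitigation="Staff training and usage policy"),
]

# Review the register from the highest-priority risk downwards.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score} -> {risk.mitigation}")
```

Even a simple register like this makes the “evaluate and prioritise” step auditable, which supports the continuous monitoring and regular audits mentioned above.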

Governance Framework

Establish Policies and Roles: Define clear policies and assign roles to ensure structured oversight and accountability.

Ethical Guidelines: Create and enforce guidelines that address fairness, transparency, and accountability in AI development and deployment.

Compliance Checks: Conduct regular checks to ensure AI systems adhere to internal policies and external regulations.

Documentation and Reporting

Maintain Detailed Records: Keep comprehensive documentation of AI activities, including development processes, risk assessments, and mitigation efforts.

Transparent Reporting: Implement transparent reporting mechanisms to communicate AI system performance, risks, and regulatory adherence to stakeholders.
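As an illustration of what such records could look like in practice, here is a minimal sketch of a machine-readable documentation record. The system name, field layout, and values are hypothetical assumptions, not an official template from the Act.

```python
import json

# Hypothetical documentation record for a generative AI system.
documentation_record = {
    "system_name": "ExampleContentGenerator",   # assumed example system
    "version": "1.4.0",
    "intended_purpose": "Drafting marketing copy for internal review",
    "development": {
        "base_model": "in-house transformer",
        "training_data_sources": ["licensed text corpus 2020-2024"],
    },
    "risk_assessment": {
        "last_reviewed": "2025-07-15",
        "open_risks": ["output bias", "prompt injection"],
        "mitigations": ["bias evaluation suite", "input filtering"],
    },
}

# Persist the record so it can be produced for audits or stakeholder reports.
with open("ai_documentation_record.json", "w") as f:
    json.dump(documentation_record, f, indent=2)
```

Keeping such records in a structured format makes transparent reporting to stakeholders far easier than reconstructing the information after the fact.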

Collaboration and Training

Engage with Regulators: Stay in close contact with regulatory authorities to remain updated on best practices and compliance requirements.

Training Programs: Develop training programs to ensure all relevant personnel understand AI governance, risk management, and ethical considerations.

By following these steps, operators of general-purpose AI can effectively prepare for the upcoming regulatory requirements, ensuring their AI systems are ethical, secure, and ready to meet the new standards.

Obligations for High-Risk AI Systems

High-risk AI systems will face obligations starting 2 August 2026. The EU AI Act imposes specific requirements on these systems to promote their safe and ethical use. Key requirements include:

Risk Management System

Continuous Risk Assessment: Regularly identify, analyze, and mitigate risks associated with the AI system.

Monitoring and Updating: Continuously monitor the AI system and update risk management strategies as needed.

Data and Data Governance

Quality and Relevance: Ensure data is high-quality, relevant, and representative to reduce biases and inaccuracies.

Data Governance: Implement strong data governance practices to maintain data integrity and security.

Technical Documentation

System Design and Development: Maintain detailed records of the AI system’s design, development processes, and methodologies.

Performance Metrics: Document performance metrics and testing results to meet regulatory standards.

Record-Keeping

Operational Logs: Keep logs of the AI system’s operations to track performance and identify issues.

Incident Reports: Record and report any incidents or malfunctions, including corrective actions taken.
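A lightweight way to start is to record operations and incidents in a structured log. The sketch below uses only Python’s standard library; the file name, field names, and example events are illustrative assumptions.

```python
import json
import logging
from datetime import datetime, timezone

# All operational events go to a single append-only log file (assumed name).
logging.basicConfig(
    filename="ai_system_operations.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def log_operation(model_version: str, request_id: str, outcome: str) -> None:
    """Append one operational record so performance can be tracked over time."""
    logging.info(json.dumps({
        "event": "operation",
        "model_version": model_version,
        "request_id": request_id,
        "outcome": outcome,
    }))

def report_incident(description: str, corrective_action: str) -> None:
    """Record an incident or malfunction together with the corrective action taken."""
    logging.error(json.dumps({
        "event": "incident",
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "corrective_action": corrective_action,
    }))

log_operation("v1.2.0", "request-42", "completed")
report_incident("Accuracy drop on newly ingested data", "Rolled back to v1.1.3 pending retraining")
```

Structured entries like these can later be filtered to produce the incident reports and performance summaries regulators may ask for.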

Transparency and Provision of Information

Informing Users: Clearly communicate the AI system’s capabilities, limitations, and intended use to users.

Human Oversight: Assign human oversight to monitor the AI system’s operations and intervene when necessary.

Accuracy, Robustness, and Cybersecurity

Accuracy: Ensure the AI system delivers accurate and reliable results.

Robustness: Design the AI system to withstand errors and adversarial attacks.

Cybersecurity: Implement robust cybersecurity measures to protect the AI system from unauthorised access and data breaches.

Obligations of Providers and Deployers

Compliance with Instructions: Use the AI system according to the provider’s instructions and within its intended scope.

Reporting Risks: Immediately report any identified risks or issues to the provider and relevant authorities.

By adhering to these obligations, operators can ensure their high-risk AI systems align with the EU AI Act’s standards, promoting the responsible and secure use of AI technologies.

Penalties for Non-Compliance

Member States will impose effective, proportionate, and dissuasive penalties for violations of the AI Act’s rules. The regulation outlines specific thresholds for these penalties:

– Up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements related to prohibited practices or non-compliance with data requirements.

– Up to €15 million or 3% of the total worldwide annual turnover for non-compliance with other requirements or obligations under the regulation.

– Up to €7.5 million or 1.5% of the total worldwide annual turnover for supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities.

For each category of infringement, the lower threshold applies to SMEs, while the higher threshold applies to larger companies.
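To see how these caps scale with company size, here is a small arithmetic sketch using the figures quoted above and an assumed example turnover. It is illustrative only, not legal advice.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper limit for larger companies: the fixed amount or the share of
    worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

annual_turnover = 2_000_000_000  # assumed example: EUR 2 billion turnover

print(max_fine(annual_turnover, 35_000_000, 0.07))   # prohibited practices / data requirements -> 140,000,000.0
print(max_fine(annual_turnover, 15_000_000, 0.03))   # other obligations -> 60,000,000.0
print(max_fine(annual_turnover, 7_500_000, 0.015))   # misleading information -> 30,000,000.0
```

For a company of that size, the turnover-based ceiling is the binding one in every category, which is why the percentages matter more than the fixed amounts for large providers.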

As the EU Artificial Intelligence (AI) Act approaches full implementation by August 2026, it’s crucial for companies to understand the early provisions and prepare accordingly. Specific obligations, covering AI literacy, the governance of general-purpose AI, and high-risk AI systems, will come into force in stages, beginning as early as February 2025.

Companies must prioritize AI literacy among their workforce to effectively manage AI technologies, understand their potential impacts, and address ethical considerations. Proactive preparation includes comprehensive training programs, risk assessment procedures, and establishing governance frameworks to ensure AI systems are ethical, secure, and compliant.

Failure to comply could result in severe penalties, with fines reaching up to €35 million or 7% of annual turnover.

By investing in AI literacy and adhering to the Act’s requirements, businesses can reduce risks, foster innovation, and maintain a competitive edge in the evolving AI landscape.

Contact Us

Hope you find this useful. If you need an EU Rep, have any GDPR questions, or have received a SAR or Regulator request and need help, then please contact us anytime. We are always happy to help...
GDPR Local team.
