The EU AI Act, which entered into force in August 2024 and begins to apply in stages from February 2025, is a regulation designed to make artificial intelligence safer and more transparent. It’s a game-changer for companies developing or deploying AI systems, and ignoring it could cost you dearly: fines for non-compliance can reach up to €35 million or 7% of global annual turnover, whichever is higher. Beyond the fines, consequences include reputational damage and the removal of your product from the market.
This topic isn’t just for tech companies developing cutting-edge AI; it’s relevant to any organization that develops, deploys, or uses AI systems in the European Union or provides AI-powered services to EU users. Besides tech companies, businesses in finance, healthcare, education, retail, recruitment, marketing, and public administration are likely to be affected.
Even startups or companies in the early stages of exploring AI solutions need to evaluate their plans against the EU AI Act to ensure compliance.
If your company handles sensitive data, interacts with vulnerable populations, or uses AI for decision-making, this is a critical issue for you.
The EU AI Act classifies AI systems into four categories according to the level of risk they pose; a simple illustrative sketch follows the list below:
• Minimal-Risk AI Systems: Most AI systems fall here, with no obligations under the Act.
• Limited-Risk AI Systems (Article 50): These include systems like chatbots or AI-driven recommendations. Companies must disclose their use of AI, but the requirements are lighter.
• High-Risk AI Systems (Articles 6-27, Annex III): These systems operate in sensitive areas like healthcare, hiring, law enforcement, or education. They face the most stringent requirements, including risk management, data quality standards, transparency, and ongoing monitoring.
• Unacceptable-Risk AI Systems (Article 5): Practices that are banned outright, including behavioral manipulation, exploitation of vulnerable groups, social scoring, and real-time remote biometric identification.
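To make the tiering concrete, here is a minimal, purely illustrative sketch of how a team might record a first-pass internal screening of a system against these four categories. The RiskTier enum, the flag names, and the classify_risk_tier helper are our own hypothetical constructs, not part of the Act or any official tooling; a real assessment requires legal review of Articles 5, 6, and 50 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # Article 5 prohibited practices
    HIGH = "high"                  # Articles 6-27, Annex III use cases
    LIMITED = "limited"            # Article 50 transparency duties
    MINIMAL = "minimal"            # no specific obligations under the Act

def classify_risk_tier(
    uses_prohibited_practice: bool,    # e.g. social scoring, manipulation
    listed_in_annex_iii: bool,         # e.g. hiring, education, law enforcement
    interacts_with_users_as_ai: bool,  # e.g. chatbots, AI-generated content
) -> RiskTier:
    """Hypothetical first-pass screening; illustrative only, not legal advice."""
    if uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if listed_in_annex_iii:
        return RiskTier.HIGH
    if interacts_with_users_as_ai:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: an AI-driven CV-screening tool used in recruitment
print(classify_risk_tier(False, True, False))  # RiskTier.HIGH
```

The checks are ordered from most to least severe, so a system that trips several flags is always assigned its strictest applicable tier.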
Under the EU AI Act, some AI systems are considered so inherently dangerous or unethical that they are prohibited entirely. These systems are viewed as fundamentally incompatible with core EU values: health, safety, respect for human rights, democracy, and environmental protection. Starting February 2, 2025, AI systems that fall into this category will be banned from the EU market, and any company deploying them could face removal of the system and severe fines of up to €35 million or 7% of their global turnover.
Key prohibited practices include:
• Social Scoring: Systems that rank or evaluate individuals based on their social behavior, characteristics, or perceived “trustworthiness” (Article 5.1(c)). Such systems often lead to unfair treatment, stigmatization, or discrimination, as seen in controversial implementations like social credit scoring in certain regions.
• Manipulation of Behavior: AI that deploys subliminal, purposefully manipulative, or deceptive techniques to distort people’s decisions in harmful ways (Article 5.1(a)), such as pressuring individuals to make purchases or altering their behavior without informed consent.
• Exploitation of Vulnerabilities: AI systems that exploit vulnerabilities related to age, disability, or a person’s social or economic situation (Article 5.1(b)), particularly targeting children or individuals with disabilities by taking advantage of their reduced ability to understand or resist certain influences.
• Predictive Policing Based on Profiling: Systems that assess or predict the risk of a person committing a criminal offence based solely on profiling or personality traits (Article 5.1(d)); the prohibition does not cover AI that merely supports a human assessment based on objective, verifiable facts directly linked to criminal activity.
• Untargeted Scraping of Facial Images: The development or deployment of AI systems creating or expanding facial recognition databases through untargeted scraping of images from the internet or CCTV footage (Article 5.1(e)).
• Emotion Recognition in Sensitive Settings: The use of AI systems to infer emotions in workplace and educational institutions, except where the system is explicitly designed for medical or safety purposes (Article 5.1(f)).
• Biometric Categorization Based on Sensitive Attributes: Systems categorizing individuals based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This prohibition excludes lawful labeling or filtering of biometric datasets in the context of law enforcement (Article 5.1(g)).
• Real-Time Biometric Identification for Law Enforcement: The use of real-time remote biometric identification in publicly accessible spaces, except under strictly defined and necessary conditions (Article 5.1(h)), such as:
– Searching for victims of abduction, human trafficking, or sexual exploitation.
– Preventing imminent threats to life, safety, or terrorist attacks.
– Localizing or identifying individuals suspected of committing serious crimes listed in Annex II of the Act, punishable in the Member State concerned by a custodial sentence or detention order with a maximum period of at least four years.
If your system falls into this category, it will be banned and removed from the market.
But which AI systems are likely to fall into this category on February 2, 2025?
Whether your company has already developed an AI system, is deploying one, or is still in the planning phase, the first and most critical step is understanding your system’s risk level.
• Audit Your System:
– Review what your AI does and the context in which it’s used.
– Identify potential impacts on safety, privacy, fundamental rights, democratic values, and the environment (one way to record these answers is sketched after this list).
• Plan Ahead:
– Even if your system is in development, align your processes with the Act’s requirements early. This proactive approach will save time, resources, and potential legal trouble down the line.
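As a rough illustration of the audit step, the sketch below shows one way to capture those answers in a structured record that can be reviewed and versioned alongside the system. The AISystemAudit class and its field names are our own assumptions about what a useful internal record might contain; the Act does not prescribe any such format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemAudit:
    """Hypothetical internal audit record; field names are illustrative only."""
    system_name: str
    intended_purpose: str
    deployment_context: str                       # where and how the system is used
    affected_groups: list[str] = field(default_factory=list)
    processes_sensitive_data: bool = False
    used_for_decision_making: bool = False
    potential_impacts: list[str] = field(default_factory=list)  # safety, privacy, rights

# Example record for a hypothetical recruitment tool
audit = AISystemAudit(
    system_name="candidate-ranker",
    intended_purpose="Rank job applicants by predicted fit",
    deployment_context="Recruitment for EU-based roles",
    affected_groups=["job applicants"],
    processes_sensitive_data=True,
    used_for_decision_making=True,
    potential_impacts=["fundamental rights", "non-discrimination"],
)
print(audit)
```

Keeping a record like this up to date as the system evolves makes it much easier to show regulators, customers, and your own teams how the risk classification was reached.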
If you’re in the early stages of developing or considering an AI system, now is the time to start assessing its risk level and compliance needs. Waiting until the product is ready to launch might expose your company to delays, additional costs, or, worse, fines and bans.
The EU AI Act doesn’t just affect the systems already on the market – it shapes how AI systems are designed, built, and deployed from the ground up. By integrating compliance into your development process, you’ll stay ahead of the curve and build trust with your users.
No company wants to see their AI system labeled as prohibited, pulled off the market, and slapped with hefty fines.
Here’s how to stay safe:
1. Understand the Rules: Familiarize yourself with the prohibited practices under the EU AI Act.
2. Conduct Risk Assessments: Evaluate your AI systems to identify whether they could fall into the prohibited category. Pay special attention to applications involving social scoring, vulnerable groups, or public surveillance; a rough screening sketch follows this list.
3. Engage Experts: If you’re unsure, consult professionals who specialize in AI regulation to guide your compliance efforts and mitigate risks.
4. Implement AI Governance: Establish clear policies and oversight mechanisms to ensure your AI systems are ethical, transparent, and compliant with the law.
5. Design Responsibly: If your system could potentially be high-risk or prohibited, consider redesigning it to align with ethical principles and regulatory requirements.
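As a sketch of step 2, the snippet below screens a plain-language description of a system’s capabilities against a few of the Article 5 red flags discussed earlier. The keyword list and the screen_for_prohibited_practices helper are hypothetical simplifications for illustration; a genuine assessment needs legal expertise, not string matching.

```python
# Hypothetical screening helper for step 2; illustrative only, not legal advice.
PROHIBITED_RED_FLAGS = {
    "social scoring": "Article 5.1(c)",
    "subliminal manipulation": "Article 5.1(a)",
    "exploits vulnerable groups": "Article 5.1(b)",
    "untargeted facial image scraping": "Article 5.1(e)",
    "emotion recognition at work or school": "Article 5.1(f)",
    "real-time remote biometric identification": "Article 5.1(h)",
}

def screen_for_prohibited_practices(capabilities: list[str]) -> list[tuple[str, str]]:
    """Return (capability, article) pairs that match a known red flag."""
    matches = []
    for capability in capabilities:
        for flag, article in PROHIBITED_RED_FLAGS.items():
            if flag in capability.lower():
                matches.append((capability, article))
    return matches

hits = screen_for_prohibited_practices([
    "Social scoring of loan applicants",
    "Chatbot for customer support",
])
print(hits)  # [('Social scoring of loan applicants', 'Article 5.1(c)')]
```

Any hit from a screen like this is a signal to stop and involve legal counsel before the system goes anywhere near the EU market.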
The EU AI Act is a wake-up call for businesses. It’s not just about avoiding fines; it’s about building ethical and trustworthy AI that respects people’s rights and safety.
Whether you’re developing, deploying, or planning your next AI project, start with a clear understanding of your system’s risk. The sooner you take action, the better positioned your company will be in the rapidly evolving AI landscape.
Are you ready to assess your AI’s risk and ensure compliance? Let’s work together to navigate the EU AI Act and keep your innovations on track.
If you’re developing or deploying AI systems that extend beyond the EU market, it’s essential to explore other relevant frameworks to stay informed and ensure compliance. Doing so will help you mitigate potential risks and avoid unintended consequences. For more insights, be sure to check out our blog on additional AI frameworks and regulations.
Contact us at [email protected].