AI risk classification under the EU AI Act determines which compliance obligations apply to your artificial intelligence systems, directly affecting market access and exposing non-compliant organisations to penalties of up to €35 million or 7% of global annual turnover, whichever is higher. The Act establishes a risk-based approach that categorises AI systems into four distinct risk levels, each triggering different regulatory requirements for organisations deploying AI in the EU market.
• This guide explains all four EU AI Act risk categories (unacceptable, high risk, limited, and minimal risk), the specific classification criteria that determine which category applies to your AI system, and the compliance obligations required for each risk level. We focus exclusively on the EU AI Act framework, which entered into force in August 2024, with its obligations applying in phases thereafter.
• It’s designed for AI providers, deployers, compliance officers, and organisations developing or using AI systems within the EU.
• Accurate risk classification is mandatory for EU AI Act compliance and determines your organisation’s legal obligations, documentation requirements, and market access rights. Misclassification can result in significant harm to your business, including regulatory penalties, operational disruptions, and restricted access to the European market.
What You’ll Learn:
• The four AI risk categories and their specific compliance requirements
• Step-by-step classification criteria to assess your AI systems
• Mandatory obligations for high-risk AI systems, including conformity assessment
• Practical solutions for common classification challenges and borderline cases
The EU AI Act adopts a risk-based approach, assigning compliance obligations in proportion to the potential risks that AI systems pose to health, safety, and fundamental rights. This regulatory framework recognises that different AI applications carry varying levels of risk to individuals and society, requiring tailored oversight mechanisms.
The classification system determines everything from prohibited applications to documentation requirements, with higher risk levels triggering more stringent compliance obligations. Organisations must accurately classify their AI systems to ensure proper regulatory compliance and avoid substantial penalties.
The regulatory burden directly correlates with assessed risk levels: minimal-risk AI systems face virtually no obligations, while high-risk AI systems require extensive conformity assessments, technical documentation, and EU database registration. Annex III specifically defines which AI applications qualify as high risk based on their potential impact on fundamental rights and safety.
This approach protects individuals from significant harm while allowing innovation in lower-risk applications, such as AI-enabled video games and spam filters, to proceed with minimal regulatory interference.
Sector-specific considerations strongly influence classification, particularly in healthcare, finance, law enforcement, border control management, and education.
Within this framework, the classification criteria determine whether your AI system requires conformity assessment, human oversight protocols, or simply transparency disclosures to end users.
Each risk level under the EU AI Act carries distinct compliance obligations, from complete prohibition to voluntary ethical guidelines, based on the assessed potential for significant harm to individuals or society.
Unacceptable-risk AI systems are prohibited entirely due to their inherent threat to fundamental rights and human dignity. Examples include cognitive behavioural manipulation designed to exploit vulnerabilities, social scoring systems that evaluate social behaviour, and biometric categorisation systems that infer sensitive characteristics such as sexual orientation, political opinions, or philosophical beliefs.
Limited exceptions exist for law enforcement in cases involving serious crimes or environmental offences, but civilian applications remain banned, regardless of the safeguards in place. Deploying prohibited AI systems can result in penalties up to €35 million or 7% of global turnover – the Act’s highest penalty tier.
High-risk AI systems include safety components in regulated products (medical devices, automotive systems, aviation equipment) and applications listed in Annex III, such as remote biometric identification systems, critical infrastructure management, credit scoring, and employment and worker management systems. These systems require extensive conformity assessments before market entry.
Mandatory obligations include technical documentation, EU database registration, risk assessments, human oversight protocols, and ongoing monitoring for serious incidents. A narrow exemption applies where an Annex III system does not pose a significant risk, for example because it only performs a narrow procedural task or merely improves the result of a previously completed human activity; organisations relying on this exemption must document that assessment.
Generative AI systems, such as large language models, must inform users that they’re interacting with AI. General-purpose AI models exceeding systemic risk thresholds (10^25 FLOPs) face extra obligations, including model evaluation and incident reporting. These transparency requirements acknowledge potential risks while avoiding the extensive oversight applied to high-risk systems.
Unlike high-risk applications, limited-risk AI systems don’t require conformity assessment or EU database registration, focusing instead on user awareness and responsible deployment practices.
Applications such as AI-enabled video games, spam filters, and basic recommendation systems fall into this category due to their low potential for significant harm. Providers of these systems may voluntarily adopt codes of conduct and ethical AI principles, but they face no mandatory obligations under the AI Act.
This category represents the majority of current AI applications where innovation can proceed without regulatory barriers, supporting the Act’s goal of fostering AI development while protecting fundamental rights.
Proper classification requires a systematic evaluation against the EU AI Act criteria, beginning with prohibited applications and progressing through risk levels to determine the applicable compliance obligations.
When to use this: Organisations deploying AI systems in the EU market must complete this classification before entering the market or continuing operation.
1. Identify Prohibited Applications: Check whether your AI system performs social scoring, manipulates human behaviour to exploit vulnerabilities, or uses real-time remote biometric identification in publicly accessible spaces without a law enforcement exemption.
2. Assess Against Annex III: Determine if your system qualifies as high risk by reviewing applications in critical infrastructure, education, employment, law enforcement, border control management, and other specified domains.
3. Evaluate Transparency Requirements: Check if your system requires user disclosure (for generative AI) or meets general-purpose AI systemic risk thresholds that require additional obligations.
4. Document Classification Decision: Record your assessment methodology, supporting evidence, and resulting compliance roadmap for regulatory review and internal governance.
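To make the order of these checks concrete, here is a minimal Python sketch of the four-step triage. The function and parameter names are hypothetical, the boolean inputs stand in for the legal questions an assessor must answer, and the output is an internal working label, not a legal determination.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # Annex III use cases or regulated-product safety components
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no mandatory obligations

def classify_ai_system(is_prohibited_practice: bool,
                       is_annex_iii_use_case: bool,
                       is_regulated_product_safety_component: bool,
                       needs_transparency_disclosure: bool) -> RiskLevel:
    """Simplified triage mirroring the four-step assessment above."""
    if is_prohibited_practice:
        return RiskLevel.UNACCEPTABLE
    if is_annex_iii_use_case or is_regulated_product_safety_component:
        return RiskLevel.HIGH
    if needs_transparency_disclosure:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

# Example: a customer-facing chatbot that is neither prohibited nor an Annex III use case
print(classify_ai_system(False, False, False, True))  # RiskLevel.LIMITED
```

Whatever the outcome, record the answers to each question alongside the supporting evidence, as described in step 4.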
| Requirement | High-Risk AI Systems | Limited-Risk AI Systems |
| --- | --- | --- |
| Conformity Assessment | Mandatory before market entry | Not required |
| Technical Documentation | Extensive documentation required | Basic transparency disclosures |
| Human Oversight | Proper human review protocols are mandatory | User notification sufficient |
| EU Database Registration | Required registration | Not applicable |
High-risk systems face significantly higher compliance complexity, requiring dedicated resources for documentation, assessment, and ongoing monitoring, compared with limited-risk systems, whose obligations focus on transparency.
Organisations often struggle with borderline cases, evolving AI capabilities, and systems serving multiple purposes, all of which complicate straightforward risk assessment.
Solution: If an AI system could fall under more than one Annex III category or its risk level is uncertain, treat it as high-risk to stay on the safe side. For complex cases that may affect fundamental rights or safety, review the European Commission’s guidance and get legal advice.
Document your reasoning thoroughly to demonstrate good faith compliance efforts during regulatory review.
Solution: For general-purpose AI systems, calculate training compute requirements against the 10^25 FLOPs threshold to determine systemic risk status. Consider both the foundation model and downstream applications separately, as different risk levels may apply depending on the deployment context.
Monitor for model updates or fine-tuning that may alter the classification status over time.
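As a rough sanity check against that threshold, the sketch below uses the widely cited approximation that dense-transformer training compute is roughly 6 × parameters × training tokens. The Act does not prescribe this formula, so treat the result as an order-of-magnitude estimate only; the model size and token count shown are hypothetical.

```python
# Rough training-compute estimate using the common 6 * N * D approximation.
# Actual compute accounting may differ; this is an illustrative sanity check only.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for general-purpose AI models

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens

# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs -> presumed systemic risk: "
      f"{flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```

Because fine-tuning and continued training add to cumulative compute, re-run this kind of estimate whenever the model is materially updated.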
Solution: When AI systems serve both high-risk and minimal-risk functions, apply the highest applicable risk classification to ensure comprehensive compliance. This approach protects against regulatory gaps while acknowledging that preparatory tasks or narrow procedural task components may warrant different treatment.
Segment system functions where technically feasible to minimise regulatory burden on low-risk components.
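A minimal sketch of this "strictest classification wins" rule, assuming risk levels are ordered so that a higher value means stricter obligations (names are illustrative):

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

def overall_classification(component_levels: list[RiskLevel]) -> RiskLevel:
    # The strictest component classification governs the system as a whole.
    return max(component_levels)

# e.g. a chatbot front-end (LIMITED) wrapped around a credit-scoring module (HIGH)
print(overall_classification([RiskLevel.LIMITED, RiskLevel.HIGH]).name)  # HIGH
```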
Accurate AI risk classification forms the foundation of EU AI Act compliance, determining your organisation's legal obligations and market access rights while protecting individuals from potential harms arising from the deployment of artificial intelligence.
To get started:
1. Conduct AI System Inventory: Catalogue all current and planned AI applications within your organisation
2. Perform Risk Assessment: Apply the four-step classification process to each identified system
3. Develop Compliance Roadmap: Prioritise high-risk AI systems for immediate conformity assessment and documentation
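For step 1 in the list above, a lightweight inventory record can start as simply as the sketch below; the field names are illustrative rather than prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an internal AI system inventory (illustrative fields only)."""
    name: str
    business_owner: str
    purpose: str
    deployed_in_eu: bool
    annex_iii_candidate: bool          # flagged for high-risk assessment
    risk_level: str = "unclassified"   # filled in after the risk assessment (step 2)
    compliance_actions: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="CV screening assistant",
        business_owner="HR",
        purpose="Rank incoming job applications",
        deployed_in_eu=True,
        annex_iii_candidate=True,      # employment is an Annex III domain
    ),
]
```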
Accurate AI risk classification under the EU AI Act is essential for compliance and market access. Understanding and applying the correct risk category ensures your AI systems meet legal obligations while protecting fundamental rights and safety.
The EU AI Act categorises AI systems into four risk levels: unacceptable risk (prohibited), high risk (requiring strict compliance), limited risk (requiring transparency), and minimal risk (with no specific obligations).
An AI system is high risk if it is part of regulated products or listed in Annex III, such as biometric ID, critical infrastructure, credit scoring, or border control. High-risk systems need conformity assessments, documentation, registration, and human oversight.
Providers must inform users when they interact with AI, ensuring transparency. No conformity assessments or registrations are required; however, responsible use and user awareness are essential.