The EU AI Act requires your organisation to classify AI systems by risk level, meet specific obligations based on that classification, and face penalties of up to €35 million or 7% of global turnover for non-compliance. If you operate AI systems that affect the EU market, regardless of where your company is based, these regulations apply to you, with full high-risk compliance required between August 2026 and August 2027.
This guide breaks down what you need to know and do.
The European Union’s Artificial Intelligence Act represents the world’s first comprehensive regulatory framework specifically designed to govern artificial intelligence systems.
The EU AI Act distinguishes itself through a risk-based approach to regulating AI. Rather than treating all AI applications the same, it categorises AI systems based on their potential to harm health, safety, and fundamental rights.
Territorial scope matters for non-EU companies. The EU AI Act applies to:
• Providers placing AI systems on the EU market
• Deployers of AI systems located within the European Union
• Providers and deployers located outside the EU when their AI system outputs are used within the EU
• Importers and distributors of AI systems
The EU AI Act defines four risk categories for AI systems falling within its scope. Each category carries different obligations, from outright prohibition to voluntary compliance.
| Risk Level | Description | Obligations | Examples |
| --- | --- | --- | --- |
| Unacceptable Risk | AI systems that pose clear threats to safety, rights, or livelihoods | Banned outright | Social scoring, manipulative AI, real-time biometric identification |
| High Risk | AI used in sensitive areas with significant impact potential | Stringent pre-market requirements | Credit scoring, hiring tools, medical diagnostics |
| Limited Risk (Transparency) | AI that requires user awareness | Disclosure obligations | Chatbots, emotion recognition systems, generative AI systems |
| Minimal Risk | Most AI systems with low-risk applications | No specific obligations | Spam filters, video games, inventory management |
Most AI systems on the market today fall into the minimal risk category. The regulations focus enforcement resources on AI applications that genuinely threaten people.
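For teams triaging a portfolio of systems, the four-tier structure maps naturally onto a simple internal record. Below is a minimal Python sketch of such a triage aid; the enum values, obligation labels, and `AISystem` class are illustrative assumptions, not terminology from the Act, and the classification itself remains a legal judgement.

```python
from enum import Enum
from dataclasses import dataclass

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # stringent pre-market requirements
    LIMITED = "limited"             # transparency/disclosure obligations
    MINIMAL = "minimal"             # no specific obligations

# Hypothetical mapping used for internal triage, not a legal determination.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited: must not be placed on the EU market"],
    RiskLevel.HIGH: ["conformity assessment", "risk management system",
                     "data governance", "technical documentation",
                     "EU database registration"],
    RiskLevel.LIMITED: ["user disclosure"],
    RiskLevel.MINIMAL: [],
}

@dataclass
class AISystem:
    name: str
    risk_level: RiskLevel

    def obligations(self) -> list[str]:
        return OBLIGATIONS[self.risk_level]

if __name__ == "__main__":
    chatbot = AISystem("customer-support-chatbot", RiskLevel.LIMITED)
    print(chatbot.obligations())  # ['user disclosure']
```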
The AI Act prohibits eight specific AI practices, effective February 2, 2025. These bans address harmful AI-based manipulation and exploitation of vulnerable populations.
Banned practices include:
• Harmful AI-based manipulation using subliminal techniques to distort behaviour and cause significant harm
• Harmful AI-based exploitation targeting vulnerabilities related to age, disability, or social/economic circumstances
• Social scoring by public authorities leading to detrimental or unfavourable treatment
• Predictive policing that assesses individual criminal risk based solely on profiling or personality traits
• Untargeted scraping of the internet or CCTV material to build facial recognition databases
• Emotion recognition systems in workplace and educational settings
• Biometric categorisation inferring protected characteristics like race, political opinions, or sexual orientation
• Real-time remote biometric identification systems for law enforcement purposes in publicly accessible spaces (with narrow exceptions for serious crimes)
Penalties for using prohibited AI practices reach up to €35 million or 7% of global annual turnover – whichever is higher.
High-risk AI systems face the most demanding compliance requirements under the EU AI Act. These are AI systems considered to pose significant risks to health, safety, or fundamental rights.
Mandatory requirements for high-risk AI include:
• Conformity assessments before market placement
• Risk management systems throughout the AI lifecycle
• Data governance for training datasets to minimise bias
• Technical documentation for authority verification
Providers must register high-risk systems in the EU database maintained by the European Commission. Compliance deadlines run from August 2026 for new systems, extending to August 2027 for AI embedded in regulated products like medical devices or vehicles.
General-purpose AI (GPAI) models, meaning foundation models trained on broad data for multiple downstream applications, face specific obligations from August 2, 2025. This includes large language models and other general-purpose systems powering a wide range of applications.
The EU AI Act creates two tiers of obligations based on systemic risk:
Standard GPAI model obligations:
• Provide technical documentation to the European AI Office
• Prepare summaries of training data content
• Maintain policies respecting EU copyright law
• Share information with downstream providers

GPAI models classified as posing systemic risk face additional obligations, including model evaluations with adversarial testing, assessment and mitigation of systemic risks, serious-incident reporting to the AI Office, and adequate cybersecurity protections.
The AI Office has developed a General-Purpose AI Code of Practice, finalised in 2025, which provides detailed compliance guidance. The AI Board advises on implementation and harmonisation across EU Member States.
The EU AI Act assigns different obligations to participants across the AI value chain. Understanding your role determines your compliance responsibilities.
Providers develop or place AI systems on the EU market. The AI Act requires providers to:
• Establish risk management systems for high-risk AI
• Meet data governance requirements for training data
• Prepare technical documentation
• Design systems enabling human oversight
• Achieve required accuracy, robustness, and cybersecurity levels
• Register systems in the EU database (for high-risk)
• Conduct conformity assessments
Deployers use AI systems under their authority. Obligations include:
• Operating AI systems according to instructions
• Assigning competent human oversight personnel
• Monitoring the AI system operation
• Conducting fundamental rights impact assessments (for public deployers)
• Keeping logs generated by high-risk systems (see the retention sketch after this list)
• Meeting AI literacy obligations for staff
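The log-keeping duty above is operationally the most concrete deployer obligation. Here is a minimal Python sketch of append-only log retention, assuming a hypothetical JSON Lines file and record schema; the Act requires keeping the logs a high-risk system generates but prescribes no particular format.

```python
import json
import time
from pathlib import Path

# Minimal sketch of deployer-side log retention for a high-risk system.
# The record fields are assumptions for illustration only.
LOG_PATH = Path("high_risk_ai_operations.jsonl")

def record_operation(system_id: str, decision: str, overseer: str) -> None:
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "decision": decision,
        "human_overseer": overseer,   # who exercised oversight
    }
    with LOG_PATH.open("a") as f:     # append-only, one JSON object per line
        f.write(json.dumps(entry) + "\n")

record_operation("credit-scoring-v2", "application_declined", "j.doe")
```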
Importers and distributors, who bring AI systems into the EU market or make them available, must:
• Verify conformity markings and documentation
• Confirm provider compliance with applicable requirements
• Report non-compliant systems to national authorities
• Maintain traceability information
Practical preparation steps (a sketch of an inventory record follows the list):
1. Determine your role(s) in the AI value chain for each system
2. Identify which systems fall under which risk category
3. Map existing processes to regulatory requirements
4. Assign internal responsibility for AI governance
5. Begin documentation and monitoring preparations
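Steps 1, 2, and 4 amount to building and owning an AI system inventory. A minimal Python sketch of one inventory entry follows; the field names and the `needs_eu_compliance` helper are hypothetical, chosen only to illustrate the triage.

```python
from dataclasses import dataclass

# Hypothetical inventory record for steps 1-2 and 4; the fields are
# illustrative, not taken from the AI Act or any official template.
@dataclass
class InventoryEntry:
    system_name: str
    role: str                  # "provider", "deployer", "importer", or "distributor"
    affects_eu_market: bool    # includes indirect impacts via system outputs
    risk_category: str         # "unacceptable" | "high" | "limited" | "minimal"
    governance_owner: str      # step 4: who is internally responsible
    documentation_started: bool = False

def needs_eu_compliance(entry: InventoryEntry) -> bool:
    """Flag systems that fall under the Act's territorial scope."""
    return entry.affects_eu_market

inventory = [
    InventoryEntry("resume-screener", "deployer", True, "high", "hr-compliance"),
    InventoryEntry("internal-spam-filter", "deployer", False, "minimal", "it-ops"),
]
for entry in filter(needs_eu_compliance, inventory):
    print(entry.system_name, entry.risk_category)   # resume-screener high
```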
Different sectors face distinct challenges under EU AI regulations. Here’s how the rules affect key industries.
Financial institutions using AI for credit scoring, fraud detection, or automated trading face high-risk classification for many AI applications. Banks must prepare for:
• Conformity assessments for lending decisions
• Bias testing in creditworthiness evaluations
• Documentation of training data and model logic
• Human oversight for automated decisions affecting customers
• Managing overlap with DORA operational resilience requirements
Medical AI systems often qualify as high-risk systems, particularly those involved in diagnosis or treatment recommendations. Requirements include:
• Integration with existing medical device regulations
• Clinical validation documentation
• Patient safety monitoring systems
• An extended compliance timeline, to August 2027, for some applications
AI hiring tools, performance monitoring, and workforce management systems face stringent requirements. The sector must address:
• Prohibited emotion recognition in workplaces
• High-risk classification for AI used in recruitment and promotion decisions
• Bias mitigation in candidate screening
• Transparency obligations to job applicants
Law enforcement uses of AI receive special attention under the AI Act. Key restrictions include:
• Prohibition on real-time remote biometric identification in publicly accessible spaces (with limited exceptions)
• High-risk classification for crime analytics tools
• Strict documentation for predictive systems
• Fundamental rights safeguards
The AI regulation establishes a multi-layered enforcement structure with significant penalties for non-compliance.
European AI Office: Centralised oversight for general-purpose AI models and systemic risk assessment.
National Competent Authorities: Each EU Member State designates authorities responsible for supervising AI systems within their territory.
Market Surveillance Authorities: Monitor AI systems already on the market for ongoing compliance.
National authorities can:
• Conduct inspections and audits
• Require corrective actions
• Order withdrawal of non-compliant AI systems from the market
• Impose administrative fines
• Pursue injunctions and other remedies
AI systems often trigger multiple regulatory frameworks. Managing overlapping requirements efficiently prevents duplication and gaps.
The General Data Protection Regulation applies when AI processes personal data. Key overlaps include:
• Legal basis requirements for AI training data
• Automated decision-making restrictions under Article 22
• Data subject rights applicable to AI outputs
• Privacy by design principles in AI development
AI systems in critical infrastructure sectors must meet cybersecurity obligations:
• Risk management measures for AI components
• Incident reporting for AI-related security events
• Supply chain security for AI providers
• Overlap with AI Act cybersecurity requirements for high-risk systems
Financial sector AI falls under the Digital Operational Resilience Act:
• ICT risk management for AI systems
• Resilience testing for AI in critical functions
• Third-party risk management for AI providers
• Incident classification and reporting
Compliance preparation should begin immediately: the ban on prohibited practices has applied since February 2, 2025. Here's a structured approach.
Start by cataloguing all AI systems your organisation develops, deploys, or distributes. Identify which systems affect the EU market, including indirect impacts, and assess each one’s preliminary risk classification to determine applicable regulatory obligations.
Assign individuals or teams responsible for AI compliance, establish decision-making processes for system approval, create policies governing AI development, procurement, and deployment, and ensure staff meet AI literacy requirements through targeted training programs.
Develop technical documentation templates aligned with regulatory requirements; maintain training data summaries and quality records; implement risk assessment procedures; and establish systems for incident logging and reporting.
Establish procedures for reporting serious incidents to national authorities, define thresholds for internal escalation, create communication plans for affected users, and implement remediation processes for non-compliant AI systems.
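As a concrete illustration of escalation thresholds, the Python sketch below maps incident severity to actions; the severity labels and action lists are invented for illustration and are not categories defined by the AI Act.

```python
# Hypothetical severity thresholds for internal escalation and authority
# reporting; the cut-offs and labels are illustrative assumptions.
SEVERITY_ACTIONS = {
    "minor":   ["log internally"],
    "major":   ["log internally", "escalate to AI governance owner"],
    "serious": ["log internally", "escalate to AI governance owner",
                "report to national authority", "notify affected users"],
}

def handle_incident(severity: str) -> list[str]:
    if severity not in SEVERITY_ACTIONS:
        raise ValueError(f"unknown severity: {severity}")
    return SEVERITY_ACTIONS[severity]

print(handle_incident("serious"))
```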
Complying with EU AI regulations is essential for any organisation deploying AI in or affecting the EU. By understanding risk categories, meeting high-risk obligations, implementing governance and documentation systems, and preparing incident responses, businesses can reduce legal risks, avoid hefty fines, and build trustworthy AI systems.
Does the EU AI Act apply to companies based outside the EU? Yes, if your AI systems or their outputs are used within the European Union. The AI regulation has extraterritorial reach, covering providers and deployers outside the EU whose AI systems affect natural persons in EU Member States.
How do you know whether your AI system is high-risk? Check whether it falls into the Annex III categories (biometric identification, critical infrastructure, employment, essential services, law enforcement, migration, justice) or is a safety component of a product covered by the EU harmonisation legislation listed in Annex I.
What documentation do you need to maintain? For high-risk AI systems: technical documentation covering system description, development methodology, risk management, data governance, testing results, and monitoring plans. For GPAI models: technical documentation plus training data summaries and copyright compliance evidence.
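To close, a small Python sketch of a documentation checklist keyed to the two cases above; the section names paraphrase this answer, and the dictionary structure is an assumption, not an official template.

```python
# Hypothetical documentation checklist keyed by system type; section names
# paraphrase the requirements above and are not an official template.
REQUIRED_DOCS = {
    "high_risk_system": [
        "system description", "development methodology", "risk management",
        "data governance", "testing results", "monitoring plan",
    ],
    "gpai_model": [
        "technical documentation", "training data summary",
        "copyright compliance evidence",
    ],
}

def missing_docs(system_type: str, completed: set[str]) -> list[str]:
    """Return the documentation sections not yet completed."""
    return [d for d in REQUIRED_DOCS[system_type] if d not in completed]

print(missing_docs("gpai_model", {"technical documentation"}))
```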