What is the EU AI Act?
The EU AI Act is a regulation that sets rules for AI systems in the European market. If you sell or deploy AI in Europe, it applies to you.
The regulation entered into force on August 1, 2024. Requirements are phasing in over three years, with most high-risk obligations taking effect August 2, 2026.
The core idea: different AI systems pose different risks, so they get different rules. A spam filter doesn't need the same oversight as an AI that screens job applicants.
Key point: The AI Act applies to anyone who provides or deploys AI systems in the EU market, regardless of where the company is based.
Risk classification
The Act sorts AI systems into four tiers based on risk. Your classification determines what rules apply.
Prohibited AI
Banned entirely in the EU:
- Social scoring by governments
- Real-time biometric identification in public spaces (with limited exceptions for law enforcement)
- Emotion recognition in workplaces and schools
- AI that exploits vulnerabilities of specific groups
- Predictive policing based solely on profiling or personality traits
High-risk AI
Subject to strict requirements. Article 6 sets the classification rules; Annex III lists the standalone high-risk use cases:
- Biometric identification and categorization
- Critical infrastructure management (water, gas, electricity)
- Education and vocational training assessment
- Employment decisions: hiring, termination, task allocation, performance monitoring
- Access to essential services: credit scoring, insurance, social benefits
- Law enforcement
- Migration and border control
- Administration of justice
Important: Use case determines classification, not technology. Using GPT-4 as a customer-support chatbot triggers only limited-risk transparency obligations. Using the same model to screen job applicants is high-risk.
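Because the use case, not the underlying model, drives classification, a first-pass triage can be encoded as data. A minimal sketch, assuming paraphrased category strings (the keyword lists and function name are illustrative, not the Act's official text, and this is not legal advice):

```python
# Illustrative triage of an intended use into the Act's four tiers.
# Category strings are paraphrased from Articles 5 and 6 / Annex III.
PROHIBITED_PRACTICES = {
    "social scoring",
    "emotion recognition in workplace",
    "predictive policing by profiling",
}

ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential services and credit",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

def classify(intended_use: str) -> str:
    """Rough first-pass sort of a use case into a risk tier."""
    use = intended_use.lower()
    if any(p in use for p in PROHIBITED_PRACTICES):
        return "prohibited"
    if any(area in use for area in ANNEX_III_AREAS):
        return "high-risk"
    if "chatbot" in use or "generated content" in use:
        return "limited-risk"  # transparency obligations only
    return "minimal-risk"

# Same model, different uses, different tiers:
print(classify("Customer support chatbot"))                        # limited-risk
print(classify("Resume screening for employment and worker management"))  # high-risk
```

A real assessment needs legal review of each system against the full Annex III wording; a keyword match like this only flags candidates for closer inspection.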
Limited-risk AI
Requires transparency obligations:
- Chatbots must disclose they're AI
- AI-generated content must be labeled
- Emotion recognition systems must inform users
Minimal-risk AI
No specific requirements. This includes most business applications: spam filters, recommendation systems, AI-enabled games.
High-risk requirements
If your AI system is classified as high-risk, you need to meet seven categories of requirements before placing it on the market.
1. Risk management system (Article 9)
Run it continuously throughout the product lifecycle: identify and analyze known and foreseeable risks, implement mitigation measures, and test under realistic conditions.
2. Data governance (Article 10)
Training data must be relevant, representative, and reasonably error-free. Check for biases. Consider geographic, contextual, and behavioral gaps in your data.
3. Technical documentation (Article 11)
Document system design, development process, and testing procedures. This must be ready before you put the product on the market and kept updated throughout its lifecycle.
4. Record-keeping (Article 12)
Log AI system operations automatically. Enable traceability of decisions. Follow sector-specific retention requirements.
5. Transparency (Article 13)
Provide clear instructions for use. Document capabilities, limitations, human oversight measures, and expected accuracy levels.
6. Human oversight (Article 14)
Design for human intervention. Humans must be able to interrupt, override, or reverse outputs. They need to understand the system's limitations.
7. Accuracy, robustness, cybersecurity (Article 15)
Achieve appropriate accuracy for your use case. Build resilience against errors and adversarial inputs. Protect against unauthorized access.
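Of the seven requirements, record-keeping (Article 12) translates most directly into engineering practice: log every automated decision with enough context to trace it later. A minimal sketch using Python's standard logging module; the field names and log destination are our assumptions, not mandated by the Act:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative decision log for traceability (Article 12-style record-keeping).
logger = logging.getLogger("ai_decisions")
handler = logging.FileHandler("decisions.log")  # swap for a store meeting your retention rules
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str, model_version: str) -> dict:
    """Record one automated decision as a structured, timestamped entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,  # a reference ID, not raw personal data
        "output": output,
        "model_version": model_version,
    }
    logger.info(json.dumps(entry))
    return entry

log_decision("cv-screener-v2", "application-8841", "advance-to-interview", "2026.03.1")
```

Logging a reference ID rather than the applicant's data keeps the trail auditable without turning the log itself into a GDPR liability.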
Timeline and deadlines
| Date | What happens |
|---|---|
| Aug 1, 2024 | AI Act enters into force |
| Feb 2, 2025 | Prohibited AI practices banned; AI literacy obligations begin |
| Aug 2, 2025 | GPAI model rules apply |
| Aug 2, 2026 | High-risk AI system obligations apply |
| Aug 2, 2027 | Extended transition for certain embedded systems ends |
Penalties
Fines are significant and scale with company size; in each tier, the higher of the two figures applies:
- Prohibited AI violations: Up to EUR 35 million or 7% of global annual turnover
- High-risk non-compliance: Up to EUR 15 million or 3% of global turnover
- Incorrect information to authorities: Up to EUR 7.5 million or 1% of turnover
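Because the higher figure applies, large companies should budget against the percentage, not the fixed cap. A quick sketch of the arithmetic (the function name is ours):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of an AI Act fine: the higher of the fixed cap
    or the percentage of global annual turnover."""
    return max(fixed_cap_eur, pct * turnover_eur)

# A company with EUR 2 billion turnover, prohibited-practice tier:
cap = max_fine(2_000_000_000, 35_000_000, 0.07)
print(f"EUR {cap:,.0f}")  # EUR 140,000,000 — the 7% figure dwarfs the EUR 35M floor
```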
Getting started
Here's what to do now:
- Inventory your AI systems. List everything that could qualify as an AI system under the Act's definition.
- Classify each system. Use Article 6 and Annex III to determine risk levels.
- Gap analysis. For high-risk systems, map current practices against the seven requirement categories.
- Prioritize. Focus on systems launching soon or with the biggest compliance gaps.
- Start documenting. Technical documentation takes time. Start now.
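The first steps above (inventory, classify, prioritize) can start life as simple structured data. A sketch of one inventory record; the fields are our suggestion, not an official template:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI Act compliance inventory (illustrative fields)."""
    name: str
    intended_use: str   # the use case drives classification, not the model
    risk_tier: str      # "prohibited" | "high-risk" | "limited-risk" | "minimal-risk"
    launch_date: str
    gaps: list = field(default_factory=list)  # open items vs the seven requirements

inventory = [
    AISystemRecord("Spam filter", "email filtering", "minimal-risk", "live"),
    AISystemRecord("CV screener", "hiring decisions (Annex III)", "high-risk", "2026-06",
                   gaps=["technical documentation", "human oversight design"]),
]

# Prioritize: high-risk systems launching soonest, with the most gaps, first.
priority = sorted(
    (s for s in inventory if s.risk_tier == "high-risk"),
    key=lambda s: (s.launch_date, -len(s.gaps)),
)
print([s.name for s in priority])  # ['CV screener']
```

Even a spreadsheet with these columns beats having no inventory; the point is that classification and gap tracking become queryable rather than tribal knowledge.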
Need help? Subscribe to The Compliantist for weekly analysis and practical implementation guides.