
EU AI Act compliance guide for builders

Everything you need to know about the EU AI Act: risk classification, requirements for high-risk systems, compliance deadlines, and practical next steps. Written for CTOs, founders, and engineers.

What is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) sets rules for AI systems placed on the EU market. If you sell or deploy AI in Europe, it applies to you.

The regulation entered into force on August 1, 2024. Requirements are phasing in over three years, with most high-risk obligations taking effect August 2, 2026.

The core idea: different AI systems pose different risks, so they get different rules. A spam filter doesn't need the same oversight as an AI that screens job applicants.

Key point: The AI Act applies to anyone who provides or deploys AI systems in the EU market, regardless of where the company is based.

Risk classification

The Act sorts AI systems into four tiers based on risk. Your classification determines what rules apply.

Prohibited AI

Banned entirely in the EU (Article 5):

- Social scoring that leads to detrimental or unjustified treatment
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, with narrow exceptions
- Subliminal or purposefully manipulative techniques that materially distort behavior and cause significant harm
- Exploiting vulnerabilities related to age, disability, or social or economic situation
- Emotion recognition in workplaces and schools, except for medical or safety reasons
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Predictive policing based solely on profiling a person
- Biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation

High-risk AI

Subject to strict requirements. Defined in Article 6 and Annex III:

- AI used as a safety component of products covered by EU product-safety legislation (medical devices, machinery, vehicles)
- Biometric identification and categorization
- Critical infrastructure (safety components for energy, water, transport)
- Education and vocational training (admissions, exam scoring)
- Employment and worker management (recruitment, CV screening, promotion decisions)
- Access to essential private and public services (credit scoring, insurance pricing, public benefits)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes

Important: Use case determines classification, not technology. A GPT-4-powered customer support bot is limited-risk (it just has to disclose that it's AI). The same model used to screen job applicants is high-risk.

Limited-risk AI

Requires transparency obligations (Article 50):

- Chatbots and other systems that interact with people must disclose that the user is talking to an AI
- AI-generated or manipulated content (deepfakes) must be labeled as artificial
- Emotion recognition and biometric categorization systems must inform the people exposed to them
- Providers of generative systems must mark synthetic output in a machine-readable way

Minimal-risk AI

No specific requirements. This includes most business applications: spam filters, recommendation systems, AI-enabled games.
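
To make the four tiers concrete, here's a minimal sketch of how you might tag systems in an internal inventory. The use cases listed and the default-to-high fallback are illustrative choices of mine, not a legal determination:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5 practices: banned outright
    HIGH = "high"              # Article 6 / Annex III use cases
    LIMITED = "limited"        # Article 50 transparency obligations
    MINIMAL = "minimal"        # no specific AI Act requirements

# Illustrative mapping from *use case* to tier. Keys are examples, not an
# exhaustive or authoritative list -- classification always depends on the
# specific deployment context.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening": RiskTier.HIGH,         # employment, Annex III
    "credit_scoring": RiskTier.HIGH,       # essential services, Annex III
    "customer_chatbot": RiskTier.LIMITED,  # must disclose it's AI
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a known use case; default to HIGH so that
    unknown cases get reviewed rather than waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("cv_screening"))  # RiskTier.HIGH
```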

High-risk requirements

If your AI system is classified as high-risk, you need to meet seven categories of requirements before placing it on the market.

1. Risk management system (Article 9)

Run risk management continuously through the product lifecycle. Identify and analyze known and foreseeable risks, implement mitigation measures, and test under realistic conditions.
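
One way to make "continuously" real is to keep the risk register versioned next to the code it covers. A minimal sketch, with field names of my own invention rather than the Act's:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str
    severity: str    # e.g. "low" / "medium" / "high"
    likelihood: str
    mitigation: str
    last_reviewed: date
    test_evidence: list[str] = field(default_factory=list)  # links to test runs

# One register per AI system, re-reviewed on every significant change.
register = [
    Risk(
        description="Model ranks candidates lower for non-native phrasing",
        severity="high",
        likelihood="medium",
        mitigation="Bias testing on a curated multilingual CV set per release",
        last_reviewed=date(2025, 3, 1),
        test_evidence=["ci/bias-suite/run-142"],  # hypothetical reference
    ),
]
```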

2. Data governance (Article 10)

Training data must be relevant, representative, and reasonably error-free. Check for biases. Consider geographic, contextual, and behavioral gaps in your data.
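
A simple representativeness check can catch geographic or demographic gaps early. This sketch compares the distribution of an attribute in your training data against a reference distribution; the 5% tolerance and the example data are assumptions:

```python
from collections import Counter

def representation_gaps(records: list[dict], attribute: str,
                        reference: dict[str, float], tolerance: float = 0.05):
    """Flag groups whose share of the training data deviates from a
    reference population share by more than `tolerance` (absolute)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical example: regional mix of the training set vs. target market.
data = [{"region": "DE"}] * 800 + [{"region": "FR"}] * 150 + [{"region": "ES"}] * 50
print(representation_gaps(data, "region", {"DE": 0.5, "FR": 0.3, "ES": 0.2}))
```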

3. Technical documentation (Article 11)

Document system design, development process, and testing procedures. This must be ready before you put the product on the market and kept updated throughout its lifecycle.
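
Treating the documentation as code keeps it versioned alongside the system it describes. A minimal sketch of the kind of structure you might maintain; the fields paraphrase a subset of Annex IV and are not the Act's wording:

```python
from dataclasses import dataclass

@dataclass
class TechnicalDocumentation:
    """Lives in the repo, updated with each release."""
    system_description: str        # intended purpose, versions, hardware
    development_process: str       # design choices, training methodology
    data_sources: list[str]        # datasets, provenance, cleaning steps
    testing_procedures: str        # metrics, thresholds, validation results
    human_oversight_measures: str  # how operators can intervene
    changelog: list[str]           # updates since market placement
```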

4. Record-keeping (Article 12)

Log AI system operations automatically. Enable traceability of decisions. Follow sector-specific retention requirements.
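
A minimal sketch of automatic, structured decision logging using only Python's standard library; the field names and example values are illustrative:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_decision(model_version: str, inputs: dict, output, confidence: float) -> str:
    """Emit one structured, append-only record per model decision so that
    individual outcomes can be traced back later."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,  # or a hash/reference if the inputs are sensitive
        "output": output,
        "confidence": confidence,
    }
    log.info(json.dumps(record))
    return record["event_id"]

event_id = log_decision("screener-2.3.1", {"application_id": "A-1042"},
                        "shortlist", 0.87)
```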

5. Transparency (Article 13)

Provide clear instructions for use. Document capabilities, limitations, human oversight measures, and expected accuracy levels.
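
Instructions for use can ship as a machine-readable artifact with each release. A hypothetical stub (every value here is invented for illustration):

```python
# Kept next to the release; a subset of what Article 13 asks providers
# to hand to deployers.
INSTRUCTIONS_FOR_USE = {
    "intended_purpose": "Rank inbound job applications for human review",
    "known_limitations": [
        "Not validated for CVs in languages other than English and German",
        "Accuracy degrades for career gaps longer than five years",
    ],
    "expected_accuracy": {"precision_at_10": 0.82,
                          "evaluated_on": "holdout-2025Q1"},
    "human_oversight": "All rejections must be confirmed by a recruiter",
    "contact": "compliance@example.com",
}
```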

6. Human oversight (Article 14)

Design for human intervention. Humans must be able to interrupt, override, or reverse outputs. They need to understand the system's limitations.
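
One common pattern is a confidence-gated human-in-the-loop flow: the model proposes, and a person can always reverse. A minimal sketch with an assumed threshold:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # "shortlist", "reject", "needs_human_review"
    score: float
    decided_by: str     # "model" or an operator ID
    overridable: bool = True

def screen(score: float, auto_threshold: float = 0.8) -> Decision:
    """Confident model outputs become reversible decisions; everything
    below the (illustrative) threshold is routed to a person."""
    if score >= auto_threshold:
        return Decision("shortlist", score, decided_by="model")
    return Decision("needs_human_review", score, decided_by="model")

def override(decision: Decision, operator_id: str, new_outcome: str) -> Decision:
    """Any operator can reverse a model decision at any time."""
    assert decision.overridable
    return Decision(new_outcome, decision.score, decided_by=operator_id)

d = screen(0.91)                       # auto decision, still reversible
d = override(d, "recruiter-7", "reject")
```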

7. Accuracy, robustness, cybersecurity (Article 15)

Achieve appropriate accuracy for your use case. Build resilience against errors and adversarial inputs. Protect against unauthorized access.
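
Robustness is testable. This sketch measures prediction stability under crude input noise; a real suite would use domain-specific perturbations and adversarial examples:

```python
import random

def perturb(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Randomly drop characters to simulate noisy input (a crude
    stand-in for a real adversarial test suite)."""
    rng = random.Random(seed)
    return "".join(c for c in text if rng.random() > rate)

def robustness_check(model_fn, samples: list[str]) -> float:
    """Fraction of samples whose prediction is stable under perturbation."""
    stable = sum(model_fn(s) == model_fn(perturb(s)) for s in samples)
    return stable / len(samples)

# Hypothetical usage with any callable that maps text to a label:
# score = robustness_check(my_classifier, validation_texts)
```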

Timeline and deadlines

Date           What happens
Aug 1, 2024    AI Act enters into force
Feb 2, 2025    Prohibited AI practices banned; AI literacy obligations begin
Aug 2, 2025    General-purpose AI (GPAI) model rules apply
Aug 2, 2026    High-risk AI system obligations apply
Aug 2, 2027    Extended transition ends for high-risk AI embedded in regulated products

Penalties

Fines are significant and scale with company size:

- Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices
- Up to €15 million or 3% for violations of most other obligations, including the high-risk requirements
- Up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities

For SMEs and startups, each cap applies at whichever of the two amounts is lower.

Getting started

Here's what to do now:

  1. Inventory your AI systems. List everything that could qualify as an AI system under the Act's definition (a minimal inventory format is sketched after this list).
  2. Classify each system. Use Article 6 and Annex III to determine risk levels.
  3. Gap analysis. For high-risk systems, map current practices against the seven requirement categories.
  4. Prioritize. Focus on systems launching soon or with the biggest compliance gaps.
  5. Start documenting. Technical documentation takes time. Start now.
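
As a starting point for step 1, here's a minimal inventory format you can keep in version control; the fields and example rows are hypothetical:

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    name: str
    owner: str        # team accountable for compliance
    use_case: str     # what the system actually does in production
    risk_tier: str    # from your Article 6 / Annex III analysis
    launch_date: str
    gaps: str         # open items against the seven requirement areas

inventory = [
    AISystemRecord("cv-screener", "hiring-platform", "rank job applications",
                   "high", "2026-01-15", "logging, human oversight"),
    AISystemRecord("support-bot", "cx", "answer customer questions",
                   "limited", "live", "AI disclosure banner"),
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(inventory[0]).keys())
    writer.writeheader()
    writer.writerows(asdict(r) for r in inventory)
```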

Need help? Subscribe to The Compliantist for weekly analysis and practical implementation guides.

Stay compliant out there

Get weekly analysis of AI regulation. Written for builders, not lawyers.