# The Compliantist — Complete documentation

> AI regulation for builders. Expert knowledge. Human delivery.

**Domain:** compliantist.com
**Author:** Tomica Cesar
**Updated:** 2026-01-20

---

## About The Compliantist

I spent way too many hours reading the EU AI Act. This publication is where I share what I learned so you don't have to do the same. We're honest about the journey (the late nights, the Annex deep-dives) and confident about what we found. We respect the regulation. We just don't respect boring explanations of it.

### Who this is for

- CTOs evaluating compliance requirements
- Founders building AI products in regulated markets
- Product managers scoping compliance features
- Engineers implementing governance systems
- Compliance officers new to AI regulation

### Who this is not for

- Lawyers seeking legal analysis
- Policy academics researching regulation
- General audiences curious about AI

---

## EU AI Act overview

The EU AI Act regulates AI systems sold or used in Europe. It entered into force August 1, 2024, with requirements phasing in through August 2027.
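The phased schedule is easy to lose track of, so here's a minimal sketch of it as a date lookup. The dates and labels come from the Act's published schedule (they also appear in the timeline tables below); the function and constant names are illustrative, not from any official tooling:

```python
from datetime import date

# EU AI Act milestones, per the Act's phased schedule
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibited practices banned; AI literacy required"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "High-risk obligations apply"),
    (date(2027, 8, 2), "Extended transition ends for certain embedded systems"),
]

def obligations_in_effect(on: date) -> list[str]:
    """Return every milestone already in effect on the given date."""
    return [label for when, label in MILESTONES if when <= on]

# Example: as of January 2026, prohibited-practice and GPAI rules
# already apply, but high-risk obligations have not yet kicked in.
for label in obligations_in_effect(date(2026, 1, 20)):
    print(label)
```

Nothing fancy, but it makes the key point concrete: what you must comply with depends on the date and on which tier your system falls into, which is where the risk classification below comes in.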
### Risk classification

The Act sorts AI systems into four tiers:

**Prohibited AI (Article 5)** — Banned entirely:

- Social scoring by governments
- Real-time biometric identification in public spaces (with exceptions)
- Emotion recognition in workplaces and schools
- AI exploiting vulnerabilities of specific groups
- Predictive policing based on profiling

**High-risk AI (Article 6 + Annex III)** — Strict requirements apply:

- Biometric identification and categorization
- Critical infrastructure management
- Education and vocational training assessment
- Employment decisions (hiring, termination, task allocation)
- Access to essential services (credit, insurance, social benefits)
- Law enforcement
- Migration and border control
- Administration of justice

**Limited-risk AI** — Transparency required:

- Chatbots must disclose they're AI
- AI-generated content must be labeled
- Emotion recognition systems must inform users

**Minimal-risk AI** — No specific requirements:

- AI-enabled video games
- Spam filters
- Most business applications

### High-risk requirements

If your AI system is high-risk, you need to meet these seven requirements:

**Risk management (Article 9)**
Run continuously through the product lifecycle. Identify and analyze risks, implement mitigation measures, test under real conditions.

**Data governance (Article 10)**
Training data must be relevant, representative, and reasonably error-free. Check for biases. Consider geographic, contextual, behavioral, and functional gaps.

**Technical documentation (Article 11)**
Document design, development process, and testing procedures. Have this ready before you put the product on the market. Keep it updated.

**Record-keeping (Article 12)**
Log AI system operations automatically. Enable traceability of decisions. Follow sector-specific retention requirements.

**Transparency (Article 13)**
Provide clear instructions for use. Document capabilities, limitations, human oversight measures, and expected accuracy.
**Human oversight (Article 14)**
Design for human intervention. Humans must be able to interrupt, override, or reverse outputs. They need to understand the system's limitations.

**Accuracy, robustness, cybersecurity (Article 15)**
Achieve appropriate accuracy for your use case. Build resilience against errors. Protect against unauthorized access.

### Compliance timeline

- **August 1, 2024:** Act enters into force
- **February 2, 2025:** Prohibited practices banned; AI literacy required
- **August 2, 2025:** GPAI model rules apply
- **August 2, 2026:** High-risk obligations apply
- **August 2, 2027:** Extended transition for certain embedded systems

### Penalties

- Up to EUR 35 million or 7% of global turnover for prohibited AI violations
- Up to EUR 15 million or 3% for high-risk non-compliance
- Up to EUR 7.5 million or 1% for providing incorrect information

---

## ISO 42001 overview

ISO/IEC 42001:2023 is a standard for AI Management Systems. It's voluntary, but certification bodies are now offering it and some companies use it to show due diligence.

### What ISO 42001 covers

**Leadership and commitment** — Top management accountability, AI policy, roles and responsibilities.

**Planning** — AI-specific risk assessment, objectives, how to achieve them.

**Support** — Resources, competence requirements, awareness, documentation.

**Operation** — Development processes, third-party considerations, impact assessment, lifecycle management.

**Performance evaluation** — Monitoring, internal audit, management review.

**Improvement** — Handling nonconformities, continual improvement.

### How ISO 42001 relates to the EU AI Act

| Aspect | ISO 42001 | EU AI Act |
|--------|-----------|-----------|
| Nature | Voluntary standard | Legal requirement |
| Scope | Global | EU market |
| Focus | Management system | Product compliance |
| Certification | Available | Not applicable |
| Penalties | None | Up to 7% turnover |

ISO 42001 gives you a management framework.
The EU AI Act requires specific product compliance. Having an AI management system makes meeting the Act's requirements easier, but certification alone doesn't equal compliance.

---

## Frequently asked questions

### General

**What is the EU AI Act?**
It's a European regulation that sets rules for AI systems based on their risk level. Entered into force August 1, 2024.

**When do I need to comply?**
Most high-risk AI obligations apply from August 2, 2026. Prohibited practices are already banned. GPAI rules apply from August 2025.

**Does the AI Act apply to my company?**
If you provide, deploy, import, or distribute AI systems in the EU market, it probably applies. Doesn't matter where you're based.

### High-risk classification

**How do I know if my AI system is high-risk?**
Check Article 6 and Annex III. High-risk includes AI for employment decisions, credit scoring, education assessment, law enforcement, and critical infrastructure. Use case determines classification, not the technology.

**Is using ChatGPT or other LLMs high-risk?**
Depends what you use it for. Customer support chatbot? Probably minimal-risk. Screening job applicants? High-risk. The application matters.

**What if my AI could be high-risk depending on how customers use it?**
You need to consider reasonably foreseeable use cases. If your AI is marketed for or commonly used in high-risk applications, the requirements likely apply.

### Compliance requirements

**What documentation do I need?**
High-risk systems need technical documentation covering design, development, testing, risk management, data governance, and intended purpose. Have it ready before market placement.

**Do I need third-party certification?**
Most high-risk systems can self-assess. Third-party assessment is required for biometric AI and AI used as safety components in regulated products.

**What is a conformity assessment?**
The process to verify your AI system meets requirements.
Review documentation, run tests, issue a declaration of conformity.

### ISO 42001

**Do I need ISO 42001 certification?**
No, it's voluntary. But certification demonstrates due diligence and can help with EU AI Act compliance.

**How long does certification take?**
Usually 6-12 months, depending on your existing systems and certification body availability.

**Does ISO 42001 certification mean I comply with the EU AI Act?**
No. ISO 42001 is a management system standard. The AI Act requires product compliance. A good management system helps, but it's not the same thing.

---

## Resources

### Official sources

- [EU AI Act full text](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689)
- [AI Act Explorer](https://artificialintelligenceact.eu/ai-act-explorer/)
- [EU AI Office](https://digital-strategy.ec.europa.eu/en/policies/ai-office)
- [ISO 42001:2023](https://www.iso.org/standard/42001)

### Compliance deadlines

| Date | What happens |
|------|--------------|
| Aug 1, 2024 | Act enters into force |
| Feb 2, 2025 | Prohibited AI banned; AI literacy required |
| Aug 2, 2025 | GPAI model obligations apply |
| Aug 2, 2026 | High-risk AI obligations apply |
| Aug 2, 2027 | Extended transition ends |

---

## Contact

**Website:** compliantist.com
**Newsletter:** Weekly analysis of AI regulation
**Email:** hello@compliantist.com

---

*Stay compliant out there.*

*— The Compliantist*