It's January 2026. You have about 7 months until the main EU AI Act deadline hits. That sounds like a lot of time. It's not.
I've mapped out what actually needs to happen before August 2, 2026—and the timeline is tighter than most teams realize. Here's what you need to know.
The key dates
The EU AI Act has a phased rollout. Some deadlines have already passed. Others are coming soon.
| Date | What's Required | Who's Affected |
|---|---|---|
| February 2, 2025 | Prohibited AI practices banned; AI literacy obligations | Everyone |
| August 2, 2025 | GPAI model rules; governance structures; penalties | GPAI providers, all organizations |
| August 2, 2026 | High-risk AI systems must comply | Providers and deployers of high-risk AI |
| August 2, 2027 | High-risk AI that is a safety component of products regulated under Annex I | Specific sectors (aviation, medical devices, etc.) |
What August 2, 2026 actually means
On August 2, 2026, the core requirements for high-risk AI systems become enforceable. That means:
- Risk management system must be operational (Article 9)
- Data governance requirements must be met (Article 10)
- Technical documentation must be complete (Article 11)
- Record-keeping and logging must be in place (Article 12)
- Transparency and user instructions must be provided (Article 13)
- Human oversight mechanisms must be implemented (Article 14)
- Accuracy, robustness, and cybersecurity requirements must be met (Article 15)
- Quality management system must be established (Article 17)
- Conformity assessment must be completed (Article 43)
- EU database registration must be completed (Article 49)
That's not a checklist you complete in a weekend. Each of those items represents substantial work.
A realistic implementation timeline
Based on our experience and conversations with other companies going through this, here's what the timeline actually looks like:
Phase 1: Assessment (4-6 weeks)
- Inventory all AI systems in your organization
- Classify each system by risk level
- Identify gaps against requirements
- Prioritize based on risk and deadline
This phase is often underestimated. Most companies don't have a complete inventory of their AI systems. "We use a few ML models" often turns into "we found 23 systems that might qualify as AI" when you actually look.
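If you want the inventory to be concrete from day one, give it structure. Here's a minimal Python sketch of one way to track systems and their classification; the risk tiers mirror the Act's categories, but the field names and the example entry are illustrative, not anything the regulation prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    """Risk tiers under the EU AI Act."""
    PROHIBITED = "prohibited"  # Article 5 practices
    HIGH = "high"              # Annex III use cases or Annex I safety components
    LIMITED = "limited"        # transparency obligations (Article 50)
    MINIMAL = "minimal"        # no specific obligations

@dataclass
class AISystem:
    """One inventory row. Field names are illustrative."""
    name: str
    owner_team: str
    purpose: str
    annex_iii_category: str | None = None  # e.g. "employment"
    risk_level: RiskLevel = RiskLevel.MINIMAL
    gaps: list[str] = field(default_factory=list)

def high_risk_systems(inventory: list[AISystem]) -> list[AISystem]:
    """The systems that face the August 2, 2026 deadline."""
    return [s for s in inventory if s.risk_level is RiskLevel.HIGH]

inventory = [
    AISystem(
        name="resume-screener",
        owner_team="HR Tech",
        purpose="Rank inbound job applications",
        annex_iii_category="employment",
        risk_level=RiskLevel.HIGH,
        gaps=["no decision logging", "no technical documentation"],
    ),
]
print([s.name for s in high_risk_systems(inventory)])  # ['resume-screener']
```

Even a spreadsheet works. The point is that every system has an owner, a classification, and a gap list before Phase 2 starts.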
Phase 2: Documentation (6-10 weeks)
- Create technical documentation per Article 11
- Document training data characteristics
- Write instructions for use per Article 13
- Establish record-keeping procedures
Documentation takes longer than anyone expects. A single AI system's technical documentation can run 50-100 pages if done properly.
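One way to keep Article 11 documentation tractable is to treat it as a checklist you can diff drafts against. Here's a minimal sketch; the section names paraphrase Annex IV, which specifies the required contents, so verify them against the Annex text before relying on this outline.

```python
# Outline sections paraphrasing Annex IV (the required contents of Article 11
# technical documentation). A paraphrase, not the authoritative list.
ANNEX_IV_OUTLINE = {
    "general_description": "Intended purpose, provider, versions, hardware, market form",
    "detailed_description": "Development process, design choices, architecture, data requirements",
    "monitoring_and_control": "Capabilities, limitations, foreseeable misuse, human oversight measures",
    "performance_metrics": "Accuracy, robustness, cybersecurity metrics and why they're appropriate",
    "risk_management": "Summary of the Article 9 risk management system",
    "lifecycle_changes": "Relevant changes made to the system over its lifecycle",
    "standards_applied": "Harmonised standards or other solutions used to meet the requirements",
    "declaration_of_conformity": "Copy of the EU declaration of conformity",
    "post_market_monitoring": "Monitoring plan per Article 72",
}

def missing_sections(draft: dict[str, str]) -> list[str]:
    """Sections absent or empty in a draft documentation package."""
    return [k for k in ANNEX_IV_OUTLINE if not draft.get(k, "").strip()]

print(missing_sections({"general_description": "Resume screener v2, ..."}))
```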
Phase 3: System Implementation (8-12 weeks)
- Implement risk management processes
- Build or enhance logging systems
- Create human oversight interfaces
- Establish quality management system
- Implement bias monitoring
This is the engineering work. Building proper oversight interfaces, implementing comprehensive logging, establishing monitoring—this is where the real effort goes.
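To make the logging requirement concrete: Article 12 calls for automatic recording of events over the system's lifetime, enough to trace its functioning. Here's a minimal sketch of a per-decision, append-only JSON-lines log; the schema is illustrative, not mandated.

```python
import json
import time
import uuid

def log_decision(log_path: str, system_id: str, input_ref: str,
                 output: str, operator: str) -> None:
    """Append one timestamped decision record as a JSON line."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,  # which high-risk system produced this output
        "input_ref": input_ref,  # a reference to the input, not the raw data
        "output": output,        # the decision, score, or ranking produced
        "operator": operator,    # the human responsible for oversight
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "resume-screener", "application-4812",
             "shortlisted", "hr-reviewer-2")
```

Retention periods and exactly which events to capture depend on the system. The structural point is append-only records that a human or an auditor can replay later.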
Phase 4: Verification (4-6 weeks)
- Conduct testing per Article 9 requirements
- Complete conformity assessment
- Address any gaps found
- Prepare for registration
Testing always reveals issues. Build in time to fix them.
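One pattern that helps here is a release gate: an automated check that blocks deployment when declared performance isn't met. A minimal sketch, assuming you've already defined an accuracy threshold and built a perturbed test set; the numbers are placeholders, since Articles 9 and 15 require you to choose and justify your own metrics.

```python
def accuracy(model, dataset) -> float:
    """Fraction of (input, label) pairs the model gets right."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def release_gate(model, clean_set, perturbed_set,
                 min_accuracy: float = 0.90,  # placeholder threshold
                 max_drop: float = 0.05) -> bool:
    """Pass only if the model meets its declared accuracy on clean data
    and doesn't degrade too far on perturbed (robustness) data."""
    clean = accuracy(model, clean_set)
    perturbed = accuracy(model, perturbed_set)
    return clean >= min_accuracy and (clean - perturbed) <= max_drop
```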
Phase 5: Registration (2-4 weeks)
- Register in EU database
- Submit required documentation
- Finalize deployment approvals
Total: 6-9 months
If you're starting today (January 2026), you're cutting it close. A realistic implementation takes 6-9 months for a moderately complex high-risk AI system. More complex systems or larger portfolios take longer.
What happens if you're not ready?
The penalties are significant:
| Violation | Maximum Fine (whichever is higher) |
|---|---|
| Prohibited AI practices | €35 million or 7% of global revenue |
| High-risk AI non-compliance | €15 million or 3% of global revenue |
| Incorrect information to authorities | €7.5 million or 1% of global revenue |
For SMEs and startups, each fine is instead capped at whichever of the two amounts is lower. A startup with €10M revenue faces up to €300K (3% of turnover) for high-risk non-compliance. That's meaningful.
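The arithmetic is worth making explicit. A minimal sketch, on my reading that the cap is the higher of the two amounts for most companies and the lower for SMEs; verify that against Article 99 before relying on it:

```python
def max_fine(revenue_eur: float, fixed_cap_eur: float, pct: float,
             is_sme: bool) -> float:
    """Fine cap: higher of the two amounts normally, lower of the two for SMEs."""
    pct_amount = revenue_eur * pct
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Startup with EUR 10M revenue, high-risk non-compliance (EUR 15M / 3% tier):
print(max_fine(10_000_000, 15_000_000, 0.03, is_sme=True))   # 300000.0
print(max_fine(10_000_000, 15_000_000, 0.03, is_sme=False))  # 15000000.0
```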
Beyond fines, there's the operational impact. Non-compliant AI systems may need to be taken offline. If your high-risk AI is core to your product, that's a business continuity issue.
How to prioritize
You probably can't make everything compliant by August 2026. Here's how to prioritize (a code sketch of this triage follows the list):
- Revenue-critical high-risk systems first. If a system is classified as high-risk and generates significant revenue, it's your top priority. Non-compliance means potential shutdown.
- Customer-facing systems second. High-risk systems that affect customers directly create liability exposure. Prioritize these.
- Internal systems third. Internal high-risk AI (like HR screening tools) still needs compliance, but enforcement risk is slightly lower than for customer-facing systems.
- Non-high-risk systems last. Limited-risk systems need transparency measures but face lower penalties. Handle these after high-risk compliance is secured.
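Here's that triage as a sketch, assuming each system is a dict with hypothetical fields; adapt it to whatever inventory format you actually use.

```python
def priority_key(system: dict) -> tuple:
    """Sort high-risk first, then by revenue at stake, then customer-facing."""
    return (
        system["risk_level"] != "high",            # high-risk systems first
        -system.get("annual_revenue_eur", 0),      # revenue-critical next
        not system.get("customer_facing", False),  # customer-facing before internal
    )

systems = [
    {"name": "resume-screener", "risk_level": "high", "customer_facing": False},
    {"name": "credit-scorer", "risk_level": "high",
     "annual_revenue_eur": 4_000_000, "customer_facing": True},
    {"name": "support-chatbot", "risk_level": "limited", "customer_facing": True},
]
print([s["name"] for s in sorted(systems, key=priority_key)])
# ['credit-scorer', 'resume-screener', 'support-chatbot']
```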
Key takeaways
- August 2, 2026 is the main deadline: High-risk AI requirements become enforceable. That's 7 months away.
- Realistic implementation takes 6-9 months: Start now or you won't make it.
- Penalties are real: Up to €15 million or 3% of global revenue for high-risk non-compliance.
- Prioritize ruthlessly: Focus on revenue-critical, customer-facing high-risk systems first.
What to do this week
- Inventory your AI systems. Every model, every feature that uses ML. Get a complete list.
- Classify risk levels. Run each system against the Annex III criteria. Know what's high-risk.
- Block implementation time. Put holds on calendars. This work won't happen without dedicated time.
Stay compliant out there.
— The Compliantist