ISO 42001 costs about CHF 180 (~€185) to purchase from ISO. Before you buy it, let me save you some time: 60% of the requirements are standard ISO management system requirements you'll recognize if you've ever done ISO 27001, ISO 9001, or ISO 14001.
The other 40% is the interesting part—AI-specific controls and requirements that help you govern AI systems responsibly. That's what I'll focus on here.
I've been mapping ISO 42001 against our existing processes at Queli. Here's what I found: we had about 70% of the requirements covered already through good engineering practice. The remaining 30% required deliberate work. Your mileage will vary, but the standard is more achievable than it looks.
What is ISO 42001?
ISO/IEC 42001:2023 is the world's first international standard for AI management systems. Published in December 2023, it provides a framework for organizations to establish, implement, maintain, and continually improve an AI Management System (AIMS).
Three things to understand upfront:
- It's certifiable. Unlike guidelines or best practices, you can get third-party certification to ISO 42001. An accredited body audits your management system and confirms compliance.
- It's risk-based. The standard doesn't tell you exactly what to do. It tells you to identify risks and address them appropriately for your context.
- It's internationally recognized. Unlike the EU AI Act (which is regional), ISO 42001 applies globally. Certification carries weight across jurisdictions.
The 10-clause structure
ISO 42001 follows the Annex SL structure used by all modern ISO management system standards. That means 10 clauses:
| Clause | Topic | ISO-standard or AI-specific? |
|---|---|---|
| 1-3 | Scope, references, terms | Standard |
| 4 | Context of the organization | Standard + AI adaptations |
| 5 | Leadership | Standard |
| 6 | Planning | AI-specific risk assessment |
| 7 | Support | Standard + AI competence |
| 8 | Operation | Heavily AI-specific |
| 9 | Performance evaluation | Standard |
| 10 | Improvement | Standard |
Clause 8 (Operation) is where ISO 42001 diverges most significantly from other ISO standards. Together with the Annex A controls it pulls in, it covers AI-specific requirements for:
- AI system impact assessment
- AI system lifecycle processes
- Data management
- Documentation of AI systems
What you probably already have
If you're running a moderately mature engineering organization, you likely have these in place:
Version control and documentation
Git, README files, architecture decision records. ISO 42001 wants documented processes—it doesn't prescribe the format. Your existing version control and documentation practices probably satisfy the documentation requirements.
Access controls
If you're controlling who can push to production, who can access training data, and who can modify models, you're addressing the access control requirements. Formalize what you're already doing.
Testing and validation
CI/CD pipelines with automated tests. Model evaluation before deployment. A/B testing in production. These practices map to the "verification and validation" requirements in Clause 8.
Incident response
If you have an on-call rotation and a process for handling production incidents, you're partway to the "nonconformity and corrective action" requirements in Clause 10.
Regular retrospectives or reviews
Sprint retrospectives, quarterly planning, annual reviews—these map to the "management review" requirements in Clause 9.
What's actually AI-specific
Here's where ISO 42001 adds requirements beyond standard management systems. These are the areas where most teams need deliberate work.
AI system impact assessment
Clause 6.1.4 requires an AI-specific risk assessment that considers impacts on individuals, groups, and society. This goes beyond technical risk (will the model work?) to societal risk (could it cause harm?).
Key areas to assess:
- Bias and fairness: Does the system treat different groups equitably?
- Privacy: What personal data is processed, and how is it protected?
- Transparency: Can users understand how decisions are made?
- Human autonomy: Does the system inappropriately replace human judgment?
- Reliability: What happens when the system fails?
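One way to make the assessment concrete is to track a score per dimension in code, so an incomplete assessment fails loudly instead of passing silently. A minimal Python sketch; the `ImpactAssessment` class, the 1-5 scoring scale, and the treatment threshold are illustrative choices, not formats the standard prescribes:

```python
from dataclasses import dataclass, field

# Dimensions drawn from the list above; the structure itself is
# illustrative, not a format mandated by ISO 42001.
DIMENSIONS = ("bias_fairness", "privacy", "transparency",
              "human_autonomy", "reliability")

@dataclass
class ImpactAssessment:
    system_name: str
    scores: dict = field(default_factory=dict)  # dimension -> 1 (negligible) .. 5 (severe)
    notes: dict = field(default_factory=dict)   # dimension -> rationale

    def record(self, dimension: str, score: int, note: str) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.scores[dimension] = score
        self.notes[dimension] = note

    def requires_treatment(self, threshold: int = 3) -> list:
        """Dimensions at or above the threshold, which need a documented risk treatment."""
        missing = [d for d in DIMENSIONS if d not in self.scores]
        if missing:
            raise ValueError(f"assessment incomplete, unscored: {missing}")
        return [d for d in DIMENSIONS if self.scores[d] >= threshold]
```

The completeness check in `requires_treatment` is the point: every dimension gets assessed, and the high-impact findings feed your risk treatment rather than sitting in a document nobody reads.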
Data quality management
The data controls in Annex A (control group A.7, data for AI systems) require processes for managing data used in AI systems. This includes:
- Data acquisition and collection
- Data labeling and annotation
- Data quality assessment
- Data bias evaluation
- Data preparation and preprocessing
Most teams do some of this, but informally. ISO 42001 wants it documented and repeatable.
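"Documented and repeatable" can mean checks that run in your data pipeline rather than pages in a binder. A minimal Python sketch of two such checks; the function names and the choice of metrics are hypothetical, not anything the standard specifies:

```python
from collections import Counter

def completeness(records: list, required_fields: list) -> float:
    """Fraction of records where every required field is present and non-empty."""
    ok = sum(1 for r in records
             if all(r.get(f) not in (None, "") for f in required_fields))
    return ok / len(records)

def group_balance(records: list, group_field: str) -> float:
    """Smallest-to-largest group ratio: a crude proxy for representation bias."""
    counts = Counter(r[group_field] for r in records if group_field in r)
    return min(counts.values()) / max(counts.values())
```

Run checks like these on every training-data refresh and log the results, and you have evidence of a repeatable data quality process rather than a one-off cleanup.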
Model lifecycle documentation
For each AI system, you need to document:
- The intended purpose and scope
- Training data characteristics
- Model architecture and parameters
- Validation results and performance metrics
- Known limitations
- Deployment conditions
- Monitoring approach
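A lightweight way to make this repeatable is a record template whose fields mirror the list above, rendered into your existing docs repo. A Python sketch; the `ModelRecord` structure is an illustration, not a format the standard mandates:

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelRecord:
    # Fields mirror the documentation list above.
    intended_purpose: str
    training_data: str
    architecture: str
    validation_results: str
    known_limitations: str
    deployment_conditions: str
    monitoring_approach: str

    def to_markdown(self) -> str:
        """Render the record as a markdown section for version control."""
        lines = ["## Model record"]
        for key, value in asdict(self).items():
            lines.append(f"- **{key.replace('_', ' ').title()}**: {value}")
        return "\n".join(lines)
```

Because the fields are required by the type, a record with a missing section won't even construct; that is the formalization ISO 42001 is after.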
Responsible AI objectives
Clause 6.2 requires you to establish "AI objectives" that address responsible AI principles. These objectives must be measurable and monitored. "Be ethical" isn't an objective. "Achieve less than 5% demographic disparity in hiring recommendations, measured quarterly" is.
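An objective like that can be checked mechanically. A minimal sketch, assuming recommendations are logged as (group, recommended) pairs; the disparity metric here is simply max minus min positive-recommendation rate across groups, one of several possible fairness measures, and both function names are hypothetical:

```python
from collections import defaultdict

def demographic_disparity(recommendations: list) -> float:
    """Max minus min positive-recommendation rate across groups.

    `recommendations` is a list of (group, recommended: bool) pairs.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in recommendations:
        totals[group] += 1
        positives[group] += recommended
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def objective_met(recommendations: list, threshold: float = 0.05) -> bool:
    """The measurable objective: disparity below the stated threshold."""
    return demographic_disparity(recommendations) < threshold
```

Wire a check like this into your quarterly review and the objective monitors itself; the number either clears the threshold or it doesn't.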
The certification path
If you decide to pursue certification, here's the typical path:
| Stage | Duration | Activities |
|---|---|---|
| Gap analysis | 2-4 weeks | Assess current state against requirements |
| Implementation | 3-6 months | Build missing processes, create documentation |
| Internal audit | 1-2 weeks | Verify your system works as documented |
| Management review | 1 day | Leadership confirms readiness |
| Stage 1 audit | 1-2 days | Auditor reviews documentation |
| Stage 2 audit | 2-5 days | Auditor verifies implementation |
Total timeline: 4-9 months, depending on your starting point and the complexity of your AI systems.
Certification costs vary by organization size and complexity. For a small company (10-50 people, 2-3 AI systems), expect €15,000-30,000 for the certification process including auditor fees. Larger organizations pay more.
Do you need certification?
Maybe not. Certification makes sense if:
- Enterprise customers require it (increasingly common)
- You want presumption of conformity with the EU AI Act (once harmonized standards covering it are in place)
- You operate in multiple jurisdictions and want a single compliance framework
- You want external validation to build trust
If none of those apply, you might get 80% of the benefit by implementing the standard internally without certification. The discipline of following the framework improves your AI governance regardless of whether an auditor checks your work.
Key Takeaways
- 60% standard, 40% AI-specific: If you've done ISO 27001 or similar, you know most of the management system requirements already.
- You're closer than you think: Good engineering practices (version control, testing, documentation) cover significant ground. The gap is usually formalization, not creation.
- AI impact assessment is the new work: Evaluating bias, fairness, and societal impact isn't something most engineering teams do systematically. Build this capability.
- Certification is optional but strategic: It's not required, but EU AI Act harmonization may make it valuable. Consider your customers and jurisdictions.
What to Do Next
- Run a gap analysis: Compare your current AI development practices against the ISO 42001 clauses. Identify what you have and what's missing.
- Start with Clause 8: The AI-specific operational controls are where most teams have gaps. Focus there first.
- Document one AI system fully: Pick your most critical AI system and create complete documentation per ISO 42001 requirements. Use it as a template for others.
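Even the gap analysis can start as something this small: a checklist of clauses you mark as covered or not, updated as you close gaps. The clause names and coverage values below are illustrative, not an assessment of any real organization:

```python
# Illustrative gap tracker: clause -> whether you judge it covered today.
CLAUSES = {
    "4 Context of the organization": True,
    "5 Leadership": True,
    "6.1.4 AI system impact assessment": False,
    "6.2 AI objectives": False,
    "8 Operation (lifecycle, data, documentation)": False,
    "9 Performance evaluation": True,
    "10 Improvement": True,
}

def gap_report(clauses: dict) -> tuple:
    """Return (coverage fraction, list of clauses still needing work)."""
    gaps = [name for name, covered in clauses.items() if not covered]
    return 1 - len(gaps) / len(clauses), gaps
```

Keep the file in version control and the history of the checklist becomes evidence of continual improvement, which Clause 10 asks for anyway.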
Stay compliant out there.
— The Compliantist