ISO 42001 AI Management System Knowledge Base

ISO 42001 Explained

  • Overview and guidance on every ISO 42001 clause
  • Practical advice on responsible AI governance for developers and deployers

ISO 42001 AI Management System - Clause by Clause Guide

ISO 42001:2023 is the international standard for an artificial intelligence management system. It sets requirements for organisations that develop, provide or use products and services involving AI, and is the first management system standard dedicated to the responsible governance of AI. This section of the Knowledge Base covers every clause of the standard in plain language, explaining what each requirement means in practice and what an organisation needs to do to comply.

For the Annex A controls of ISO 42001:2023, see ISO 42001 Annex A Controls.

Who ISO 42001 applies to

The standard is deliberately broad. It applies to any organisation, of any size or sector, that is involved with AI in any of the following roles - producer, provider, partner, customer or user. This means it is just as relevant to a developer building an AI product as to a professional services firm using an off-the-shelf AI tool to draft client communications.

Most organisations adopting ISO 42001 sit in one of two camps. The first is the AI developer or provider, who is building, training or fine-tuning AI systems and selling or licensing them to others. The second is the AI deployer or user, who is buying AI tools from third parties and embedding them in their own operations. The standard treats both perspectives, with Annex B providing implementation guidance that distinguishes between developer-side and user-side controls where it matters.

What ISO 42001 covers

Unlike many AI frameworks, ISO 42001 is not a technical standard about how to build AI systems. It is a governance and management system standard concerned with how an organisation directs, controls and improves its use of AI over time.

The standard follows the same Annex SL high-level structure as ISO 9001, ISO 14001, ISO 27001 and other modern management system standards, which makes it possible to integrate ISO 42001 with an existing management system rather than building one in parallel.

The main clauses (4 to 10) follow the familiar Plan-Do-Check-Act structure - context and scope, leadership and policy, planning and risk management, support and competence, operational controls, performance evaluation, and continual improvement. The AI-specific elements that distinguish it from other standards are concentrated in Clause 6 (AI risk assessment, AI risk treatment, AI system impact assessment) and in Annex A, which lists 38 reference controls grouped into nine areas covering policy, accountability, resources, impact assessment, system life cycle, data, information for interested parties, responsible use, and third-party relationships.

Organisations select which Annex A controls to apply through a Statement of Applicability, in the same way as ISO 27001. That is one of two clear parallels with the information security standard: both use the Statement of Applicability mechanism, and both are joint ISO/IEC publications. ISO 42001 itself was developed by ISO/IEC JTC 1/SC 42, the same subcommittee responsible for the AI terminology and AI risk management standards.
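To make the Statement of Applicability concrete, here is a minimal sketch of how an SoA could be modelled as a data structure. This is purely illustrative: the entry fields, the example control IDs and titles, and the `summarise` helper are all assumptions, not anything prescribed by the standard, which only requires that each control's inclusion or exclusion be justified.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    control_id: str      # Annex A reference (placeholder values below)
    title: str
    applicable: bool
    justification: str   # why the control is included or excluded

def summarise(soa):
    """Count applicable vs excluded controls for a quick overview."""
    applicable = sum(1 for e in soa if e.applicable)
    excluded = sum(1 for e in soa if not e.applicable)
    return applicable, excluded

# Illustrative entries only - IDs and titles are placeholders,
# not quotations from Annex A.
soa = [
    SoAEntry("A.x.1", "AI policy", True, "Applies to all AI roles in scope"),
    SoAEntry("A.x.2", "Third-party AI suppliers", False,
             "No third-party AI components in scope this cycle"),
]
print(summarise(soa))  # (1, 1)
```

The point of the structure is the justification field: an auditor reads the SoA for the reasoning behind each exclusion, not just the tick-boxes.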

How ISO 42001 fits with other management system standards

Because ISO 42001 shares the Annex SL structure, it can be integrated with existing management systems rather than run separately. Organisations already certified to ISO 9001, ISO 27001 or ISO 14001 will recognise most of the clauses, and can extend their existing context analysis, policy framework, internal audit programme and management review process to cover AI without duplicating effort.

The standard itself acknowledges this and is designed to complement information security (ISO 27001), privacy (ISO 27701) and quality (ISO 9001) management systems. For organisations processing personal data through AI systems, integration with privacy and information security frameworks is particularly important, as the AI-specific impact assessment under Clause 6.1.4 can sit alongside or feed into existing data protection impact assessments.

The AI system impact assessment

The most distinctive AI-specific concept in ISO 42001 is the AI system impact assessment under Clause 6.1.4. This is a formal, documented process for identifying and evaluating the potential consequences of an AI system on individuals, groups of individuals and societies. It sits alongside but is separate from the AI risk assessment, which looks at risks to the organisation's objectives.

The impact assessment must consider the technical and societal context in which an AI system is deployed and the applicable jurisdictions. It feeds into the risk assessment and informs the controls selected through the Statement of Applicability. For organisations using high-impact AI - particularly anything affecting employment decisions, access to services, healthcare, criminal justice or democratic processes - the impact assessment is the central piece of evidence demonstrating responsible AI governance.

Anyone already certified to ISO 27001 will find this standard reassuringly familiar. The Annex A structure, the Statement of Applicability, the risk assessment and treatment cycle - it all maps across. The AI-specific elements add a layer rather than replace what is already in place.

Where it gets harder is the impact assessment under 6.1.4. Most organisations are used to assessing risk to themselves. ISO 42001 requires you to formally assess the impact your AI systems have on the people and groups they affect, and on society more broadly. That is a very different exercise.

When auditing against ISO 42001, the first thing I want to see is a clear statement of the organisation's role with respect to its AI systems. Are you a developer, a provider, a deployer, or some combination? Without that determination, nothing else in the management system makes sense, because the controls that apply to a developer are different from those that apply to a deployer.

I also expect to see the AI Process Register kept up to date. AI tools change quickly, and an organisation that adopted three new AI systems in the last quarter without updating its scope, risk assessment or impact assessment is going to struggle to demonstrate conformity.
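A register like this can be kept as a simple structured record with a review-age check. The sketch below is an assumption about how such a register might look - the field names, the 90-day review cycle, and the `stale_entries` helper are illustrative choices, not requirements of ISO 42001, which asks only that scope, risk and impact assessments stay current as systems change.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RegisterEntry:
    system_name: str
    role: str            # the organisation's role for this system,
                         # e.g. "developer" or "deployer"
    owner: str           # accountable individual
    last_reviewed: date  # last risk/impact assessment review

def stale_entries(register, max_age_days=90):
    """Return entries whose review is older than the chosen cycle."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [e for e in register if e.last_reviewed < cutoff]

# Illustrative entries only.
register = [
    RegisterEntry("Drafting assistant", "deployer", "Head of Ops",
                  date(2024, 1, 10)),
    RegisterEntry("Churn model", "developer", "ML Lead", date.today()),
]
for e in stale_entries(register):
    print(f"Review overdue: {e.system_name} (owner: {e.owner})")
```

Even this much structure answers the auditor's two opening questions per system: what role do we play, and when did we last look at it.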

We treat AI like any other piece of plant. We know what it does, we know who is responsible for it, we know how it can fail, and we have an eye on it day to day. That is what got us through our first surveillance visit, and it is what ISO 42001 is asking for.
