ISO 42001 is ISO’s relatively new standard on “AI Management Systems,” and companies have already begun announcing their certification to it. At the same time, the European Union is rolling out a massive new set of regulations known as the “AI Act.” There is some intersection between these, but only some; a lot of consultants are posting social media content and getting it all wrong, which will lead folks in the wrong direction. Let’s take a look at what it all really means.

ISO 42001

ISO 42001 is a somewhat poorly written, rushed bit of work by ISO to capitalize on AI's expanding dominance in the public consciousness and in the business environment. It also suffers from a confusing scoping of its intended audience: it is aimed both at organizations that merely use AI within their company and at those that build AI into their products and services. So you don't necessarily need to be selling AI-powered products to use ISO 42001; you may simply be using Copilot or ChatGPT in the workplace, and ISO says 42001 will apply. That's clumsy, as the two scenarios are vastly different, yet ISO shoved them both into the same standard.

As I have written before, ISO took the cybersecurity standard, ISO 27001, and copied it entirely to make ISO 42001. This was clearly done to get ISO 42001 published fast, so ISO could start making money. ISO then replaced ISO 27001's controls (defined in ISO 27002) with AI-related controls, which it included within the ISO 42001 standard as an Annex. The controls are "guidance," so the Annex is nominally optional; sort of. The actual requirements of ISO 42001 appear in clauses 4 through 10, like any other management system standard. However, just as in ISO 27001, you must craft a "statement of applicability" (SoA) defining the controls you are implementing for your organization and justifying any controls in the Annex that you deem not relevant. As a result, ISO 42001 is a mix of requirements and guidance, making it even clumsier. I've already seen consultants bungling this by claiming either that all the requirements in ISO 42001 are optional or that all the controls in the Annex are mandatory. Be careful.
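For those trying to picture what the SoA looks like in practice, here is a minimal, purely illustrative sketch in Python of the kind of record each SoA row has to capture: the Annex control reference, whether it applies, and the justification either way. The control IDs and wording below are invented placeholders, not quotations from the standard, so treat this as a rough model of the paperwork rather than anything authoritative.

    from dataclasses import dataclass

    @dataclass
    class SoAEntry:
        # One row of a Statement of Applicability (illustrative only).
        control_id: str       # Annex control reference (placeholder IDs below)
        applicable: bool      # is the control relevant to this organization?
        justification: str    # why it is included, or why it was excluded
        implementation: str   # how it is addressed, if applicable

    # Hypothetical entries; the real control numbers and text come from the
    # ISO 42001 Annex itself and must be reviewed control by control.
    soa = [
        SoAEntry("A.X.1", True,
                 "We embed AI features in the product we sell",
                 "AI policy approved by management, reviewed annually"),
        SoAEntry("A.X.2", False,
                 "No third-party AI suppliers in scope this cycle",
                 "Not applicable"),
    ]

    for entry in soa:
        status = "APPLICABLE" if entry.applicable else "EXCLUDED"
        print(f"{entry.control_id}: {status} - {entry.justification}")

The point is simply that every Annex control needs an explicit, documented decision; none of them can be silently ignored.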

The controls in the Annex are not bad, but they are likely to be outdated in mere months as the AI boom explodes and the technology expands. Organizations will also find them expensive to implement. Now, you will find consultants claiming they can do all of this in record time, but they will be lying. Just like the cybersecurity controls of ISO 27001, the AI controls in ISO 42001 require careful consideration and then real investment in technology and intelligence. This is not something you can template-kit your way into.

So, implementing ISO 42001 will be a heavy lift and not cheap.  It will also be highly dependent on hiring a consultant, unfortunately.

Finally, there is the problem with certification. It’s not at all clear how accreditation bodies like UKAS and ANAB are already accrediting certification bodies to sell ISO 42001, since the associated CB and AB rules are not even finished yet. For example, the rules governing CBs are defined in ISO 42006, which is still only at the DIS (draft international standard) stage and is likely to be changed. Nevertheless, the IAF is allowing its AB members to issue entirely meaningless accreditations to certification bodies anyway. Once again, ISO and IAF collude to prove that rules only apply to other people and not their paying clients.

The AI Act

At the same time, the European Commission has begun rolling out the new AI Act. The AI Act is based on Europe's CE Mark scheme and (I want to make this next part very clear) is a product certification. It applies only under a strict set of conditions, such as when you use AI to produce a product that will be sold in Europe, or when AI is included in such products. Now, the AI Act is a complicated read, so that's an oversimplification, and there are lots of wibbly-wobbly, timey-wimey permutations, but the point is this: while ISO 42001 certifies a company's management system, the AI Act will require certification of individual products.

It is also not at all limited to European companies. As with CE Mark, if you intend to sell a product within the EU, the AI Act will apply no matter where you are in the world. Companies in China, India, or the United States will all have to comply with the AI Act if they want to sell in any EU country.

Like CE Mark, the AI Act will then rely on "notified bodies," the conformity assessment bodies designated by each member state (and typically, but not always, accredited by its national accreditation body), to issue the certificates. This means producing a huge data package, much like one might assemble to introduce a new medical product under CE Mark, and then proving the product was properly tested, etc. Obviously, that can mean a lot of different things, since AI systems are not as clearly defined as physical products.

If a company produces multiple AI products, then each will have to obtain the certification; it is not a one-and-done thing.

There are lots of exceptions, however. Users of AI systems that do not end up in products, or casual users (using ChatGPT for personal use, for example), are exempt. Unlike with ISO 42001, you would not have to pursue AI Act compliance if you only use AI inside your company and the AI systems never "touch" your product. The AI Act will be another boon for consultants, since it will be crucial to hire one to analyze just how much of the AI Act applies, or whether it applies at all. A lot of greedy folks' mouths are watering right now.

And so are the lawyers'. For most large companies, compliance with the AI Act would be greatly improved by having in-house counsel advising them. The legal liabilities for noncompliance are severe, potentially ruinous, and some of the language of the Act will require an attorney to decipher. So, again, not cheap.

The Intersection

So, where do ISO 42001 and the AI Act overlap? In truth, only a little.

The AI Act requires the company to have a baseline "quality management system" as one of many prerequisites. To be honest, a simple ISO 9001 system would likely suffice here, as the Act does not mandate any particular QMS standard, and it does not call out ISO 42001 as a specific requirement.

Furthermore, the AI Office has issued statements indicating its overall displeasure with ISO 42001, so it is not likely to make the standard mandatory. At least not directly. But ISO 42001 is likely to slip through the back door anyway.

Like most EU regulations, the AI Act does endorse "harmonised standards," specifically those published in the Official Journal of the European Union (OJEU). For now, ISO 42001 doesn't appear there, but all signs point to it being included very soon, within months rather than years. Europe tends to auto-stamp ISO standards for inclusion in the OJEU and then let the objections and protests fly afterward.

The EC does almost no checking of how ISO standards are made and refuses to hold ISO to its obligations under the World Trade Organization's Agreement on Technical Barriers to Trade (TBT); as a result, ISO violates the TBT with impunity (for example, by allowing standards to be written by non-elected toadies rather than nominated subject matter experts), and the EU does nothing. Such was the case with ISO 42001, where entire portions of the standard were written by a bureaucrat at the ISO Technical Management Board (TMB) rather than by an AI subject matter expert.

It's terrifying that ISO standards are being rolled into law through the back door while the EU remains asleep at the switch, happy to let ISO, a private-sector publishing company, write standards on its behalf so that it doesn't have to spend the money and effort to do so itself.

Worse, the EU relies on the European co-operation for Accreditation (EA) to oversee ISO certification and accreditation body compliance with EU regulations governing conformity assessment. EA, however, is a woefully corrupt and unfair dealer. Accreditation bodies pay fees to join EA, and so EA has a financial disincentive to perform its oversight duties. For example, UKAS still falls under EA control (despite Brexit), and the EA has repeatedly refused to investigate allegations that UKAS violated European laws and international sanctions. The EA has likewise refused to investigate the Netherlands’ RvA or Germany’s DAkkS, even when presented with overwhelming evidence of their violation of accreditation standards. The EA is a rubber stamp, and the European Commission does not seem to care so long as the stamp marks look pretty.

ISO 42001 is likely to get roped into the AI Act under only two possible scenarios. First, it could be used to meet the QMS requirement, and that makes sense; ISO 42001 is a better fit there than the generic ISO 9001. Second, the controls listed in ISO 42001's Annex align somewhat with what the AI Act demands, so properly implementing those controls might help fast-track a company toward AI Act product certification.

Maybe.

A poor implementation of the controls would still get the company certified to ISO 42001, but only because ISO certification bodies are so incompetent and the rules for training ISO 42001 auditors have not even been finalized yet. Even so, an ISO 42001-certified company may never pass muster under the AI Act. A lot of companies are going to pursue ISO 42001 certification and then get a nasty shock when they submit their data packages under the AI Act, only to have their products rejected.

To be clear, compliance with the AI Act does not require ISO 42001 certification. Certification might be a good idea, but it will be expensive and will need careful justification. That is not likely to stop people from selling ISO 42001 certification under the false claim that it somehow ties in with the AI Act. Here is what consulting giant KPMG just made up out of thin air, with no concern for the fact that their claim is absolutely false:

Many countries are now drafting laws, including the EU AI Act, with ISO/IEC 42001 serving as a cornerstone and providing essential guidance for compliance.

Or this outright fabrication by ISMS.online:

By integrating their processes with ISO 27001 and ISO 42001, businesses can meet the current requirements of the EU AI Act and future-proof themselves against emerging AI regulations that are likely to be introduced in other jurisdictions

Or claims like those made by these guys, and these guys, and these guys.

You get the picture.

Now, again, ISO 42001 might be a great thing to implement if you want controls over your AI systems; maybe. But the investment necessary to implement ISO 42001 must be based on sound advice and a review of the costs and overhead, not on false claims made by shady consultants.

 
