[Over at The LinkedIn, ISO expert Ian Hendra reminded me that auditors are not the problem; people are always going to perform poorly in a defective system. I agree. I think this article still stands, though; just read it through to the end.]

The validity of ISO 9001 and related certifications is declining dramatically, a problem driven by many sources: ISO’s inept standards development process, the IAF’s intentional abdication of its duties in overseeing registrars, and a market dominated by a few highly conflicted publishing sources which print only glowing pro-ISO materials and reject all critical analysis. This has led to an overall decline in trust in ISO certifications, despite what ISO and its dutiful press pals say.

One of the main root causes of this problem is the poor quality of the ISO audits themselves. The results are well known: companies like Takata, Kobe Steel, and BP Oil all held ISO 9001 or related certs while they were engaged in nefarious practices, some of which resulted in actual loss of life. Questions abound as to how this could be so, and what role the individual auditors may have played in such scandals.

ISO 9001 end users will no doubt recognize a more obvious problem with auditors: they’re just not very good. Auditors appear to fall into two camps: the one who wants to be your “best friend” and never writes nonconformities, no matter what he finds, and the “tin badge cop” who comes in bragging about his experience and beating you up over the most inane findings, often utterly out of scope of the QMS. The first wants to be your buddy, the other one wants to be your bully. Both are loathsome.

Problems with human auditors include the following bad practices, with individual auditors often engaging in multiple offenses:

  • Inventing requirements that don’t exist
  • Failing to understand the requirements
  • Softgrading nonconformities: downgrading NCs as “opportunities for improvement” in order to placate the client
  • Hardgrading nonconformities: upgrading minor NCs to majors in order to justify follow-on audit days
  • Falsifying audit reports
  • Shortening audits but charging the full amount (theft)
  • Imposing personal interpretations of QMS implementation onto the client
  • Illegally distributing clients’ QMS documentation to other clients without permission
  • Engaging in quid pro quo relationships with private consultants
  • Engaging in unethical or illegal behavior during audits (harassment, discrimination, etc.)
  • General unprofessional behavior (arguing, profanity, etc.)

In reality, the list goes on and on, which raises the question: if humans are so bad at auditing, why not get rid of the humans?

Mechagodzilla Rising

Artificial intelligence (AI) is truly upon us. Most people don’t know the difference between the last decade’s “coded script responses” that appeared to be AI — think of your GPS barking directions, or the voice prompts your bank uses to direct your call — and actual AI. The real deal is far different: it’s not pre-scripted “if this, then that” lines of code running; this is the machine learning in a fashion similar to humans, and then applying that learning to the unique circumstances of the moment. No script can do that; the machine is thinking now. Ask it a question, and it will understand the context, the importance and — soon enough — even the nuance.

I get it. The bulk of you reading this are horrified. But just as a generation of people wouldn’t set foot in a “horseless carriage” that went over 30 miles per hour, our generation will be pushed aside by the one that embraces and utilizes AI in all its forms: driverless cars, pilotless airplanes, and even cyber warfare. It will come with the usual commensurate downsides and risks, of course, but we shouldn’t expect our generation to get it. We should just shut up and stand aside, or get with the program.

In fact, ISO auditing would be one of the least risky propositions for AI, as well as one of the simplest. In its purest form, auditing is supposed to consist of an auditor asking a question based on a requirement, then getting a response in the form of a verbal answer or some form of evidence. The entire thing falls apart when humans do it because, first of all, they don’t know how to ask the question. Auditors take one of two tacks: they either read from the ISO standard, thus totally confusing the auditee, or they try to put it into “Human English” and wind up inserting their preconceived opinions into the question. For example, rather than say, “How do you control documented information relative to version control?”, the auditor will ask, “Do you have a document master list?” The latter sounds more like English, but it includes a requirement that doesn’t exist — the master list — and now the client is scrambling to produce something to answer the question, but which isn’t even necessary. AI would be fed the ISO 9001 requirements, and then use machine learning to interpret them without injecting additional requirements. So rather than ask for a “document master list,” the system would ask, “How do you ensure your employees have the latest documents?”
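
To make that concrete, here is a purely illustrative sketch in Python — the clause wording is paraphrased, and names like NEUTRAL_QUESTIONS and ask() are invented for illustration; no certification body runs anything like this today. The point is simply that the audit question is bound to the clause itself, so the system has nothing to ask if the standard contains no such requirement.

    # Illustrative sketch only: audit questions are tied to clause requirements,
    # not to an auditor's preferred artifacts. Clause text is paraphrased and the
    # names here are hypothetical.
    NEUTRAL_QUESTIONS = {
        "7.5.3": "How do you ensure employees have the latest version of a document?",
        "8.7": "How do you identify and control outputs that do not conform to requirements?",
    }

    def ask(clause: str) -> str:
        # If the standard contains no such requirement, there is nothing to ask;
        # the system cannot invent a "document master list" the way a human might.
        if clause not in NEUTRAL_QUESTIONS:
            raise ValueError(f"No requirement on file for clause {clause}")
        return NEUTRAL_QUESTIONS[clause]

    print(ask("7.5.3"))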

Next, of course, AI can’t be bribed or harassed. It can’t be in a bad mood. So hardgrading and softgrading can’t enter the picture at all, and neither can any of the other human-based foibles that turn audits into nightmares. An AI auditor won’t flirt awkwardly with the receptionist, and won’t blurt out some racist garbage during lunch. In fact, there’d be no lunch; no donuts, no coffee, no alcohol afterward. You’d save a ton of money, and wouldn’t have to waste hours entertaining someone you’d much rather see run over by a truck.

Gathering evidence would be tricky, but hardly impossible. Documents and photos can be scanned; new image analysis software, such as Google Lens, is already showing how AI can analyze what it “sees” in a photo to understand what it is and even the context. An AI audit would consist of the system conducting an interview, and then analyzing the verbal responses, along with documents, records and photographic evidence provided to it. Walking a camera through the plant would allow the AI auditor to look for things that you haven’t fed into it, adding additional questions on-the-fly, and responding to real-time events.

In the end, AI audits would be done faster, cheaper and with greater efficiency and objectivity. But it wouldn’t be all roses.

No One Can Be Told What The Matrix Is

The risks to AI audits would come, yet again, from human foibles. The systems would be susceptible to hacking; data fed into them could be leaked to competitors or international bad actors. The software could be subtly tweaked to begin rejecting companies that might actually deserve certification; worse, the CB home office could alter the software to allow it to pass each client, no matter what, similar to what Volkswagen did with its emissions software. But these downsides are already present: CB information is a nearly unprotected cache of treasure for industrial spies, just waiting to be hacked, because CBs are morons when it comes to information security. CBs are already falsifying audit reports to ensure everyone passes, which is why Volkswagen had an ISO cert just as it was doing all that emissions fiddling.

Clients would also see this as a way of gaming the system, by feeding the AI auditor only pre-selected evidence aimed at showing just how awesome the company is. But, again, that’s already happening; an AI system would learn how to identify pre-selected evidence and demand other evidence be presented instead, and then react to that.

So if anything, some problems that already exist would remain — sure — but they’d be reduced, not worsened.

You would think that CBs would jump at the chance to finally get rid of their entire audit pool. The home offices generally loathe their auditors — yes, auditors, you wouldn’t believe what your bosses tell me about you behind your back — and replacing them with a cheap software option would eliminate that problem permanently. It would also make auditor shortages a thing of the past. Yet CBs won’t be jumping on this bandwagon, likely ever. The first excuse will be cost, but eventually AI software will become omnipresent, prices will drop, and cost won’t be an excuse anymore.

But CBs make money only when their clients pass audits, not when they fail them. So they pass everyone, only pausing to occasionally give a company a few “major nonconformities” in order to milk a few extra audit days out of them. Because of their obscurity — no reporter really knows what the hell a “certification body” is, never mind what it does — they are able to quickly hide when one of their disreputable ISO certified clients kills 100 people in some massive product quality failure. The registrars, through IAF and ISO CASCO, have ensured the rules give them sufficient cover in case of disaster, just for these reasons. If they were to add some form of truly objective AI into the mix, they couldn’t play all those games anymore.

So AI audits are not likely, even if they might completely revolutionize ISO auditing while returning it to its intended purpose: to provide independent, objective assessment of a quality system against an international standard.

