
AI and ML model compliance and what ISO/IEC 42001 means for your product

By QualiHQ Team

If you are building a product with a machine learning model in it and you are starting to think about compliance, you have probably noticed that most of the guidance out there was written for traditional software. In December 2023, ISO published ISO/IEC 42001 -- the first international standard specifically for AI management systems -- and it changes the picture considerably for teams building with AI.

Most seed-stage teams have not heard of it yet, but if your product includes machine learning, it is worth understanding now.

What ISO/IEC 42001 actually is

ISO/IEC 42001 defines what responsible governance of an AI system looks like: how you document what it is supposed to do, how you test and verify it, how you manage changes, and how you monitor it once it is in production.

Think of it the same way you would think of ISO 13485 for medical devices. It is a framework for running things properly, in a way that can be audited and proven. It is not a checklist of things to fear. It is a description of what a well-run team building AI software should already be doing.

The timing matters. The EU AI Act is now in force and creates real legal obligations for companies deploying AI in high-risk settings. ISO/IEC 42001 is not a legal requirement under it, but it is the management framework that teams are adopting to get their AI governance in order ahead of those obligations. Getting familiar with it now puts you well ahead of most teams at your stage.

What it actually asks for

If you already have a QMS in place, you are closer than you think. The core building blocks are the same ones you are already using. What changes is how you apply them to a model. If you are new to QMS concepts entirely, our plain-English guide to what a QMS is makes a good place to start.

Requirements are statistical, not binary. A conventional software requirement says the system must do X under condition Y. An ML requirement says the model must achieve a defined accuracy on a specific dataset, with documented acceptance thresholds. That is still a valid, testable requirement. You just need to document the dataset and evaluation approach alongside the outcome.

Verification captures metrics rather than pass or fail. The logic is identical to any other verification: here is what we said the system would do, here is the evidence it does it, here is who reviewed and approved it. The records look a little different. The principle is the same.
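
To make those last two points concrete, here is a minimal sketch of what a statistical requirement and its verification record can look like in code. The dataset name, metric, threshold, and reviewer are illustrative placeholders, not values the standard prescribes, and your own records can live in whatever format your QMS uses.

```python
from dataclasses import dataclass

@dataclass
class MLRequirement:
    """A statistical requirement: metric, fixed evaluation dataset, acceptance threshold."""
    requirement_id: str
    metric: str
    eval_dataset: str   # a versioned, documented evaluation set
    threshold: float    # the documented acceptance threshold

def verify(req: MLRequirement, measured: float, reviewed_by: str) -> dict:
    """Build a verification record: what was required, what was measured, who reviewed it."""
    return {
        "requirement_id": req.requirement_id,
        "metric": req.metric,
        "eval_dataset": req.eval_dataset,
        "threshold": req.threshold,
        "measured": measured,
        "passed": measured >= req.threshold,
        "reviewed_by": reviewed_by,
    }

# Illustrative requirement: at least 0.92 accuracy on a fixed, named evaluation set.
req = MLRequirement("REQ-001", "accuracy", "holdout-eval-v3", 0.92)

# An evaluation run measured 0.94, so the record shows a pass with a named reviewer.
record = verify(req, measured=0.94, reviewed_by="a.lee")
print(record["passed"])  # True
```

The point is not the code itself but the shape of the record: the dataset and threshold sit next to the measured result, so a reviewer can see exactly what was claimed and what the evidence shows.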

Retraining a model is a product change. This is the one most teams miss. When you retrain a model on new data, your product can behave differently in ways that matter to users. That is a version change and it needs to go through change control, the same as any other update to your product.
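
As a sketch of what treating a retrain as a change can look like, here is an illustrative change record. Every field name and value below is hypothetical; the point is that a retrain produces a new, identifiable model version with its own evidence and sign-off.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelChangeRecord:
    """A change-control record for a retrained model version (illustrative fields)."""
    model_version: str
    previous_version: str
    training_data_ref: str     # pointer to the training dataset snapshot that was used
    reason_for_change: str
    evaluation_results: dict   # metrics from re-running verification on the new version
    approved_by: str
    approval_date: date

# Hypothetical example: a retrain on new data goes through the same steps as any release.
change = ModelChangeRecord(
    model_version="risk-model-1.4.0",
    previous_version="risk-model-1.3.2",
    training_data_ref="datasets/transactions-2025-q1",   # illustrative reference
    reason_for_change="Retrained on Q1 data to cover new transaction categories",
    evaluation_results={"accuracy": 0.93, "false_positive_rate": 0.04},
    approved_by="j.smith",
    approval_date=date(2025, 4, 10),
)
```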

You need to monitor after deployment. Conventional software does not quietly get worse over time. A model can, as the data it encounters in production drifts from what it was trained on. Having a defined process for catching this is part of responsible AI governance. It does not need to be complex. It just needs to exist.
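
Here is one minimal way to start, sketched in Python and assuming you can log a numeric input feature from production: a two-sample Kolmogorov-Smirnov test comparing the production distribution of that feature against the training distribution. The 0.05 significance level and the synthetic data are illustrative choices, not anything the standard requires.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(training_values: np.ndarray,
                        production_values: np.ndarray,
                        alpha: float = 0.05) -> bool:
    """Return True if the production distribution differs significantly from training."""
    statistic, p_value = ks_2samp(training_values, production_values)
    return p_value < alpha

# Synthetic example: production values have shifted upward relative to training.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
prod = rng.normal(loc=0.4, scale=1.0, size=5_000)
print(check_feature_drift(train, prod))  # True -> flag the model for review
```

A scheduled job that runs a check like this and flags results for human review already counts as a defined monitoring process; you can add sophistication later.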

The honest take for small teams

Right now, most auditors and certification bodies are still getting to grips with ISO/IEC 42001. You are probably not going to be asked to produce a full AI management system certification in your next customer audit or funding round.

But that is not really the point. What auditors, enterprise customers, and investors actually want to see is that you are taking this seriously. That you know what your model is supposed to do. That you have evidence it does it. That you have a process for managing changes and catching problems.

Not a perfect system. A documented, reviewable one. That is what compliance means at this stage, and it is entirely achievable. It is also worth noting that using AI to help build that documentation is entirely legitimate -- the standards care about outcomes and accountability, not how the first draft was produced.

The other reason to get on this early is practical: it is significantly easier to build good habits into your process now than to retrofit documentation onto a model that has already been through several retraining cycles and production incidents. The teams who leave it until a customer or auditor forces the issue tend to find it a much bigger lift than it needed to be.

Where QualiHQ fits in

The building blocks ISO/IEC 42001 asks for are exactly what QualiHQ is built around. You define a requirement for your AI component the same way you would for any other part of your product. You link a verification to your evaluation run. You attach a test case to the specific model version. You push the change through a release record that captures what changed, what was tested, and who signed off.

If something goes wrong, you have a clear trail. If a customer asks how you govern your AI, you have an answer with evidence behind it.

You do not need to become an AI governance expert. You need a QMS that is simple enough that your team will actually use it. That is what QualiHQ is built for -- quality management that fits around the way your team works, not the other way around.

If you are building with AI and starting to think about compliance, get started free and we will help you get the foundations right from day one.
