AI Compliance and Regulation

Current attempts to regulate AI are far from unified: regions and countries show varying willingness to enforce guardrails, fearing they could hinder progress and innovation.

Here are a few:


ISO/IEC 42001

The International Organization for Standardization (ISO) is an independent, non-governmental organization that develops and publishes standards. Founded in 1947 and based in Geneva, Switzerland, ISO draws on input from global experts to establish best practices across various fields, for example by defining security, data management and governance standards in the field of Information Technology (IT).

Key principles of ISO/IEC 42001: governance over the AI System's life cycle, roles and accountability, risk management, data management and model security, privacy controls, bias and ethical policies, and ultimately trust and transparency.

In December 2023, ISO, in consultation with the International Electrotechnical Commission (IEC), published version 1 of ISO/IEC 42001, 'Information technology — Artificial intelligence — Management system', aimed primarily at establishing a governance framework over AI design, development and deployment. The standard is not yet mandatory, but with the increased focus on safety and security around AI, organizations that have achieved 42001 compliance can offer assurance, whether as an AI System provider to their users, or as an AI System developer providing an API for another organization to use in its own products, that best practices have been applied throughout the AI model life cycle, including ongoing monitoring of AI model performance.

EU AI Act (EU AIA)

Compliance with the EU AIA is mandatory, and Providers and Deployers of high-risk systems will eventually be required to display a CE marking, similar to a product that has passed a particular safety standard. The approach focuses on controls over AI development by Providers that limit the risk of exposing individual users' PII (Personally Identifiable Information). The EU AI Act also aims to regulate the deployment of AI systems and their interaction with vulnerable groups, for example the use of chatbots in schools (the Deployer), where engagement with children could have unintended or harmful consequences.

The Act was published in July 2024, with elements taking effect in 2025 [EU AIA timeline]:


Territorial reach EU AIA

The rules of the EU AIA are intended to extend beyond countries within the European Union. Article 2 states that the regulation shall apply to:

The UK is no longer within the European Union, so the EU AIA does not apply unless a UK-based company, as a Provider or Deployer of an AI System, offers a product or service using that AI system to users within the EU. Similarly, any US-based company seeking to market its AI products within the EU will have to comply with the AIA.

EU AIA Definitions:


*Update: UK AI Opportunities Action Plan

On Monday 13 January 2025, the UK Government announced its UK AI Opportunities Action Plan, a very ambitious set of principles (or promises) that aims to invest in Artificial Intelligence throughout the public sector. Below is a timeline of sorts:


Within 6 months:


Within the next year:

As for the previous UK Pro-Innovation AI Framework, the white paper published in March 2023 promised to avoid regulatory confusion and to focus on supporting innovation while encouraging transparency and explainability of AI Systems. The consultation and publication of proposals for a Monitoring & Evaluation (M&E) Framework were due for completion by September 2024; however, this may have been paused (or scrapped) owing to the General Election of July 2024.

It is my view that the implementation of the EU AIA will be followed closely and will inform how the UK moves forward. For now, Providers and Deployers of AI Systems are encouraged to keep innovating, but also to make a start on putting in place risk management strategies and processes around Explainable AI (XAI).


California Senate Bill 1047 (SB-1047)

California Senate Bill 1047 (SB 1047), also known as the 'Safe and Secure Innovation for Frontier Artificial Intelligence Models Act', is proposed legislation aimed at regulating advanced AI models. The bill was introduced by Senator Scott Wiener and passed by both houses of the California legislature in 2024, though it was subsequently vetoed by Governor Newsom in September 2024. It is one of many bills active within state legislatures as rules for AI Systems are being defined. However, following the change in administration, the US view on regulation may soften or at a minimum be de-prioritized. What will be interesting to see is how companies that wish to market their products and services in the EU react to the EU AIA. There is some fear that an 'innovation gap' will materialize as US AI Providers start to withhold (or delay) releasing new AI into the EU market. Already, EU countries have been excluded from waiting lists for the chance to experiment with certain new experiences.


January 2025

Written by Sean Simone, IOKA


References / Sources:

ISO/IEC 42001:2023 — iso.org

Article 3, Definitions — EU AI Act

California's SB-1047 — dlapiper.com, February 2024

UK AI Opportunities Action Plan