AI Compliance and Regulation
Currently, attempts to regulate AI are far from unified, with regions and countries showing varying levels of willingness to enforce guardrails, fearing they could hinder progress and innovation.
Here are a few:
ISO/IEC 42001 : International Standards
EU AI Act (EU AIA) : Europe
UK Pro-Innovation AI Framework : United Kingdom
California Senate Bill 1047 (SB-1047) : State regulation (not national)
ISO/IEC 42001
The International Organization for Standardization (ISO) is an independent, non-governmental organization that develops and publishes standards. Founded in 1947 and based in Geneva, Switzerland, ISO uses input from global experts to establish best practices across various fields - for example, defining security, data management and governance standards in the field of Information Technology (IT).
Key principles of ISO/IEC 42001: governance over the AI System's life cycle, roles and accountability, risk management, data management and model security, privacy controls, bias and ethical policies, and ultimately trust and transparency.
In December 2023 the ISO, in consultation with the International Electrotechnical Commission (IEC), published version 1 of ISO/IEC 42001, 'Information technology — Artificial intelligence — Management system', aimed primarily at establishing a governance framework over AI design, development and deployment. The standard is not yet mandatory, but with the increased focus on the safety and security of AI, organizations that have achieved 42001 compliance can offer assurance - whether as an AI System Provider to their users, or as an AI System Developer providing an API for another organization to use in its own products - that best practices have been applied throughout the AI model life cycle, including ongoing monitoring of AI model performance.
EU AI Act (EU AIA)
Compliance with the EU AIA is mandatory and will eventually require Providers and Deployers of high-risk systems to display a CE Marking - similar to a product that has passed a particular safety standard. The approach focuses on controls over AI development by Providers that limit the risk of exposing individual users' PII (Personally Identifiable Information). The EU AI Act also aims to regulate the deployment of AI systems and their interaction with vulnerable groups, for example the use of chatbots in schools (Deployer), where engagement with children could have unintended or harmful consequences.
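The Act's risk-based approach can be thought of as a tiered set of obligations. A minimal sketch, assuming a simplified four-tier mapping - the use cases and tier assignments below are illustrative only, not legal guidance; real classification depends on the Act's annexes and a proper legal assessment:

```python
# Illustrative sketch of the EU AI Act's risk-based tiers.
# The mapping below is a simplified, hypothetical example,
# not legal guidance.

RISK_TIERS = {
    "prohibited": "Banned outright (Article 5), e.g. social scoring.",
    "high": "Permitted with strict obligations and CE Marking.",
    "limited": "Transparency obligations, e.g. chatbots must disclose they are AI.",
    "minimal": "No specific obligations, e.g. spam filters.",
}

# Hypothetical use cases mapped to tiers (illustrative only).
EXAMPLE_USE_CASES = {
    "social_scoring": "prohibited",
    "cv_screening": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def obligations_for(use_case: str) -> str:
    """Return the illustrative obligation summary for a use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, "minimal")
    return f"{tier}: {RISK_TIERS[tier]}"

print(obligations_for("cv_screening"))
```

In practice, a Provider would perform this classification once per system and document the result as part of its conformity assessment.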
Published in July of 2024 with elements of the Act taking effect in 2025 [EU AIA timeline]:
As of 2 February 2025:
Chapter 1 (Articles 1-4) This law aims to foster a safe and ethical European AI market. It establishes rules for AI development and deployment, including bans on certain practices and specific regulations for high-risk systems. The law emphasizes human rights, safety, and environmental protection, and mandates transparency for particular AI applications.
Chapter 2 (Article 5) The EU AI Act prohibits certain uses of artificial intelligence (AI). These include AI systems that manipulate people's decisions or exploit their vulnerabilities, systems that evaluate or classify people based on their social behavior or personal traits, and systems that predict a person's risk of committing a crime. The Act also bans AI systems that scrape facial images from the internet or CCTV footage, infer emotions in the workplace or educational institutions, and categorize people based on their biometric data. However, some exceptions are made for law enforcement purposes, such as searching for missing persons or preventing terrorist attacks.
As of 2 August 2025:
Notifying Authorities and Notified Bodies (Chapter III, Section 4), for each member state, their roles and responsibilities defined
General-Purpose AI models (Chapter V), their rules for classification, the responsibility of Providers and Codes of Practice
Governance at Union Level (Chapter VII), the AI Office, establishment of a European AI Board, advisory roles and points of contact
Confidentiality (Article 78), all parties involved in applying the regulation must respect the confidentiality of information and data they obtain, including protecting intellectual property rights and trade secrets.
Penalties (Articles 99 and 100), enforcement measures, Bodies, Offices and Agencies.
Providers: Providers of GPAI models that have been placed on the market / put into service before this date need to be compliant with the AI Act by 2 August 2027. (Article 111(3))
Territorial reach EU AIA
The rules of the EU AIA are intended to extend beyond countries within the European Union. Article 2 states that the regulation shall apply to:
1. (a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country.
(b) deployers of AI systems that have their place of establishment or are located within the Union;
(c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union;
(d) importers and distributors of AI systems;
(e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
(f) authorised representatives of providers, which are not established in the Union;
(g) affected persons that are located in the Union.
The UK is no longer within the European Union, so the EU AIA does not apply unless a UK-based company, as a Provider or Deployer of an AI System, wishes to offer an AI system in a product or service to users within the EU. Similarly, any US-based company seeking to market its AI products within the EU will have to comply with the AIA.
EU AIA Definitions:
AI system : means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; Related: Recital 12
AI System Provider : a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model (or that has an AI system or a general purpose AI model developed) and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge. (See AI Ecosystem)
AI System Deployer : any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Update: UK AI Opportunities Action Plan
On Monday 13 January 2025, the UK Government announced its UK AI Opportunities Action Plan - a very ambitious set of principles (or promises) that aims to invest in Artificial Intelligence throughout the public sector. Below is a timeline of sorts:
Within 6 months:
The government should set out a long-term plan for the UK’s AI infrastructure needs, backed by a 10-year investment commitment.
The government should begin expanding the capacity of the AI Research Resource (AIRR) by at least 20x by 2030.
Within the next year:
The government should establish an internal headhunting capability to bring elite AI talent to the UK.
The government should explore how the existing immigration system can be used to attract graduates from universities producing some of the world’s top AI talent.
The government should expand the Turing AI Fellowship offer.
As regards the previous UK Pro-Innovation AI Framework - the white paper published in March 2023, with its promise to avoid regulatory confusion and to focus on supporting innovation while encouraging transparency and explainability of AI Systems - the consultation and publication of proposals for a Monitoring & Evaluation (M&E) Framework was due for completion by September 2024; however, this may have been paused (or scrapped) owing to the General Election of July 2024.
It's my view that the implementation of the EU AIA will be followed closely and will inform how the UK moves forward. For now, Providers and Deployers of AI Systems are encouraged to keep on innovating but make a start on putting into place Risk Management Strategies and processes around Explainable AI (XAI).
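As a first step towards such XAI processes, one common technique is permutation feature importance: measure how much a model's accuracy drops when a single feature's values are shuffled, since a large drop suggests the model relies on that feature. A minimal sketch, assuming a toy scoring model - all names and data below are hypothetical, not part of any regulatory framework:

```python
import random

# Permutation-importance sketch: shuffle one feature's values and
# measure the resulting drop in accuracy. A large drop indicates the
# model relies on that feature - a basic explainability (XAI) signal.
# The "model" and data here are toy examples for illustration.

def model(features):
    # Toy model: predicts 1 when income exceeds a threshold; ignores age.
    income, age = features
    return 1 if income > 50 else 0

def accuracy(data, labels):
    return sum(model(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature_index, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(data, labels)
    shuffled = [row[feature_index] for row in data]
    rng.shuffle(shuffled)
    permuted = [
        tuple(shuffled[k] if i == feature_index else v
              for i, v in enumerate(row))
        for k, row in enumerate(data)
    ]
    return baseline - accuracy(permuted, labels)

data = [(30, 25), (60, 40), (80, 35), (20, 50), (90, 30), (40, 45)]
labels = [model(x) for x in data]  # labels agree with the toy model

print("income importance:", permutation_importance(data, labels, 0))
print("age importance:", permutation_importance(data, labels, 1))
```

Because the toy model ignores age entirely, shuffling age leaves accuracy unchanged, while shuffling income can degrade it - exactly the kind of evidence an explainability process would record.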
California Senate Bill 1047 (SB-1047)
California Senate Bill 1047 (SB 1047), also known as the 'Safe and Secure Innovation for Frontier Artificial Intelligence Models Act', is proposed legislation aimed at regulating advanced AI models. The bill was introduced by Senator Scott Wiener and passed by both houses of the California legislature in 2024, although it was subsequently vetoed by Governor Newsom in September 2024. It is one of many bills moving through state legislatures while rules for AI Systems are being defined. However, following the change in administration, the US view on regulation may soften or, at a minimum, regulation may be de-prioritised. What will be interesting to see is how companies that wish to market their products and services in the EU react to the EU AIA. There is some fear that an 'innovation gap' will materialize when US AI Providers start to withhold (or delay) releasing new AI into the EU market. Already, users in EU countries have been excluded from waiting lists for the chance to experiment with certain new AI features.
January 2025
Written by Sean Simone, IOKA
References / Sources:
ISO/IEC 42001:2023 - iso.org
Article 3, Definitions - EU AI Act
California's SB-1047 - dlapiper.com, February 2024