Four years after it was first proposed, the EU AI Act is now in force, making it the world's first comprehensive AI legislation. This landmark regulation governs the development, deployment, and use of AI across the European Union.
It requires that AI systems introduced in Europe adhere to standards of safety, transparency, non-discrimination, traceability, and environmental friendliness. The Act also establishes a classification system based on the level of risk associated with different AI applications, ranging from minimal risk to complete prohibition.
High-risk applications will require extra controls to ensure data security and minimise bias, and providers may need to share detailed documentation with authorities as part of conformity assessment. Companies that breach the EU AI Act face significant fines, ranging from €7.5 million or 1.5% of global annual revenue for lesser infringements up to €35 million (£29 million) or 7% of global annual revenue, whichever is higher, for the most serious violations, depending on the infringement and company size.
The Act introduces a risk-based classification system, categorising AI applications by the risk they pose to the individuals affected by them:
High Risk: Includes services affecting citizens' lives (e.g., creditworthiness evaluation, educational opportunities, critical infrastructure). These must undergo strict assessment regimes before market introduction. The EU will regularly update this list.
Limited Risk: AI applications subject to transparency obligations, such as disclosing to users that they are interacting with an AI system.
Minimal Risk: Applications like spam filtering or video games are deemed minimal risk and not subject to further regulatory requirements.
Prohibited AI Applications: The Act also outlines several uses and applications that are banned outright under the regulation, such as social scoring systems and AI that manipulates human behaviour to circumvent users' free will.
It's important to note that the rules on prohibited AI systems will be among the first provisions to take effect in February 2025. See below for a full timeline of when other requirements will take effect.
Compliance requirements vary depending on the risk level of a system. High-risk systems must undergo a Conformity Assessment to demonstrate adherence to the AI Act before being placed on the EU market. For example, high-risk systems will need to implement a quality management system (QMS) that complies with the regulation.
The QMS will need to cover, among other elements, a risk management system, data governance procedures, technical documentation, record-keeping, post-market monitoring, and processes for reporting serious incidents.
For applications processing highly sensitive information, an external certification body may be required. This process is similar to the current ISO 27001 process, where notified bodies evaluate the system and its documentation, issuing a certificate if compliant or mandating corrective actions if not.
Aside from businesses based in the EU, the AI Act applies to any company with customers in the EU or whose AI system outputs are used in the EEA. This includes businesses in the UK and USA. Some organisations have already opted to limit or withhold services across the EU ahead of the Act, including Meta, which has pulled the release of its most advanced AI model from the EU, citing regulatory uncertainty.
For guidance on what this means for your business, please reach out to our compliance team. They are on hand to determine whether this affects your business and to guide you through the necessary controls to meet compliance.
On this side of the Channel, while we await details of the Cyber Resilience Bill, and whether it will cover AI or be followed by a dedicated AI bill, there are several upcoming developments for AI-enabled healthcare solutions.
The MHRA is now calling for solutions to join its AI-Airlock regulatory sandbox, which aims to support the development and deployment of AI and software-based medical devices. As part of the ongoing review of the Medical Device Regulations, several AI products are expected to move into higher-risk categories under the regulations' risk-based classification system, potentially bringing additional compliance requirements.
Additionally, there is upcoming guidance to consider:
We'll soon publish our in-depth AI guide to help you navigate these changes. In the meantime, please contact our compliance team if you have any questions about how these AI developments might impact your business. They are ready to assist you in understanding the implications and ensuring your compliance. Stay tuned for more updates.
Book a 15-minute demo and discover how Naq can take the complexity out of your compliance.