The EU AI Act entered into force on 1 August 2024 and is the first of its kind globally. As AI providers adapt their systems to follow EU law, this might lead to the Act becoming an international standard.
The Act gives AI providers time to comply. Notably, at the time of writing, existing AI systems appear to fall short of the standards under the Act, including on diversity and fairness and on explaining key concepts. Use of AI has already caused much controversy, particularly over allegations of copyright infringement.
Below we’ve outlined how the Act will affect Britain, particularly in a post-Brexit landscape.
Controversial definition
The Act introduces a definition of “artificial intelligence system”:
“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that…infers, from the input it receives, how to generate outputs…”
Defining a rapidly changing technology such as AI was never going to be easy, and the definition has attracted criticism. Some say it is too broad and might capture even spreadsheets performing basic calculations. Others say it is not broad enough and could quickly become outdated. In any event, the final version of the definition closely mirrors the OECD definition.
The Act adopts a risk-based approach: it prohibits some practices outright, designates others as high-risk, and imposes lighter obligations on lower-risk activities.
Prohibited AI practices
The Act prohibits practices such as:
- Using subliminal techniques or exploiting a person’s vulnerabilities to distort behaviour
- Using real-time biometric ID systems in publicly accessible spaces for law enforcement. The use of automatic facial recognition has already caused controversy under GDPR.
- Social scoring by public authorities which could lead to detrimental or unfavourable treatment.
High-risk AI systems
The Act also lists high-risk systems such as:
- AI systems in aviation, cars, medical devices, road traffic and the supply of water, gas, heating and electricity
- Biometric identification and categorisation of people, and emotion recognition
- Using AI to evaluate and manage people through recruitment, educational tests, border control and the administration of justice
- Use of AI to assign emergency call-outs or to price insurance, and its use by law enforcement to assess the risk of someone offending.
Providers of high-risk AI must meet various obligations, including risk management, data governance for the data used, record-keeping and a degree of human oversight. Providers who believe their AI system is not high-risk must document this assessment before placing it on the market.
General purpose AI
The Act also addresses general purpose AI (“GPAI”) which is an AI model:
“that, when trained with a large amount of data using self-supervision at scale…displays significant generality and is capable of competently performing a wide range of distinct tasks”.
Providers of GPAI models must draw up technical documentation and provide information to those who intend to integrate that system into their own AI system. Additionally, they must respect copyright and publish a sufficiently detailed summary about the content used for training the GPAI model.
Enforcement & penalties
The EU AI Office will have oversight, and each EU member state must appoint its own national authority. The Act also introduces a right to lodge a complaint and a right to an explanation of individual decision-making.
Failure to comply with the Act can result in a fine, although no fines apply within the first year after the Act's entry into force. Supplying incorrect, incomplete or misleading information when required to do so can lead to a fine of the higher of €7.5m or 1% of annual worldwide turnover.
The fine for providers of GPAI models is the higher of €15m or 3% of annual worldwide turnover. This covers infringements of their obligations and non-compliance with enforcement measures, such as requests for information.
Engaging in a prohibited AI practice can lead to a fine of the higher of €35 million or 7% of annual worldwide turnover. This is higher than the maximum fine for a breach of the GDPR and is more akin to a competition law fine.
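Each penalty tier is a "higher of a fixed cap or a percentage of annual worldwide turnover" calculation. A minimal sketch of that arithmetic (the function name and the €1bn example turnover are illustrative, not from the Act; figures are the maximum caps, and any actual fine may be lower):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine: the higher of the fixed cap
    or the stated percentage of annual worldwide turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Example: a provider with €1bn annual worldwide turnover
turnover = 1_000_000_000
print(max_fine(35_000_000, 0.07, turnover))  # prohibited practices tier: 70,000,000
print(max_fine(15_000_000, 0.03, turnover))  # GPAI provider tier: 30,000,000
print(max_fine(7_500_000, 0.01, turnover))   # incorrect information tier: 10,000,000
```

For smaller firms the fixed cap dominates: at €100m turnover, 1% is only €1m, so the incorrect-information cap stays at €7.5m.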
Key dates
- 1 August 2024: The AI Act entered into force
- 2 February 2025: Prohibited AI restrictions start
- 2 May 2025: Codes of practice for GPAI must be finalised
- 2 August 2025: Several rules start to apply, including those for GPAI models, governance, confidentiality, and fines
- 2 August 2026: The remainder of the Act starts to apply, apart from certain high-risk AI obligations
- 2 August 2027: Obligations for high-risk AI apply, and GPAI models placed on the market before 2 August 2025 must comply
- 31 December 2030: Deadline for AI systems that are components of large-scale IT systems to be brought into compliance.
Impact in the UK
While the UK government is paying lip service to AI regulation, the EU is marching ahead. In theory, providers and users in the UK will not be directly affected by the Act. In practice, AI systems used in both the UK and the EU are likely to be adapted to meet EU standards to reduce the risk of non-compliance.
Since there is no other comprehensive regulation, you can expect the Act to become the standard for AI systems going forward. This is likely to be simpler than AI providers having one version for the EU and a different version elsewhere. If you are using an AI model, you should ensure your provider will become compliant. It is likely that “EU AI Act compliant” will soon form part of procurement conversations.