The EU AI Act: A new legal framework for ethical, safe and innovative use of artificial intelligence

The EU Artificial Intelligence Act (EU AI Act) establishes the first-ever horizontal regulatory framework dedicated to AI within the European Union. Catherine Caspar outlines the purpose and parameters of the ambitious regulation, from its founding principles to the accompanying measures planned to support innovation while tackling the ethical issues of AI.
Regulation (EU) 2024/1689, known as the "Artificial Intelligence Regulation" or "AI Act", marks a decisive step in the framework for the development and use of artificial intelligence (AI) within the European Union (EU). Its ambition is to ensure that AI is a technology at the service of people, in line with European values, while supporting innovation and competitiveness. It is the world's first horizontal legal framework dedicated to AI.
The regulation's objective is to establish a uniform legal framework to promote the uptake of human-centred and trustworthy AI, while ensuring a high level of protection of the health, safety and fundamental rights enshrined in the Charter of Fundamental Rights of the EU, to protect against the harmful effects of AI systems and to support innovation.
The EU AI Act establishes a complex set of rules defining obligations for the different AI actors. This article provides a simplified reading of the main contributions of the text.
Founding principles
The EU AI Act distinguishes between general-purpose AI systems, which can competently perform a wide range of distinct tasks, both for direct use and for integration into other AI systems, and special-purpose AI systems.
For the latter, the EU AI Act defines a risk-based classification, assessed according to the intended use of the technology. AI systems are classified into four levels of risk:
- Systems with unacceptable risk, which are prohibited;
- High-risk systems, which are regulated;
- Limited-risk systems, which are subject to lighter obligations, including transparency;
- Systems with minimal risk, which are not regulated.
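As an illustration only, the four-tier scheme can be sketched as a simple lookup. The Act itself assigns tiers through detailed criteria and annexes, not a table; the example use cases and the `RiskTier` and `regulatory_treatment` names below are hypothetical.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act (illustrative labels)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "regulated"
    LIMITED = "transparency obligations"
    MINIMAL = "not regulated"


# Hypothetical mapping of example use cases (taken from this article)
# to their risk tier; the real classification follows the Act's annexes.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "AI in medical devices": RiskTier.HIGH,
    "generative chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def regulatory_treatment(use_case: str) -> str:
    """Return the regulatory treatment for a known example use case."""
    return EXAMPLES[use_case].value


print(regulatory_treatment("spam filter"))     # → not regulated
print(regulatory_treatment("social scoring"))  # → prohibited
```

The point of the sketch is simply that the treatment follows mechanically from the tier: once a system is classified, the applicable regime (prohibition, regulation, transparency, or no obligation) is fixed.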
The AI Act covers all AI systems placed on the market, put into service or used in the EU, whether developed in the EU or abroad. It also extends to systems whose outputs are used in the EU, even if the systems themselves are operated outside the EU.
Impacted parties
The majority of the obligations fall on providers and developers of high-risk AI systems: those who place such systems on the market or put them into service in the EU, whether established in the EU or in a third country, as well as third-country providers whose high-risk AI systems produce outputs used in the EU.
Deployers, i.e. those who use an AI system in a professional context, are covered when they deploy high-risk systems or as soon as the outputs of the AI system are used within the EU, although their obligations are lighter than those imposed on providers or developers.
Importers and distributors of AI systems are also targeted. Before importing a high-risk system, the importer must ensure, for example, that the system complies with the requirements of the AI Act, verify that the non-EU provider has carried out the conformity assessment procedure, and ensure that the technical documentation (including the EU declaration of conformity) is available.
Before distributing an AI system, the distributor must ensure, for example, that the CE marking is affixed, the EU declaration of conformity is present, the accompanying documentation (leaflets, instructions, etc.) is available and correct, and that the importer and the provider have complied with their obligations.
The four risk levels for AI systems:
- Systems with unacceptable risk
This covers any system considered a threat to individuals, such as those that use subliminal manipulation, exploitation of human vulnerabilities, social scoring, biometric identification, or biometric categorisation (by ethnicity, religion, etc.).
Such systems are prohibited, including real-time remote biometric identification in public spaces, such as facial recognition, except under specific, strict law enforcement conditions subject to prior judicial or independent administrative authorisation.
- High-risk systems
This covers systems impacting security or fundamental rights, including AI in toys, medical devices, critical infrastructure, education, vocational training, employment, essential private services (bank credit, insurance), law enforcement, migration and legal interpretation, and access to essential public services (health, emergency calls, justice).
These systems must be registered in an EU database, undergo a thorough risk assessment and be subject to regular reporting to ensure strict compliance and monitoring. A conformity assessment must be carried out before high-risk AI systems are placed on the market, and then throughout their lifecycle.
- Limited-risk systems
This category includes generative AI models. Although they are not high-risk per se, they must meet transparency requirements and comply with EU copyright law.
Providers of limited-risk AI models and applications must inform users that content is AI-generated, prevent the generation of illegal content, and publish summaries of the copyrighted data used for training.
High-impact AI models, i.e. those with capabilities equal to or exceeding those of the most advanced general-purpose AI models, must be thoroughly assessed, and serious incidents must be reported to the European Commission. AI-generated or modified content (e.g. deepfakes) must be clearly identified as such.
- Systems with minimal risk
These include spam filters, AI-based video games and inventory management systems, for example.
Most AI systems in this category are not subject to any obligations under the EU AI Act, but companies can voluntarily adopt additional codes of conduct. The primary responsibility will fall on the "providers" of AI systems, although any company that uses them must remain vigilant about its compliance obligations.
The special case of general-purpose AI
Developers must provide a detailed summary of the training datasets (including to ensure copyright compliance), instructions for use and technical documentation (as defined by Annex IV of the AI Act) with the information necessary to demonstrate that the system is compliant with the Act. These must be prepared before the system is placed on the market or put into service.
The technical documentation requirements do not apply to providers of general-purpose AI models published under a free and open licence to view, use, modify and distribute the model, and whose parameters (weights, model architecture information, model usage information) are made public.
To the extent that general-purpose AI systems can be used as high-risk systems in their own right, or as components of other high-risk AI systems, their providers must cooperate with the providers of the relevant high-risk systems to enable the latter to comply with their obligations.
A stricter regime applies to general-purpose AI posing a systemic risk, that is, AI having a significant impact due to its scope or its actual or reasonably foreseeable adverse effects on public health, safety, public security, fundamental rights or society as a whole, effects that can be widely disseminated along the value chain. Models trained using more than 10^25 floating-point operations (FLOPs) of computing power are presumed to fall into this category. In addition to the transparency requirements, their providers must carry out model assessment and testing, mitigate risks, report serious incidents, ensure cybersecurity and analyse energy consumption.
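The compute presumption above can be expressed as a one-line check. This is an illustration only: the figures in the example and the function name `presumed_systemic_risk` are hypothetical, and the presumption can in practice be assessed against other criteria as well.

```python
# The AI Act presumes systemic risk for general-purpose models trained
# with more than 10^25 floating-point operations of cumulative compute.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25


def presumed_systemic_risk(training_flops: float) -> bool:
    """True if cumulative training compute exceeds the 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# A (hypothetical) model trained with ~5 x 10^25 FLOPs falls under the
# presumption; one trained with 10^24 FLOPs does not.
print(presumed_systemic_risk(5e25))  # → True
print(presumed_systemic_risk(1e24))  # → False
```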
Exemptions
The EU AI Act does not apply to AI systems – whether placed on the market or not – when they are put into service or used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out these activities.
The Act does not apply to AI systems specifically developed and commissioned solely for scientific research and development purposes, nor to their outputs.
Deployers who are natural persons using AI systems in the context of a strictly personal, non-professional activity are exempt.
Finally, AI systems made available under a free or open licence are exempt, unless they are high-risk systems, fall under the prohibited practices, or are subject to the transparency obligations of Article 50.
Innovation support measures
The EU AI Act also includes provisions to support innovation, with a particular focus on SMEs and start-ups that have their head office or a branch in the EU.
While seeking to overcome the potential ethical issues of AI and foster the development of safe and trustworthy AI through binding provisions, the AI Act provides a supportive framework for innovation, so that the obligations imposed on SMEs do not hinder their market access or their ability to innovate.
Planned measures include, for example, priority, free and simplified access to regulatory sandboxes that allow developers to test their AI solutions in real-world conditions. SMEs can also benefit from support regarding compliance with the Act, to help them understand the challenges of standardisation and the technical standards that translate the legal requirements of the AI Act into concrete, measurable criteria that developers can apply.
Other measures for SMEs include a simplification of compliance requirements (e.g. a simplified form for the documentation of high risk AI systems). SMEs can also benefit, via the EU member states, from awareness-raising and training actions on the application of the EU AI Act which are adapted to their needs.
Conclusion
At first glance, the implementation of such a regulation appears to be particularly complex and restrictive for the companies concerned, especially as these AI systems are constantly evolving.
It will be interesting to see how they will be able to comply and to what extent the right balance can be found so that AI is truly a technology at the service of people, in line with European values, overcoming the potential ethical issues of AI without significantly affecting innovation and competition.
For further information on the new EU AI Act or for support on any topic covered in this article, please contact your Novagraaf attorney or contact us below.
Catherine Caspar is a French and European Patent Attorney at Novagraaf in France.