AI Act – The European Way Forward
According to the latest news, following months of discussions, members of the European Parliament reached a provisional political deal on the AI Act a few days ago. Neither a final vote of the European Parliament nor related official documentation is publicly available at this stage. However, the legislative process now appears to be approaching the trilogue procedure, and the AI Act may enter into force by the end of 2023.
Revised Definition of AI
According to the latest publicly available news about the discussions, the members of the European Parliament appear to have adopted a definition of AI which largely overlaps with the definition developed by the Organisation for Economic Co-operation and Development (OECD). As such, an AI system shall mean “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments”. Reportedly, additional text in the preamble shall make clear that the definition is “closely aligned with the work of international organizations working on artificial intelligence to ensure legal certainty, harmonization and wide acceptance”.
General Purpose AI Systems and Foundation Models
Following passionate debates on how to deal with AI systems that lack a specific purpose (so-called General Purpose AI), the members of the European Parliament reportedly confirmed the opinion of the European Council in its General Approach on the draft Artificial Intelligence Act of 6 December 2022. Furthermore, the European Parliament agreed on a set of new rules for so-called Foundation Models or Generative AI. These now form a sub-category of General Purpose AI but – in contrast to GPAI – are trained on broad data at scale to achieve a wide range of downstream tasks, including those for which they have not been specifically developed. Prominent examples would be ChatGPT and Stable Diffusion (a latent text-to-image diffusion model). Due to their tendency, and the related risk, to develop a life of their own, Foundation Models shall be governed by stricter rules than conventional General Purpose AI. This includes, for example, stricter requirements on risk management as well as extensive analysis and testing activities.
In addition, Generative AI shall be designed and developed in compliance with the laws applicable in the European Union and its fundamental rights, in particular freedom of speech, and, as such, is required to pass a fundamental rights impact assessment.
Additional Layer for High-risk AI Classification
In terms of classifying high-risk AI, the members of the European Parliament aligned on an additional layer. According to the initial proposal, any AI solution pertaining to the critical categories or use cases listed in Annex III would be classified as high-risk AI and, as such, would have to comply with a stricter regime of requirements on risk management, transparency and data governance. According to the latest news available to the public, AI solutions listed in Annex III shall now only be classified as high-risk AI if the respective solution also poses a significant risk of harm to health, safety or fundamental rights. Such significant risk is defined as “a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons”. The same applies to recommender systems of large online platform providers and to AI solutions used to manage critical infrastructures, such as water management systems or energy grids, if they entail severe environmental risks. Examples of currently recorded high-risk AI systems include:
- critical infrastructures that could endanger the lives or health of individuals (e.g., traffic, transport);
- educational or vocational training, which may influence someone’s access to education and career path (e.g., scoring of exams);
- product safety features (e.g., AI applications in robot-assisted surgery); and
- access to self-employment, worker management and employment (e.g., CV-sorting software for recruitment procedures).
Revised Set of Prohibited Activities
According to the latest information available to the public on the provisional political deal on the AI Act, the following AI systems are prohibited:
- AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior;
- AI systems that exploit vulnerabilities of a specific group of persons due to their age, physical or mental disability;
- ‘Real-time’ remote biometric identification systems in public spaces – such systems shall only be permitted ex post and limited to serious crimes, subject to prior approval by the competent court; and
- AI systems for emotion recognition in border management, at workplaces or in educational institutions.
Additional Safeguards and Sustainability Standards
According to the latest co-rapporteurs’ proposal, additional principles shall apply to AI in general. These principles include respect for human dignity as well as personal autonomy and oversight, technical robustness and safety, privacy and data governance, transparency, social and environmental well-being, diversity, non-discrimination and fairness. According to the co-rapporteurs, these principles shall be implemented through the respective technical standards and documentation, but shall not stipulate additional requirements.
The members of the European Parliament also aligned on extra safeguards against bias where an AI system processes sensitive data, such as sexual orientation or religious beliefs.
Last but not least, high-risk AI systems shall keep records of their environmental footprint, and foundation models shall comply with European environmental standards.
According to the latest news, votes in the Committee on Civil Liberties, Justice and Home Affairs (LIBE) as well as the Internal Market and Consumer Protection Committee (IMCO) are expected on 11 May 2023. A vote in the European Parliament is expected on 12 June 2023. The deliberations between the European Commission, the European Council and the European Parliament (trilogues) are then expected to start on 15 June 2023.
We will monitor any further activities in this regard and keep you posted.