On 6 December 2022, the Council of the European Union (“EU Council”) announced that it had adopted its common position (a “general approach”) on the proposed EU AI Act (“AI Act”), the draft regulation intended to establish a harmonised regime for regulating AI in the EU.

The adoption is the latest step by the EU towards a harmonised approach to regulation since the announcement of the original wording in April 2021, alongside other prominent developments in the area of AI (for example, the proposal of a new liability regime to address the issues created by complex technologies and supply chains).

While the revised general approach (“General Approach”) adopted by the EU Council amends much of the original wording, the primary areas of amendment are as follows.

Definition of an AI System

One of the most discussed issues with the original text of the AI Act (as seen in our earlier post) concerned what constitutes AI or an AI system. The original definition from April 2021 offered a helpful starting point, but was so broad that it risked catching a large number of more straightforward systems within the regulatory net. In November 2021, the definition was revised to list a variety of technologies and techniques, which helpfully narrowed the scope.

While a step in the right direction, that amendment still cast the regulatory net over a wide array of technologies, subjecting a far greater range of systems and operators to the strict terms of the AI Act. In acknowledgement of this, the finalised General Approach narrows the definition further, distinguishing simpler software systems from machine learning and logic-/knowledge-based systems, meaning that the regulation will be more appropriately targeted.

Prohibited AI Practices

For the purposes of the AI Act, the ‘prohibited practices’ are a list of uses of AI systems that are banned (subject to a few very limited exceptions). Uses are largely prohibited where the AI would take undue advantage of certain categories of people or manipulate behaviour in an unethical manner.

In earlier drafts of the AI Act, public authorities were prohibited from using forms of social scoring, such as social credit ratings. In this latest approach, the ban extends to private organisations, thereby preventing situations akin to the ‘Nosedive’ episode of Black Mirror, in which a social score dictates much of citizens’ interactions and their ability to access certain services and roles.

The updated approach also expands the prohibition on exploiting the vulnerabilities of certain groups by including vulnerabilities arising from a person’s social or economic situation, thereby reducing the risk that AI might be used to exploit people in desperate circumstances. Previously, the text had limited these protected classes to more familiar categories, such as children and those with disabilities.

Of interest to those working in the areas of civil liberties, privacy, and human rights, the updated text also clarifies the objectives for which law enforcement may use real-time biometric identification systems (such as facial recognition systems) in public spaces. This is intended to restrict abuse of this ability and to limit use of these systems to a small number of exceptional cases.

Classification of AI systems as high-risk

As in previous iterations of the text, the revised General Approach continues to classify AI systems based on their risk profile. High-risk AI systems are generally categorised as such because of their safety implications, or because they have the potential to materially impact the fundamental rights of natural persons. This includes AI systems used in credit scoring and decision-making, AI included in certain medical devices (such as advanced cancer-scanning machines), and AI used as part of insurance decision-making processes.

The latest revision of the text by the EU Council acknowledges that many AI systems that are high-risk under the standard definition may not in practice pose substantial risks to the rights of persons. The EU Council has therefore added a horizontal layer of sorts on top of the classification, which allows for a contextual easing of requirements in certain circumstances. This ensures that where a system is unlikely to cause serious fundamental rights violations or other significant risks to persons, it is not caught, and is therefore not subjected to the strict obligations that would otherwise apply.

Requirements for high-risk AI systems

Previous versions of the proposed AI Act set out several requirements that stakeholders (including importers, distributors, and users) would be required to follow. Failure to do so could result in heavy penalties of up to €30 million or 6% of global annual turnover.

As many AI systems are developed and distributed through a complex network of participants, it is often difficult to establish exactly who should be doing what, and to what standard. The amended text seeks to rectify this by clarifying many of the roles played by stakeholders in the AI value chain that were previously confusing or open to interpretation. Other requirements have also been clarified to make them more technically feasible, such as those relating to data quality and to the technical documentation required to demonstrate compliance.

The amended text also clarifies the relationship between the obligations imposed under the AI Act and those set out under existing legislation – a clarification which will likely inform its interpretation alongside new and complementary regulations, such as the proposed AI liability regime mentioned above.

Transparency and other provisions for affected persons

Several changes included in the General Approach are intended to increase transparency around the use of high-risk AI systems. As the EU Council notes, certain provisions have been updated to require some public-entity users of AI to register in the EU database for high-risk AI systems, allowing for sufficient market oversight.

The revised text also includes a new provision requiring those who use emotion recognition systems to inform the natural persons being exposed to such a system.

The text goes on to clarify that both natural and legal persons may complain to the appropriate market surveillance authority in the relevant jurisdiction regarding any suspected non-compliance. These complaints will then be handled in accordance with the protocols of that specific authority.

General purpose AI systems

Obligations relating to operators of general purpose AI systems (those that can be used for several different purposes, such as speech/audio recognition software) have also changed in the latest version of the EU Council’s position. These amendments also target general purpose technology that can subsequently be integrated into another high-risk system, such as image recognition software that can be used within live facial recognition systems.

The settled position states that, in the case of general purpose AI systems, a contextual approach would be taken. Rather than the rules for high-risk AI systems applying directly, an implementing act would specify how they should be applied, based on a detailed impact assessment of the use, the specific characteristics of the systems, technical feasibility, and developments in technology.

Scope and provisions relating to law enforcement

One of the most discussed aspects of the previous versions of the text was the set of provisions relating to law enforcement authorities. Debate has been fuelled by, among other things, the desire to balance the ability of law enforcement personnel to do their job in a world of advancing technology against the need to ensure we do not allow the development of an Orwellian level of surveillance.

In this latest version, explicit reference has been made to the previous exclusions permitting otherwise high-risk or prohibited uses in the context of national security, defence, and military purposes. Several changes have been implemented to ensure that, while exceptions apply to use in these contexts, that use must be subject to safeguards appropriate to its sensitive nature and pervasiveness.

Compliance framework and AI Board

The text acknowledges that several of the obligations it contains, in their current form, create compliance challenges. In response, the latest text clarifies and simplifies the provisions governing conformity assessment procedures. It also clarifies several provisions relating to market surveillance. These edits aim to allow regulators to implement controls more easily across their respective jurisdictions and functions.

The new General Approach also aims to ensure greater autonomy for the AI Board. To that end, the revised text includes provisions to strengthen the Board’s role in the governance of the wider architecture of the AI Act. To ensure that the Board continues to involve stakeholders in decisions on implementation, the new provisions also require the establishment of a permanent subgroup to discuss implementing and delegated acts.

Measures in support of innovation

The latest version of the text is not exclusively focused on amending the prescriptive elements of the AI Act. The General Approach also seeks to bolster the provisions encouraging innovation within the bloc.

For example, it has been clarified that AI regulatory sandboxes, such as the first such sandbox established in Spain in June 2022, should allow AI systems to be tested in certain real-world conditions as well as in very specific controlled environments. The new provisions go on to permit certain instances of unsupervised real-world testing, subject to specific conditions and safeguards.

It has also been highlighted that the proposed text should not apply to AI systems and their outputs used solely for research and development in an academic and non-professional capacity, thereby encouraging those who seek to experiment with the technology in areas that would otherwise be deemed prohibited.

Perhaps most interesting to smaller organisations, the text also now includes several actions that can be taken to encourage smaller operators to take advantage of these initiatives, thereby reducing barriers to entry.

Amendments to penalties for SMEs and start-ups

One concern raised in respect of the earlier texts was that the penalties smaller organisations would face if found in breach of the AI Act risked stifling innovation.

In the General Approach, the EU Council has attempted to quell these concerns (while ensuring organisations remain aware of the severity of non-compliance) by providing for more proportionate caps on the fines that smaller organisations can face. This has been done by introducing additional criteria that must be considered when such fines are handed down, such as whether the infringement was intentional or negligent in character.

What comes next?

The adoption of this latest approach by the EU Council will allow negotiations with the European Parliament to commence once the Parliament has adopted its own position. The outcome of those negotiations will be the near-final, if not final, wording to be used across the EU.

Find out more

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

You can find a more detailed guide on the AI Act and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers), you can use DLA Piper’s AI Scorebox tool.

You can find more on AI and the law at Technology’s Legal Edge, DLA Piper’s tech-sector blog.

DLA Piper continues to monitor updates and developments of AI and its impacts on industry across the world. For further information or if you have any questions, please contact the authors or your usual DLA Piper contact.