Since their release in April 2021, the draft AI regulations (the “Regulations”), also referred to as the EU AI Act, have acted as a catalyst for legal, political, and societal developments in the field of AI. Forming the basis of many of the emerging trends for AI in the European Union, the Regulations have received extensive feedback from stakeholders of all forms and functions, often leading to positive developments (as was the case with the very definition of AI).

What remains constant in the shifting sands for AI in Europe is that the Regulations are here to stay and will form the foundations on which man and machine are set to interact. While, understandably, the Regulations contain a substantial set of restrictions, they equally set out a number of initiatives to encourage the use and development of AI. One such initiative is the use of regulatory ‘sandboxes’, which give stakeholders the chance to build and develop in the safety of regulatory isolation, without the substantial repercussions they would otherwise face should they fall foul of the Regulations once enacted.

Taking full advantage of this positive initiative, on 27 June 2022 the government of Spain announced, at an event alongside the European Commission, that it would begin piloting the first (and likely not the last) regulatory sandbox on AI.

What is a regulatory sandbox?

As the European Commission puts it, a regulatory sandbox is a way to connect innovators and regulators by providing a controlled environment in which they can interact and develop their operations. Innovators can develop, test, and validate AI systems with a view to ensuring compliance with the Regulations when they come into effect. Regulators, for their part, can see the bottlenecks and sticking points in how a regulation behaves in its current form. Changes can then be made as needed, informed by extensive feedback and testing carried out by the parties involved. Much like a sandcastle, if a process is consistently incompatible with the Regulations, or the Regulations are found to be unsuitable in practice, they can be sculpted and remoulded to better meet the needs of the EU.

What benefit is expected?

The pilot initiated by the Spanish government will look at operationalising the requirements of the Regulations, alongside other features such as conformity assessments and post-market activities.

The initiative is expected to produce “easy-to-follow, future-proof best practice guidelines”, alongside an array of other practical guides and materials. These are expected to help companies, particularly SMEs and start-ups, reach compliance with the provisions of the Regulations.

It is also expected that the results of the pilot will generate guidance on methods for controlling and monitoring compliance that can be used by each Member State’s national authorities.

When can stakeholders start building?

Testing is set to commence in October 2022, and the results are expected to be published in the second half of 2023. The experience gathered from this sandbox will be presented in the form of newly developed ‘good practices’ and ‘implementation guidelines’, which will be made available to all Member States and the European Commission in anticipation of the full implementation of the Regulations.

Does this matter if I’m not in the EU?

In short, yes.

The Regulations will apply to high-risk AI which is available within the EU, used within the EU, or whose output affects people in the EU. The aim is to reassure and protect the key rights of EU citizens; it is therefore irrelevant whether the provider or user is in the EU. For example, where AI creating data models from patient data is hosted on a server outside the EU, and/or the decisions which the AI makes or enhances are carried out outside the EU, the regime can still apply.

The materials and good practice behaviours that become established through pilots such as this will therefore, with a high degree of certainty, be expected of all parties involved in activities concerning AI and the EU. It would be prudent for organisations matching that description to monitor the development of this pilot, and of future pilots as they emerge.

Find out more

For more information on AI and the emerging legal and regulatory standards, contact the authors or your usual DLA Piper contact, or find out more at DLA Piper’s focus page on AI.

You can find a more detailed guide on the Regulations and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers), you can use DLA Piper’s AI Scorebox tool.

You can find more on AI, technology and the law at Technology’s Legal Edge, DLA Piper’s tech sector blog.

DLA Piper continues to monitor developments in AI and their impact on industry in the UK and abroad. For further information, or if you have any questions, please contact the authors or your usual DLA Piper contact.