As AI intersects with ever more areas of everyday life, legislators across the world are increasingly focused on ensuring that intersection does not become unwelcome intrusion. In recent months the EU has taken centre stage with the release of drafts of its proposed EU AI regulation (the “EU Regulation”). On the other side of the Atlantic, the US Federal Trade Commission (the “FTC”) has announced its intent to hold companies using discriminatory algorithms accountable under its existing authority, and new proposals are gaining traction in the US Congress that would formalise such requirements and grant the FTC jurisdiction to oversee compliance and enforce penalties for violations.

Until recently, there were few specific hints as to the approach that the UK might take to regulating AI, the UK having published only high-level documents such as the National AI Strategy released in September 2021. However, on 18 July 2022 the UK Government published a Policy Paper, ‘Establishing a pro-innovation approach to regulating AI’ (the “Policy Paper”), alongside its AI Action Plan. For the detectives among us waiting in anticipation for the AI White Paper that the UK Government will release later this year (the “White Paper”), the Policy Paper at last offers some tangible clues about the future regulation of AI in the UK.

As a certain fictional Belgian detective might ask: can we use our ‘little grey cells’ to work out how intellect residing in ‘little silicon cells’ will be regulated in the near future?

Depth of Denial: ‘Core characteristics’ approach versus a detailed definition of AI

One of the most notable aspects of the Policy Paper is the UK Government’s approach to understanding and categorising AI itself. Rather than working to a technology-based definition of AI, which is the approach taken by the EU Regulation, the UK seems set to follow a principles-based approach to regulating AI.

The Policy Paper cites two axes for measuring whether an AI system presents a risk and therefore ought to be more closely regulated:

  • Autonomy: a system operating with relatively little oversight might more easily create mischief before it is spotted; and
  • Adaptiveness: a system that has learned behaviours for itself may be less easily explained or intuitively understood, and therefore more likely to make capricious or arbitrary decisions.

These core principles will be applied to inform the scope of the UK’s future regulatory framework. The primary benefit of this approach is that it allows regulators to develop their own sector-specific definitions to meet the evolving nature of AI as the technology advances. Regulators will be invited to apply these two principles in addition to sector-specific factors.

The Policy Paper acknowledges that a context-driven approach may lead to less uniformity between regulators than a more rigid approach such as that advocated by the EU. Left unchecked, this could cause duplicated effort for AI operators, who will potentially need to consider the regimes applied by multiple regulators as well as the measures required to comply with regimes with extraterritorial scope such as the EU Regulation. To address the risk of a multiplicity of overlapping (and potentially contradictory) regulatory regimes, the Policy Paper puts forward a set of ‘overarching principles’ designed to ensure that similar challenges can be approached uniformly across all sectors. Whilst this is a good start, cooperation between regulators will undoubtedly still be needed, both domestically (through initiatives such as the Digital Regulation Cooperation Forum in the UK) and internationally. Through such cooperation, regulators can hope to ensure a harmonious approach that does not require international businesses to navigate a labyrinth of differing AI requirements.

It seems reasonable to expect that these core principles will be fleshed out in greater detail within the White Paper. Whilst the specifics of the approach remain subject to speculation, it would be prudent for organisations to consider, at this stage, the examples set out in the Policy Paper in the context of their own business, as a means of predicting whether they face significant remediation efforts as we move from outline policy towards regulation in the near future.

A Mysterious Affair of Style: New pro-innovation approach through cross-sectoral principles

The Policy Paper lays out a set of early proposals for the cross-sectoral principles that should govern AI use (many of which build on the OECD ‘Principles on Artificial Intelligence’). The proposed principles are “deliberately ‘values’ focused”, demonstrating that the UK Government wants to ensure that AI-driven growth is in line with the wider values of the UK. An example provided in the Policy Paper is the expectation that well-governed AI should be used with due consideration to the concepts of fairness and transparency. Whilst the proposed principles are to be interpreted individually by regulators, the UK Government is also considering how it can offer a “strong steer” to regulators to adopt a proportionate and risk-based approach to ensure that, although there may be some sector-specific differences, regulators will be aligned in their end goal.

The proposed principles include the following:

Ensuring that AI is used safely

The Policy Paper acknowledges that AI may perform functions and produce outcomes that significantly impact the safety of those with whom it interacts. Safety is therefore expected to be a core concern for some regulators, particularly those involved with sensitive data or where physical harm may occur. Ensuring safety in AI may therefore require new approaches and ways of thinking to successfully balance the benefits of AI against the potential risks of its use.

The UK Government intends that this should be approached contextually and that regulators should assess the potential risks of AI within their domain. It could, for example, be more appropriate to impose tighter obligations on users of AI where there is a risk of physical harm, such as in factories and production lines. It is therefore likely that the White Paper will seek to implement a similar requirement, whereby the greater the risk, the greater the expectation that the regulator will impose checks and safeguards on the use of, and interaction with, AI. The specific methods of implementation will likely be influenced by responses to the consultation, but standards-based approaches are often favoured for safety-driven regulation.

Ensuring that AI is technically secure and functions as designed

The Policy Paper highlights that physically protecting users is not, by itself, sufficient to ensure that AI is safe. AI must also be technically secure, functioning as intended within the scope of its use. Consumers and the public must have confidence that the AI they interact with is behaving as intended, both to build trust and to ensure that the commercialisation of AI within organisations can continue at speed.

AI will therefore likely be required to have inbuilt resilience and security, much in the same way that current computing systems are held to specific standards. Testing, deployment and audit regimes are equally likely to feature in the White Paper, ensuring that functionality can be monitored and approved throughout the lifecycle of an AI system’s use.

Making sure that AI is appropriately transparent and explainable

In any sufficiently complex system, the rationale for a particular outcome or decision can be difficult to divine. For AI systems, decisions can appear to emerge from an unknowable ‘black box’, underscoring the importance of measures to ensure a degree of transparency and explainability for those affected by the outputs of such systems. This goes to the heart of questions of fairness and trust for those interacting with AI systems. The Policy Paper notes that in high-risk situations, regulators may decide that decisions which cannot be fully explained should be prohibited – as in the case of tribunal and court decisions.

It is therefore likely that the White Paper will seek to expand on this by setting minimum standards of explainability, in a similar way to those set out in the EU Regulation. The benefit of this is that the standard could be set relatively low, with the option for regulators to implement higher standards (whether through their own insight or with the assistance of international standards) where decisions may have more serious consequences (as in the tribunals example given in the Policy Paper). This need for explainability does, however, sit in tension with the need to protect valuable trade secrets in some contexts. It is therefore likely that the White Paper will look at strengthening protections, such as IP rights, to ensure creators have sufficient motive to continue to innovate. We may therefore see the strengthening of existing protective measures for AI in addition to new regimes that balance the overall benefits of transparency and explainability against measures preserving the creative motivations of developers.

Embedding considerations of fairness into AI

The Policy Paper acknowledges that AI has the potential to significantly impact the lives of those subject to its decisions. Examples that spring to mind include insurance, credit scoring, and hiring processes. It is therefore necessary that the outcomes of AI are fair, as well as safe and transparent – no trust can be built if decisions are arbitrary or unjustified.

The UK Government therefore expects regulators to ensure fairness is upheld within their designated sectors by, in particular:

  • interpreting and articulating examples of ‘fairness’ that are relevant to their sector or domain;
  • deciding in which contexts and specific instances fairness is important and relevant (which it may not always be); and
  • designing, implementing, and enforcing appropriate governance requirements for ‘fairness’ as applicable to the entities that they regulate.

The cunning detective might therefore deduce that the White Paper may include greater detail and guidance on how regulators will govern ‘fairness’. At least two distinct approaches appear possible. The first is the contextual, sector-specific approach that we have seen throughout the Policy Paper and previous publications of the UK Government’s AI arm, with broad freedom for regulators to determine the best approach. Whilst this allows regulators to target particular areas of concern, it raises issues of conflicting standards where regulatory remits overlap: how would this work for those involved in industrial or medical software systems, for example? The alternative would be a more regimented approach, like that of the EU Regulation, whereby a minimum standard (such as those found in the Official Journal of the European Union or BSI Standards) would be applied across sectors, with regulators then free to tailor tougher standards and approaches based on how invasive the AI within their regulatory domain may be.

Defining legal persons’ responsibility for AI governance

An obvious double-edged sword of AI is that many systems can operate with a high degree of autonomy, making decisions or producing outcomes that have not been explicitly programmed. The question of who is responsible for those decisions and outcomes therefore comes to light.

Due to the difficulty in assigning responsibility for an AI’s actions in some contexts, the White Paper may seek to follow a similar approach to that seen in various other UK Government initiatives. For example, the UK Law Commission has recommended that, in the case of self-driving cars, responsibility for road safety should shift from drivers to manufacturers and operators. If the White Paper follows suit, the UK may see the emergence of new business environments (such as insurance for manufacturers of automated systems) that could offer new opportunities for AI-focused businesses to grow. The Policy Paper provides scant clues about how any new laws might approach this question, however, so we are, alas, thwarted in our deduction of this issue for now.

Clarifying routes to redress or contestability

While the above overarching principles offer several beneficial protections to those impacted by AI, they are of little use without a means of redress or contest when things go wrong. The UK Government therefore seeks to ensure that, subject to context and proportionality, decisions made by AI should not be immune from appeal. It is therefore expected that regulators will implement proportionate measures to ensure contestability of outcomes in regulated situations.

Orientation Expressed: Everyone has a motive in consultation

Over the coming months, the UK Government intends to determine the best method of implementing and refining its approach to AI. In particular, it seeks to:

  • consider the proposed framework and whether it adequately addresses the UK Government’s prioritised AI-specific risks;
  • evaluate how the approach can be put into practice, including establishing regulators’ roles, powers and remits, and whether an institutional architecture is needed to oversee the landscape as a whole; and
  • monitor the framework to ensure it delivers on the UK Government’s vision for regulating AI in the UK.

To assist with this process the UK Government invites stakeholders to provide reflections on its proposed approach as set out in the Policy Paper. Responses to the consultation will be used to influence the forthcoming White Paper that is set to be published later this year. The UK Government is particularly interested in views (alongside any supporting evidence available to be shared) with respect to the challenges posed by the approach outlined in the Policy Paper and whether or not establishing a set of cross-sectoral principles is a suitable approach, amongst other things.

The call for views and evidence will be open for a period of 10 weeks and will close on 26 September 2022.

Sparkling AI site: Get in touch 

We hope you’ve enjoyed working through the clues left by the Policy Paper with us. For more information on AI and the emerging legal and regulatory standards visit DLA Piper’s focus page on AI.

You can find a more detailed guide on the AI Regulation and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers), you can use DLA Piper’s AI Scorebox tool.

You can find more on AI, technology and the law at Technology’s Legal Edge, DLA Piper’s tech-sector blog.

DLA Piper continues to monitor updates and developments of AI and its impacts on industry in the UK and abroad. For further information or if you have any questions, please contact the authors or your usual DLA Piper contact.