
On 29 March 2023, the UK Government (“Government”) published its long-awaited white paper (“Paper”), setting out the Government’s proposals to govern and regulate artificial intelligence (“AI”). Headed “A Pro-Innovation Approach”, the Paper recognises the importance of establishing a framework that builds trust and confidence in the responsible use of AI within the UK, while also acknowledging the risk that ‘overbearing’ regulation may adversely impact innovation and investment. Further details of the Paper can be found in DLA Piper’s earlier analysis of the proposals.

The Information Commissioner’s Office and its role in AI

On 11 April 2023, the Information Commissioner’s Office (“ICO”) issued its response to the Government’s consultation on the Paper.

The ICO recognises the criticality of AI to the UK’s prosperity, and the potential for regulation to steer its development in the right direction. Equally, the ICO recognises AI’s inextricable relationship with personal data. It is therefore no surprise that the ICO acknowledges that it plays a central role in the governance of AI. However, the ICO also recognises the important role that other UK regulators play in governing the use and development of AI in different sectors and contexts.

Comments, Concerns, and Considerations: ICO Response to AI White Paper Proposals

The Role of Regulators:

The ICO’s response to the regulator-led approach in the Paper is largely positive – the ICO welcomes the Government’s intention to have regulators create joint guidance and ‘sandboxes’ for organisations. However, as regulators must create guidance and advice in alignment with the laws they oversee, separately from the Government, the ICO has requested clarification of the specific roles that regulators and the Government will each play in issuing guidance and advice, particularly where there is potential for overlapping or contradictory areas of oversight. The ICO proposes that one way to mitigate this is through collaborative initiatives, such as the Digital Regulation Cooperation Forum (DRCF), which already plays an active role in analysing the impacts of AI across its member regulators’ sectors.

Statutory Duties and AI Principles:

One of the primary aspects of the Paper is the creation of a set of principles to which regulators should have due regard when implementing their guidance and monitoring their respective sectors. The ICO concludes that these principles “map closely to those found within the UK data protection framework”. However, the ICO recognises the need to work closely with the Government to ensure the Paper’s principles can be interpreted in a way that is compatible with data protection principles, “to avoid creating additional burden or complexity for businesses”.

The ICO provides specific comments which it hopes will help bring about consistency:

  • Fairness: The ICO states that the principle of fairness, in similar fashion to the data protection fairness principle, should apply throughout the development of an AI system as well as its use. The definition of the principle should therefore specifically include reference to development as part of its requirements.
  • Contestability and redress: Under this principle, regulators will be expected to clarify methods of contesting outputs or decisions made by AI, and avenues of redress should things go wrong. As the ICO notes, it is typically the organisation using the AI system, not the regulator, that is expected to clarify these details. The ICO therefore requests clarity in relation to regulators’ roles in this area.
  • Interactions with UK GDPR Article 22: Under the Paper’s proposals, in instances of automated decisions that have a legal or similarly significant effect on a person, regulators must consider the suitability of requiring AI system operators to provide an appropriate justification for that decision to affected parties. However, where Article 22 UK GDPR is engaged, this justification is required, rather than merely a matter for consideration. The ICO therefore requests clarification on this point, to ensure that it does not create confusion or contradictory standards. While the Paper acknowledges that these types of conflict may emerge, the ICO concludes that, given that a substantial proportion of AI systems will process personal data, it is important for regulators to interpret the principles in the Paper in a way that is compatible with their meaning under UK data protection law.

Format of Guidance:

The Paper proposes that regulators should work together, where possible, to produce joint guidance. The ICO recommends that the Government prioritise research into the types of guidance from which organisations involved in AI would benefit most.

Sandboxes:

In similar fashion to those we have seen in the EU, the Paper also proposes the creation of a joint regulatory sandbox for AI. The ICO recognises that this will provide clarity to AI developers on how the law will apply to varying use cases and assist with innovation and investment.

As in the case of guidance, the ICO notes that it would be prudent for the Government to carry out further research to determine where a sandbox would add the most value for AI developers.

Based on its own experience of operating regulatory sandboxes, the ICO recommends the following:

  • Scope of support: the ICO recommends that the scope of any sandbox should be extended to include all digital innovation, rather than just AI, noting that it is unlikely that innovators will be strictly limited to AI and that they will want to progress a much wider family of technologies under the oversight of regulators.
  • Depth of support: the ICO states that timely support should be given, with the aim of providing clarification to businesses on the relevant laws. More intensive and thorough testing in such an environment may limit the number of businesses that can be helped, and may only benefit businesses subject to very specific regulatory authorisation, such as those in the medical devices sector. A balance should therefore be struck to assist as many organisations as possible.
  • Prioritisation of support: The ICO proposes that support should be prioritised in accordance with international best practice, focusing on: i) the degree of potential innovation; ii) the degree of regulatory barriers faced and support required; and iii) the potential for wider economic, social, and/or environmental benefit.

Cost Implications:

The ICO recognises that the proposals in the Paper will impose new and additional costs on cross-economy regulators, including the ICO, which will now need to produce products tailored to different sectoral contexts in coordination with other relevant AI regulators. This may understandably strain certain regulatory budgets, and the ICO therefore welcomes discussions with the Government to enable the Paper’s wider proposals to succeed.

Find out more

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

DLA Piper continues to monitor updates and developments of AI and its impacts on industry across the world. For further information or if you have any questions, please contact your usual DLA Piper contact.