15 May 2023

UK white paper on AI regulation: Is it enough to rein in the robots?

On 29 March 2023, the government published its long-awaited white paper on a pro-innovation approach to regulating artificial intelligence (AI). Coincidentally, on that same day, Elon Musk and over 1,000 AI experts called for an immediate six-month pause in the “out-of-control race” to develop ever more powerful AI, citing concerns over the profound risks it may pose to society and humanity.

In addition to launching a consultation on the white paper, which is open until 21 June 2023, the government has set out a list of the actions that it will take over the next year and beyond (see box "Next steps"). In the first six months, this includes publishing its response to the consultation, issuing cross-sectoral principles and initial guidance to regulators, and publishing an AI regulation roadmap with plans for establishing the central functions of the new regime.

The UK’s approach

Unsurprisingly, the white paper does not propose an overarching AI law, such as that proposed by the European Commission in its Artificial Intelligence Act. It instead follows the sector-specific approach that the government outlined in its policy paper published on 18 July 2022. The thinking behind this approach is that regulators are best placed to understand the risks in their sectors, and so the new regime would enable them to take a proportionate approach to regulating AI that is tailored to their specific area.

That said, the government does recognise the need for some consistency across the different regulators and so the framework will be underpinned by five principles that will apply across all sectors.

The principles

These five principles are:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

These principles build on the AI principles of the Organisation for Economic Co-operation and Development (OECD) and so will already be familiar to many. The white paper sets out a definition, explanation and rationale for each principle. It also lists the factors that regulators may wish to consider when providing guidance on, or implementing, the principles, such as the role of available technical standards.

The government has said that, at least initially, the principles will not be placed on a statutory footing, as requiring businesses to comply with rigid and onerous new legislative obligations could suppress AI innovation and reduce their ability to respond quickly to technological advances. However, after an initial implementation period, the government may introduce a statutory duty on regulators requiring them to have “due regard” to the principles.

Key points

The white paper is 85 pages long and goes into some detail on how the government hopes to craft a regime that provides consistency across sectors while still giving individual regulators flexibility to address the specific risks facing their sectors. Some of the key points to note are set out below. 

Defining AI

While acknowledging that there is no general definition of AI that enjoys widespread consensus, the white paper still offers its own approach. It says that AI should be defined by reference to the two characteristics that generate the need for a bespoke regulatory response: adaptivity and autonomy. These characteristics can make it hard to explain, predict or control the outputs of an AI system, and also create challenges in allocating responsibility for those outputs. The hope is that defining AI in this way, and avoiding blanket new rules for specific technologies, will help to future-proof the regime. However, the government confirms that it will keep the definition under review as part of its ongoing monitoring and iteration of the whole framework.

Context-specific approach

The proposed framework is context-specific and will regulate based on the outcomes that AI is likely to generate in particular applications. For example, an AI-powered chatbot that is used to triage customer service queries for an online retailer should not be regulated in the same way as a similar application that is used as part of a medical diagnostic process.

Regulatory co-ordination

Given the framework's sector-specific approach, and the expectation that regulators will provide guidance and tools relating to the principles, there is much focus in the white paper on the need for regulatory co-ordination. Without it, businesses may face an even more confusing web of guidance and rules than they face now.

Some regulators already co-ordinate; for example, the Digital Regulation Cooperation Forum was established to ensure greater co-operation on online regulatory matters and is already looking at issues relating to AI. However, the government has said that it will step in further to help with that co-ordination. For example, the white paper envisages the government supporting regulators and providing guidance that helps them to implement the principles. It also discusses a suite of centralised functions that are required to support the implementation of the new framework, including:

  • A central monitoring and evaluation framework.
  • A cross-sectoral risk function and register.
  • A multi-regulator AI sandbox, which was a recommendation in Sir Patrick Vallance’s review of digital technologies that was published in March 2023.

Despite assurances in the white paper, there are still concerns about the sector-specific approach. Some commentators have noted that AI is used in areas that are not heavily regulated. Even where regulation is in place, there are fears that regulators may not have the necessary resources and expertise. Some regulators have already made efforts to upskill in relation to AI; for example, the Information Commissioner's Office has produced extensive guidance in this area and updated its main AI guidance in March 2023. However, this is not the case for all regulators.

Standards and AI assurance

The white paper notes the importance of standards and AI assurance in supporting the regulatory framework, a point previously highlighted in the UK’s AI strategy (see our blog for background). It promises the launch of a portfolio of AI assurance techniques in spring 2023. It also discusses a layered approach to AI technical standards, under which regulators identify relevant technical standards and encourage their adoption:

  • Layer 1 would involve sector-agnostic standards that can be applied across use cases, such as risk management.
  • Layer 2 would address specific issues, such as bias and transparency.
  • Layer 3 could involve regulators encouraging the adoption of sector-specific technical standards.

An iterative approach

The government is deliberately taking an iterative approach to AI regulation and will constantly review whether the framework is working. This will include monitoring AI supply chains and whether legal responsibility for AI is effectively and fairly distributed throughout the AI lifecycle. However, given the fast-paced development of AI, some have criticised the time it will take for this approach to develop into a robust and effective regime.

Next steps: short, mid- and long-term government actions

0-6 months:

  • Engage with stakeholders through the consultation period.
  • Publish the government’s response to the consultation.
  • Issue cross-sectoral principles to regulators (with implementation guidance).
  • Publish an AI regulation roadmap with plans for the central functions.
  • Analyse findings from commissioned research (for example, regarding barriers to framework compliance and best practice in measuring and reporting AI risk).

6-12 months:

  • Agree partnerships to deliver the first central functions.
  • Encourage key regulators to publish guidance on how the principles apply within their remit.
  • Publish proposals for the monitoring and evaluation (M&E) framework.
  • Continue to develop the regulatory sandbox.

12 months +:

  • Deliver the first iteration of the central functions.
  • Work with the other regulators to publish guidance on the principles.
  • Publish a draft central, cross-economy risk register.
  • Develop the sandbox, drawing on insights from the pilot.
  • Publish the first M&E report, considering, for example, how the cross-sectoral principles are working and whether statutory intervention is needed.

This article first appeared in PLC Magazine May 2023, written by Rob Sumroy (Partner) and Natalie Donovan (PSL Counsel) of Slaughter and May’s Tech Team. It is part of our Regulating AI series and can be found on our Regulating AI hub.

This material is provided for general information only. It does not constitute legal or other professional advice.
