While some dedicated measures may be brought in for the makers of LLM systems, most new rules and compliance work will be led by sector watchdogs
The UK’s regulatory response to artificial intelligence should largely be led in individual industries by sector-specific regulators, a minister has claimed.
But, alongside this work from existing watchdogs, the government is likely to bring in new, dedicated obligations for the companies behind so-called frontier AI systems – a category that includes the likes of OpenAI’s ChatGPT, Google Gemini, and Microsoft Copilot.
Feryal Clark, minister for AI and digital government, said: “The vast majority of AI systems should be regulated at the point of use, and the UK’s existing expert regulators are best placed to do this. The government is committed to ensuring that regulators have the right expertise and resources to make proportionate and effective decisions about AI. The government also intends to introduce targeted requirements on the handful of companies developing the most powerful AI systems. These proposals will build on the voluntary commitments secured at the Seoul and Bletchley AI Summits and will strengthen the role of the AI Safety Institute.”
Clark’s comments – made in response to a written parliamentary question from Liberal Democrat MP Charlotte Cane – come hot on the heels of the publication of the government-commissioned AI Opportunities Action Plan.
The creation of the plan was led by tech entrepreneur Matt Clifford – whose 50 recommendations have all been approved by government.
The strategy advises ministers that “the UK’s current pro-innovation approach to regulation is a source of strength relative to other more regulated jurisdictions and we should be careful to preserve this”.
Labour ministers’ pledge to ensure sector watchdogs are equipped to deal with the impact of AI follows on from similar commitments made by the previous Conservative administration.
A year ago, the government announced a £10m fund intended to support regulators in building their AI expertise. This was accompanied by a request for 13 of the UK’s most significant watchdogs – including Ofsted, Ofgem, the ICO, the Bank of England, and the CMA – to set out their “strategic approach” to the new technology.