Experts warn UK could still be ‘powerless’ to tackle AI risks despite government’s £100m plan


Ministers this week revealed proposals to invest in creating research hubs and boosting regulators’ expertise, but observers believe the plans need to be better supported by legislative and enforcement frameworks

Tech sector experts have warned that the government’s £100m plan to invest in research and regulation of artificial intelligence could still leave the UK “powerless” to tackle the major risks posed by the technology.

As part of the government’s formal response to its public consultation on the white paper released last year outlining a “pro-innovation approach to AI regulation”, ministers this week unveiled plans to invest £10m in improving regulators’ expertise in artificial intelligence.

The response also committed £90m to fund the creation of nine research hubs around the country intended to support the implementation of artificial intelligence in fields such as healthcare, science, and maths. This funding will also support the establishment of a partnership between authorities in the US and UK dedicated to responsible use of AI technologies.

A further £2m will be spent through the Arts and Humanities Research Council to deliver projects examining responsible AI in the context of education, policing, and creative industries, while plans were also put forward to establish a new-look Responsible Technology Adoption Unit – based in the Department for Science, Innovation and Technology – to support organisations in adopting automation and data tools.

But, speaking to PublicTechnology sister publication PoliticsHome, Lord Tim Clement-Jones, the Lib Dem spokesperson for the digital economy in the Lords and co-founder of the All Party Parliamentary Group on AI, said that the government’s response to the white paper still did not address the issue of enforcement among big tech companies.

“Under the [EU] AI act, standards are beginning to converge and the UK is just getting left behind,” he said. “That’s absolutely classic for this government. £100m is a drop in the ocean; OK, so they announced all that stuff but in terms of making sure that people actually do it… I’m fairly relaxed, because I think we’ve missed the boat anyway… at the end of the day, big corporates are not going to worry too much.”

Michael Birtwistle, associate director at the Ada Lovelace Institute, agreed that the government’s proposed framework would be ineffective without legislative support.

“Only hard rules can incentivise developers and deployers of AI to comply and empower regulators to act,” he said. “It’s welcome that government is now open to the option of bringing forward legislation, but making this intervention dependent on industry behaviour and further consultation would fall short of what is needed. It will take a year or longer for a government to pass legislation on AI. In a similar period we have seen general-purpose systems like ChatGPT go from a niche research area to the most quickly adopted digital product in history. We shouldn’t be waiting for companies to stop cooperating or for a Post Office-style scandal to equip government and regulators to react. There is a very real risk that further delay on legislation could leave the UK powerless to prevent AI risks – or even to react effectively after the fact.”

He added that the government’s approach was still too “narrowly focused” on the most advanced AI systems and “reliant on the goodwill of AI companies like Microsoft, Google and Meta”, but welcomed that the government was “evolving” its approach.

Adam Leon Smith FBCS, an AI expert from BCS, The Chartered Institute for IT, told PoliticsHome it was “right” that the government was moving to fund and empower existing regulators, but added that BCS believed AI professionals should be professionally registered and held accountable to “clear standards”.

Read the full version of this article on PublicTechnology sister publication PoliticsHome

Zoe Crowther and PublicTechnology staff
