Action hero? Experts examine government’s grand AI plan


The government’s AI Opportunities Action Plan sets out 50 measures intended to transform public services and supercharge the economy. PublicTechnology gathers feedback on whether the strategy is artifice or intelligence.

“The defining opportunity of our generation.”

This is how prime minister Keir Starmer characterises artificial intelligence in the UK’s major new plan to progress its use of the technology throughout the country.

As the plan makes clear though, this is not just a single opportunity. To begin with, there are at least 50 courses of action for the UK to pursue – as per the recommendations made by tech entrepreneur Matt Clifford, who was commissioned by government shortly after last year’s election to compile the AI Opportunities Action Plan. In its response – published in January alongside the plan – government formally signalled its approval of all 50 recommendations, covering skills, computing infrastructure, business, the public sector, and legislation.

As the PM puts it: “In the coming years, there is barely an aspect of our society that will remain untouched by this force of change. But this government will not sit back passively and wait for change to come.”

Experts spoken to by PublicTechnology for this piece largely welcome this statement of intent from the very top of government, but some question whether it amounts to much more than that.

Dr Richard Whittle, a fellow at the University of Salford’s Business School, says of the Action Plan: “I do not really see that we can call it a plan.”

“AI has strong potential for public services, for example, but it goes from A to C, without telling us about B,” he adds. “It is a statement of intent – which can be useful – but, as a plan itself, it does not tell us how it is going to get there, and it feels a little rushed.”

Whittle adds that the elements that might constitute the missing ‘B’ include more specific use cases, and greater insight into the type of AI models to be pursued – whether those developed directly by government based on open-source code, or proprietary systems built by vendors.

Antony Walker, deputy chief executive of techUK, says that the AI plan “demonstrates the government’s clear recognition of AI as central to their plan for change, with several well-thought-out initiatives that could significantly boost the UK’s AI capabilities”.

“The upcoming Spending Review will be a pivotal moment for turning these ambitions into reality, however, it is essential that the government provides sustained, long-term funding commitments beyond this single review to ensure stability and continued progress.”

Antony Walker, techUK

“However, the critical factor now is implementation pace and clarity,” he adds. “While the plan’s direction is promising, industry will be looking for more detailed information about how these initiatives will be actioned within the next six months. This is particularly crucial given the growing international competition in AI development. The upcoming Spending Review will be a pivotal moment for turning these ambitions into reality, however, it is essential that the government provides sustained, long-term funding commitments beyond this single review to ensure stability and continued progress.”

Dr Martin Wählisch, associate professor of transformative technologies, innovation, and global affairs at the University of Birmingham’s School of Government, also highlights the importance of continued focus – and sufficient financial support.

“Is the [UK’s] infrastructure going to be enough, and is the investment going to come quickly enough?” he says. “There needs to be more investment to make this real, otherwise it is just going to be a placeholder. The first challenge is not just to make this a priority, but to keep it a priority.”

A range of recommendations
Proposals that the government intends to make real over the coming months and years include the establishment of regional AI Growth Zones. These will be “areas with enhanced access to power and support for planning approvals, to accelerate the build out of AI infrastructure on UK soil”.

Also set out in the plan is an intention to create an AI Energy Council to seek out “clean and renewable energy solutions” to power new technology infrastructure. The AI strategy reiterates the Labour manifesto pledge to establish a National Data Library with a remit to “responsibly, securely and ethically unlock the value of public sector data assets to support AI research and innovation”.

The other headline ambitions of the plan are “creating a strong talent pipeline and ensuring we address wider skills demands” and “ensuring we have the right regulatory regime that addresses risks and actively supports innovation”.

Across the detail of the 50 individual recommendations, there are six measures dedicated to improving national tech infrastructure, including a pledge to provide, by the summer, “a long-term plan for UK’s AI infrastructure needs, backed by a 10-year investment commitment”.

The next seven recommendations focus on “unlocking data assets in the public and private sector”, beginning with efforts to “rapidly identify at least five high-impact public data sets” to be made available to researchers via the National Data Library.

Nine recommendations are committed to “training, attracting and retaining the next generation of AI scientists and founders”. This includes initial work to “accurately assess the size of the skills gap”, before moving on to “launch a flagship undergraduate and master’s AI scholarship programme” and “establish an internal headhunting capability on a par with top AI firms to bring a small number of truly elite individuals to the UK”.

The area of “regulation, safety and assurance” is covered by the next eight recommendations, starting with a call to “continue to support the AI Safety Institute to maintain and expand its research on model evaluations, foundational safety and societal resilience research”.

In the weeks since the publication of the plan, the institute – which is based in the Department for Science, Innovation and Technology – has been renamed the AI Security Institute and its remit has been slimmed down, with the cessation of its work related to the impact of AI on bias and free speech.

Also featured in the Action Plan are two sets of recommendations setting out measures respectively intended to “enable public and private sectors to reinforce each other” and “address private-sector-user adoption barriers”.

Featured here is a commitment from government to “procure smartly from the AI ecosystem as both its largest customer and as a market shaper”, as well as a pledge to “drive AI adoption across the whole country”.


50
Number of recommendations in the Action Plan – all approved by the government

10 years
Length of ‘compute roadmap’ and investment commitment government has pledged to publish this summer

1,900%
Planned growth of UK’s sovereign computing capacity by the end of this decade

£25bn
Money invested by the private sector in UK datacentres since the 2024 election, according to government

500MW
Electricity to be made available for datacentres in designated AI Growth Zones, including one centred on the Oxfordshire HQ of the UK Atomic Energy Authority


The final section comprises a single recommendation – which has already been fulfilled – to create a new Sovereign AI Unit within government. This unit will partner with AI firms and, in some cases, invest financially in companies.

The largest set of recommendations, however, covers the intention to “Adopt a ‘Scan > Pilot > Scale’ approach in government”.

This includes the appointment of a dedicated lead to seek out potential roles for AI in each of the government’s five ‘missions’, as well as a commitment to “build a cross-government technical horizon scanning and market intelligence capability” in the new-look Government Digital Service in DSIT.

GDS will also house a new “rapid prototyping capability that can be drawn on for key projects”, while departments will be given “specific support to hire external AI talent”.

DSIT will lead a trial of the “consistent use of a framework for how to source AI – whether to build in-house, buy or run innovation challenges – that evolves over time”. This will be supported by the implementation of a “faster, multi-stage gated and scaling AI procurement process that enables easy and quick access to small-scale funding for pilots and only layers bureaucratic controls as the investment-size gets larger”.


Public vs private
Martin Ferguson, policy and research director at Socitm – the membership body for public sector digital, data and IT professionals – tells PublicTechnology that, “while there are certainly some good points” in the AI plan, there are numerous “missed opportunities”, particularly in its proposals for the public sector.


“[Collaboration] is talked about only in terms of cooperation with industry – they don’t talk about a collaborative approach with public services, for example, or local government. And that’s a shortcoming,” he says. “It kind of assumes the private sector will be the source of all innovation.”

Dr Whittle from Salford adds that the plan suggests government is expecting rather too much of AI as a panacea for the challenges facing public services.

“There is a very wide perception, one that is being emphasised by the UK government, that, after a multi-decade underinvestment in skills and infrastructure, [issues] are going to be solved easily and cheaply by AI,” he says. “Deploying AI to help the public sector is seen as a deus ex machina.”

Whittle adds: “Public sector efficiency and productivity is not going to be solved by using large language models (LLMs) to [process] emails.”

Others are more optimistic about the strategy’s potential impact on the public sector.

Walker from techUK says: “The plan shows significant promise for transforming public services. The appointment of dedicated AI leads for each government mission is a particularly welcome step, ensuring that AI and technology considerations are embedded in decision-making across public service reform efforts.”

Shortly after the publication of the plan, Peter Mason, leader of Ealing Council, visited 10 Downing Street alongside one of the authority’s social workers to showcase an AI-powered note-taking tool being used in the London borough.

Mason says that this visit and the AI plan both speak to a wider trend of Whitehall and councils working more closely together – following “the past five years [in which] there has been a bit of a disconnect between local and central government”.

“This is quite an exciting time, as the relationship has changed between local government and national government and we have a much more design-centric and solution-centric set of people [in government],” he says.

The council leader says that, while use cases like those demonstrated in Ealing can be powerful exemplars, it is also useful to have the kind of grand strategic overview provided by the Action Plan.

“Like all things, we need to have a big vision as well as the examples of how it can work,” he says. “People need to understand in abstract, as well as in practice.”

But, according to another expert from Salford’s Business School – Dr Gordon Fletcher, associate dean of research and innovation – the plan would benefit from many more practical examples, particularly of operational use cases.

“The challenge really is about having a… level of public awareness of what [AI] is capable of doing,” he says. “Often the examples that are used, even within the media, are about [things like] automating email – that are almost trivial examples. It does not help the public awareness of what the real power of the technology is – which is the real thorough automation of processes. There are so many tasks that are highly repetitive, and that consume not just minutes, but hours of a day, or hours of a person’s working year.”

To illustrate this kind of potential, government needs to do more to perpetuate “very tight, specific examples that can fit into a couple of sentences”, Fletcher says.


Sovereign state?
According to several experts quoted in this piece, one of the biggest questions facing government in its delivery of the AI Opportunities Action Plan is the extent to which it pursues development of its own technology – often referred to as ‘sovereign AI’ – or relies on systems created and owned by tech firms.

This choice may be complicated by two significant recent developments: the release of the open-source large language model from Chinese firm DeepSeek, reportedly developed at about one-twentieth of the cost of OpenAI’s ChatGPT; and a $97.4bn bid from Elon Musk to acquire OpenAI – whose technology is the basis for the GOV.UK Chat online chatbot tool.

Dr Whittle from Salford suggests that organisations’ decisions about the kind of AI to use will come to be weighted with greater geopolitical significance and complexity and may, ultimately, equate to “picking a side”.

While the open availability of the code powering DeepSeek seemingly offers a much cheaper and easier means of developing LLMs from scratch than might otherwise have been thought possible, his colleague Dr Fletcher adds that “government’s taste for open source represents a challenge” to any potential deployment in Whitehall.

In the short term, the UK government – via its new Sovereign AI unit – recently entered into a memorandum of understanding to collaborate not with one of these market giants but with Anthropic, a Californian start-up established in 2021 as a public benefit corporation. The company has created its own AI assistant, Claude, and also operates an arm dedicated to “generating research about the opportunities and risks of AI”.

Its engagement with government in the coming months “will include sharing insights on how AI can transform public services and improve the lives of citizens, as well as using this transformative technology to drive new scientific breakthroughs”, according to DSIT.

Whichever AI technologies are deployed in the coming years, and however they are used, experts cite several core factors that will determine the success of government’s grand vision.

Dr Wählisch from Birmingham says that many of the state’s ambitions need “a deeper social dialogue about technology in the 21st century”, one that is “not just limited to the groups that want to contribute, but includes all groups of society”.

“AI has strong potential for public services, but the plan goes from A to C, without telling us about B; it is a statement of intent – which can be useful – but, as a plan itself, it does not tell us how it is going to get there, and it feels a little rushed.”

Dr Richard Whittle, University of Salford

Walker, deputy chief executive of techUK, suggests that the achievements of the plan can, in part, be measured against a baseline of a “UK AI industry [that] includes over 3,000 companies, generating £14bn in revenue and contributing £5.8bn gross value added”.

“Beyond these headline figures, success should be measured through the tangible delivery of key initiatives,” Walker adds. “This includes progress toward the 20-fold expansion in compute capacity, the successful establishment of AI Growth Zones, and the implementation of the National Data Library. The AI Energy Council’s effectiveness in addressing the energy challenges of AI deployment will be another crucial indicator.”

These may constitute ambitious objectives but, returning to Starmer’s foreword to the plan, the prime minister claims that “we start from a position of strength”.

“Nonetheless, this race is speeding up and we must continue to move fast,” he adds. “This is a unique chance to boost growth, raise living standards, transform public services, create the companies of the future in Britain and deliver our Plan for Change. This Action Plan shows we are ready to take it.”

There are clearly those who might disagree with this upbeat assessment. But it is equally clear that, ready or not, here AI comes.

Sam Trendall
