AI in government: The need for human foundations


As the new government explores the potential of using artificial intelligence in public services, former OECD experts Ben Welby and Stefano Piano examine the need to build trust and expertise

In recent weeks, the UK government has taken a more bullish approach to artificial intelligence in the public sector.

While the King’s Speech initially gave a muted impression regarding its prominence, subsequent statements and actions have made it clear that AI is a priority for the current administration. The parliamentary debate on technology in public services on 2 September highlighted this shift, with discussions emphasising both AI’s transformative potential and the critical need for caution in its deployment.

The intensified focus on AI is not surprising.

The former government hailed AI as a “silver bullet” and Labour’s manifesto positioned it as a tool to “transform the speed and accuracy of diagnostic services, saving potentially thousands of lives” in healthcare. AI was a focal point at the Future of Britain Conference, convened by the Tony Blair Institute and informed by their paper on Governing in the Age of AI.

In one of its first moves, the new government brought the Central Digital and Data Office, the Government Digital Service and the Incubator for Artificial Intelligence into the Department for Science, Innovation and Technology, signalling a clear intent to reinvigorate technology’s role in society and government.

The government’s enthusiasm for AI is increasingly evident. In his speech on fixing the foundations of the country, prime minister Keir Starmer declared: “We’ll move forward this autumn with harnessing the full potential of AI, for growth and the public good.”

This vision was echoed by the minister for early education, Stephen Morgan, who wrote confidently about AI as a means to reduce administrative burdens and enhance service delivery. Additionally, in a recent debate, the secretary of state for science, innovation and technology, Peter Kyle, emphasised AI’s role in shaping technology for the benefit of workers and as a cornerstone of government missions in service to the public.

Handle with care
Yet, while the promises of AI are seductive, they are often over-hyped. We must not let the hype overshadow the real opportunities to add public value. Some applications have already demonstrated significant potential.

The Metropolitan Police, for example, has normalised the use of live facial recognition to identify wanted criminals. In the NHS, a pilot showed that AI could reduce missed appointments by 30%, and AI tools for detecting and diagnosing breast cancer are nearing mainstream rollout. These examples highlight AI’s potential but also underscore the importance of careful oversight and management to address risks such as accuracy and privacy.

In other areas, experiments are showing the art of the possible. The AI Labs at the UK-government-funded Oak National Academy have developed Aila, a free AI-powered lesson assistant that helps create lessons and editable resources. At GDS, explorations into GOV.UK Chat aim to help people find the information they need, and their head of interaction design, Tim Paul, used Claude AI and the GOV.UK design system to convert inaccessible PDFs into accessible webforms. 

However, in our enthusiasm, we must handle AI with care. Each of these use cases presents potential issues with accuracy, privacy, ethics, and environmental impact, as well as concerns about digital inclusion.

The recent parliamentary debate rightly highlighted the need to balance these opportunities with a consideration of risk. Even when it comes to the financial bottom line there is growing scepticism, with Goldman Sachs questioning whether the estimated $1 trillion being spent on AI in the coming years will ever pay off in terms of benefits and returns.

This points to a crucial need for a human-centred approach that prioritises ethical considerations and public trust.

The success of AI deployment hinges on the capability and knowledge of the workforce and the public’s trust in these technologies. Currently, the UK government appears to be falling short on both counts.

The National Audit Office reports that only 20% of government departments have a formal AI strategy, and 70% of government bodies cite skills as a barrier to AI adoption. Furthermore, public sentiment towards AI remains cautious, with the former Centre for Data Ethics and Innovation finding “scary” and “worry” to be the most common responses.

Public trust in the UK government is already low.

The 2024 OECD Survey on Drivers of Trust in Public Institutions found that 56.9% of people in the UK reported low or no trust in the national government, significantly worse than the OECD average of 44.2%. This highlights the urgent need for the government to address trust issues more broadly, not just in relation to AI. Unlocking the trusted and successful use of any technology, particularly AI, requires a comprehensive approach that includes leadership, legislation, guidance and funding to empower and equip teams to meet their users’ needs.

A team sport
The UK civil service boasts world-class tech talent that stands as an inspiration to other governments and the private sector in championing technology as an enabler of accessible, equitable and ethical user experiences.

Nevertheless, bridging the gap between technology and capability is essential to avoid making decisions based on overly optimistic assumptions about the promises of a given technology and to ensure meaningful, sustainable progress. To achieve this, the government must invest in training and retaining AI specialists, particularly given fierce competition from the private sector. 

However, effective use of AI is not just the domain of specialists. As outlined in the Government Service Standard, successful transformation relies on multi-disciplinary teams that include AI experts alongside user-centred design professionals, software engineers, lawyers, psychologists and policymakers.


56.9%
Proportion of UK citizens who reported low or no trust in government, according to OECD research

$1tn
Estimated spend in the coming years on building AI infrastructure, according to Goldman Sachs research

Three
Number of public sector algorithmic transparency records published in the past two years

One in five
Proportion of government departments that the National Audit Office found currently have a formal AI strategy


To borrow a phrase from the Service Manual, AI has to be a team sport.

This means that the government should aim to improve AI literacy among a broad range of civil servants, ensuring that those making decisions about AI understand not only the opportunities but also the risks and limitations.

Building public trust in AI requires a robust ethical and governance framework. Knowing who uses AI to make decisions that affect us, and how, is critical to ensuring that this new technology does not exacerbate the impact of existing failures in public trust. Transparency is therefore essential.

The UK’s Algorithmic Transparency Recording Standard helps public sector bodies provide clear information about their algorithmic tools. However, since July 2022, only three records have been published, suggesting that further efforts are required to make this a world-leading practice. Beyond transparency, the government should champion ethical principles, like the OECD’s Good Practice Principles for Data Ethics in the Public Sector (heavily influenced by the UK’s own Data Ethics Framework). These measures support civil servants in their work and help demonstrate to the public that the government is serious about securing their trust.

The government should also embrace contributions from civil society, academia and industry. A recent convening of people interested in building a Roadmap for Progressive UK Tech Policy highlights the valuable role external voices can play in reflecting diverse perspectives and balancing technological advancement with public trust.

Alongside ethical guardrails, the UK’s progress in AI needs to be built on strong foundations of trustworthy data and digital identity.

Effective AI deployment depends on robust data frameworks and clear mechanisms for public consent and understanding of data usage. The absence of comprehensive, public-sector-wide approaches to digital identity and data in the UK hampers the potential for seamless and personalised public services. Ensuring secure and transparent data handling and giving citizens control over their information is essential for rebuilding public trust. 

Investing in these areas will lay the groundwork for more advanced AI applications. Addressing long-standing challenges in data governance and access to digital identity is essential for creating a trustworthy environment for the future of public services, whether or not a given service uses AI. The public’s ability to consent and understand how their data is used is as crucial as any technical solution for digital identity or data management.

Artificial intelligence may revolutionise the public sector, but realising its promised benefits requires solid human foundations.

Using technology to transform public services is an area where the UK has previously led the world, and can lead again. By investing in the skills of public servants and building trust through ethical and transparent practices, the public sector can come to use AI as one tool to enhance public service delivery alongside the rest of its digital transformation toolkit.

Now is the time for decisive action to equip the civil service for the AI age, transforming the promises of digital tools and ways of working into reality.

Stefano Piano is a skills and innovation expert. Until recently, he was an economist for the OECD, where he worked with more than 15 governments on a range of issues, from mapping the skills for the future of work to strengthening the capacity of the civil service. He is now an advisor for Altruistic, focusing on building partnerships on AI literacy throughout the world.

Ben Welby is a digital transformation expert. He was part of the team at GDS that launched GOV.UK, led the early product work on Government as a Platform, and spent five years at the OECD advising governments around the world on data, digital identity, service design, and skills. He has joined the X-odus.

