Failing to secure public trust could derail government’s AI agenda


The government aims to forge an AI regulatory path between those of the EU and US. But, according to Enza Iannopollo of Forrester, far more focus is also needed on citizens.

The UK is embarking on an ambitious venture to integrate artificial intelligence into its public sector and position itself as a global AI leader.

As part of this effort, the UK is also trying to define a ‘third way’ on AI policy, treading the line between what some see as an overregulated EU and what others see as an underregulated US. The publication of the Artificial Intelligence Playbook for the UK Government exemplifies this approach.

While these initiatives promise to drive innovation and modernise government operations, there remains a critical obstacle that could derail this vision before it fully unfolds: citizen trust.

The UK government has demonstrated its eagerness to fuel AI innovation in public services, with initiatives such as deploying large language models like Anthropic’s Claude to improve citizen interactions and exploring AI applications in areas spanning policymaking, supply-chain management, and scientific research. These initiatives form part of a broader push to make the UK a hub for public-sector AI adoption.

However, public trust is worryingly fragile.

According to Forrester’s Global Government Trust Index, UK citizens scored the government at just 42.3 out of 100 in overall trust. This scepticism means the government cannot afford to make missteps as it integrates AI into systems that directly affect public services and, ultimately, people’s lives.

The risks of neglecting trust
AI adoption in the public sector comes with inherent risks, including challenges of transparency, ethical use, and the danger of unintended consequences.

The UK government faces a unique conundrum. Its pivot toward prioritising AI security over safety – demonstrated by renaming the AI Safety Institute to the AI Security Institute – has shifted focus more heavily toward cybersecurity concerns. While cybersecurity is undoubtedly important, the reduced emphasis on addressing societal impacts of AI, such as bias and inequality, could, at best, alienate citizens and, at worst, seriously harm them and their rights.

The government’s recent decision to forgo signing an international agreement promoting ethical AI development further clouds its messaging.


42.3 – the UK’s score out of 100 on Forrester’s Global Government Trust Index

25% – the proportion of UK citizens who trust the government with their personal data, compared with 35% for Apple

22 – the number of mentions of ‘trust’ in the AI Playbook for the UK Government, compared with 176 mentions of ‘risk’

Juxtaposed with moves like these, the lack of a comprehensive, trustworthy framework risks deepening public scepticism. To make matters worse, the government is operating in a country where many citizens trust private companies such as Apple to safeguard their data more than they trust their government.

A December 2024 Forrester survey found that only 25% of UK citizens trust the government with their personal data, compared with 35% who trust Apple. This “trust deficit” is particularly alarming as citizens begin to see AI applications influencing personal and societal matters.

Trust is not just an abstract concept but a fundamental enabler of successful AI implementation. Forrester’s trust model identifies tangible levers – including transparency, empathy, and consistency – that are essential for fostering public confidence. High-trust environments not only encourage citizens to support government initiatives but also deliver societal and economic benefits.

According to Forrester, institutions viewed as transparent are up to four times more likely to be forgiven for their mistakes than those perceived as opaque. This insight underscores the stakes for the UK government. When trust is low, public opposition grows, hamstringing the government’s ability to innovate. But when trust is high, citizens are more willing to accept the experimentation and occasional errors that come with AI adoption.

The transparency imperative
Transparency is central to earning trust. The European Union’s AI Act, while not binding on the UK, provides a useful benchmark for establishing risk categories in AI use cases. By comparison, the UK’s current guidelines fail to clarify which actions foster trust and which could erode it. For example:

  • The UK’s AI Playbook mentions trust 22 times but falls short of defining actionable strategies for building citizen confidence.
  • Risk management dominates the playbook – “risk” appears a striking 176 times – yet it offers little on how to mitigate public concerns.

Without transparency into how AI decisions are made or the ethical guidelines regulating AI systems, citizens will likely oppose future implementation, especially in scenarios involving public welfare, such as healthcare or social services.

For high-risk AI applications, such as predictive analytics in healthcare or law enforcement, empathy also becomes a critical factor in driving trust. Citizens need to see that the government understands the risks and implications of AI use on their lives.

For instance, AI-powered chatbots for accessing government services should be deployed with explicit communication about how citizens’ data will be handled and protected.

Consistency is another pillar of trust. Without clear, predictable governance policies for AI, citizens are more likely to perceive government actions as arbitrary or ad hoc.

Currently, the disparity between public and private-sector AI adoption further muddies public perception.

As long as private firms operate under fewer constraints, developing AI systems without tight ethical oversight, they risk producing poor-quality models or biased algorithms. Such issues have a ripple effect, eroding citizen trust in AI overall and undermining the government’s efforts to modernise the public sector.

A trustworthy framework
To succeed in its AI aspirations, the UK government must adopt a citizen-first approach that emphasises trust-building at every stage of AI implementation. Here’s how it can start:

  1. Adopt globally recognised AI governance standards to supplement the existing AI Playbook. Aligning with frameworks like the EU AI Act could provide clear definitions of risk categories and ethical guidelines, giving civil servants greater clarity and consistency.
  2. Communicate transparently and consistently. Every new AI initiative should include a public explanation of its purpose, benefits, and limitations, along with accountability measures.
  3. Pilot trust-building measures within key initiatives. High-impact projects, like AI-powered public service chatbots, should serve as testbeds for incorporating trust-focused elements such as explicit privacy protections and citizen feedback loops.
  4. Leverage partnerships thoughtfully. While partnering with AI leaders like Anthropic can amplify innovation, it’s vital to ensure these collaborations are framed with visible ethical standards and public accountability.

AI offers incredible opportunities to transform government services and improve the lives of citizens, but at what cost? For the UK government, disrupting outdated systems with AI could restore public confidence in its ability to deliver meaningful results. However, failing to prioritise trust risks public backlash, limited acceptance of AI-driven policies, and the eventual derailment of the ambitious national AI agenda.

The decision to adopt AI in public services is not just a technical endeavour but a social one. To thrive, the government must overcome its trust deficit by championing transparency, empathy, and consistency in its AI strategy and execution. Trust must be more than a buzzword or aspiration; it must be the foundation of every AI initiative.

Enza Iannopollo is a principal analyst at Forrester