Government LLM pilots to assess potential ‘employee satisfaction’ uplift


The minister for AI has indicated that a range of tech options and use cases are currently being trialled, with the impact on both productivity and sentiment to be considered

Departments’ ongoing experiments with the possible use of large language models will assess not only the impact on productivity, but also the potential implications for “employee satisfaction”, ministers have indicated.

According to Feryal Clark, the Department for Science, Innovation and Technology’s minister for artificial intelligence and digital government, “the Cabinet Office, on behalf of government, has assessed the potential [of] AI and large language models (LLMs) across the civil service, aiming to identify areas offering the highest value and impact”.

LLMs, which include the likes of Google Gemini and OpenAI’s ChatGPT, can generate written or spoken content in response to instructions. Government’s trials of the generative technology so far include pitting several LLMs against human counterparts in an exercise to create policy documents.

Clark added that a range of other potential use cases is currently being put to the test in various Whitehall agencies. These pilots will examine the effect on staff sentiment, as well as on their output, the minister indicated.

“A number of pilot projects are underway across multiple departments,” she said. “Pilots are currently underway for a range of tools investigating the potential impact on productivity and employee satisfaction, trials will be published once analysis has been completed.”

The minister, who was answering a series of written parliamentary questions from Conservative MP Andrew Murrison, indicated that LLMs from various tech firms – as well as those developed by agencies’ internal teams – are already in use in government.

“There are a number of generative AI and LLM models used across HMG,” she said. “The government publishes information on the use of these in the public sector through the Algorithmic Transparency Recording Standard, available on GOV.UK. These records show that departments use a mixture of in-house and commercial solutions, including tools built on foundational models. Use cases range from operational support to decision-making aids, and are subject to appropriate oversight and assurance processes.”

One of Murrison’s questions concerned what guidance has been issued to the public sector on “storing data generated by AI and large language models”.

In response, Clark flagged up the government-wide AI Playbook, published by DSIT’s Government Digital Service, “which gives departments advice on governing their use of AI, including LLMs”.

“The ‘Data Protection and Privacy’ section in the AI playbook sets out data protection principles relevant to the use of AI, including ‘storage limitation’,” she added. “The use of AI and large language models for government business engages the department’s records management responsibilities and will be managed in accordance with the Code of Practice on the management of records issued under section 46 of the Freedom of Information Act 2000. Whether such information is retained and the period for which it is retained will vary depending on the technology used.”

Sam Trendall