Government AI Playbook instructs departments to ‘know limitations and have meaningful human control’


A new set of GDS guidelines enshrines 10 principles intended to direct Whitehall bodies in their use of new technology, including advice on procurement, assurance, and building the necessary skills

A new playbook to guide the use of artificial intelligence across government has instructed departments to take steps to understand the “limitations” of the technology while ensuring “meaningful human control” measures and continual input from commercial experts.

Created by the Government Digital Service, the Artificial Intelligence Playbook for the UK Government is a 119-page document that sets out 10 principles that should guide Whitehall bodies in their use of AI. The guidance goes on to provide a broad overview of AI, before setting out advice on how to build AI systems then operate them safely and responsibly. The final section of the playbook contains details of a number of existing public sector use cases of AI.

The first of the 10 principles advises that agencies should “know what AI is and what its limitations are”.

“AI is a broad field subject to rapid research and innovation, and many claims have been made about both its promise and risks,” the playbook says. “You should learn about AI technology to understand what it can and cannot do, and the potential risks it poses. AI systems currently lack reasoning and contextual awareness and their limitations vary depending on the tools you use and the context in which they operate. AI systems are also not guaranteed to be accurate. You should understand how to use AI tools safely and responsibly, employ techniques to increase the accuracy and correctness of their outputs, and have a process in place to test them.”

The second principle is that government should always use AI “lawfully, ethically and responsibly” and the third is that those deploying the technology should “know how to use AI securely”.


To do so, departments should seek specialist advice from legal and data protection experts, the playbook advises.

“You should establish and communicate how you will address ethical concerns throughout your project, from design to deployment, so that diverse and inclusive participation is built into the project life cycle,” the document adds.

The fourth principle asks that deployments “have meaningful human control at the right stage”.

The playbook says: “This includes ensuring that humans validate any high-risk decisions influenced by AI and that you have strategies for meaningful intervention. For applications where instant responses are required and human review is not possible in real time, such as chatbots, it’s important that you ensure human control at other stages of the AI’s development and deployment. You should fully test the product before deployment, and have robust assurance and regular checks of the live tool in place.”

The fifth principle builds on this advice, advising that departments should “understand how to manage the AI life cycle”. The sixth, meanwhile, urges that “you use the right tool for the job”.

This means that government “should be open to solutions involving AI” for a variety of reasons – but “you should also be open to the conclusion that, sometimes, AI is not the best solution for your problem: it may be more easily solved with more established technologies”.

The next two principles concern cooperation between Whitehall organisations, with the seventh urging agencies to be “open and collaborative” and the eighth advising that teams “work with commercial colleagues from the start”.

“There are many teams across government and the wider public sector using or exploring AI tools in their work. You should make use of existing cross-government communities where there is a space to solve problems collaboratively,” the playbook says. “You should also engage with other government departments that are trying to address similar issues and reuse ideas, code and infrastructure.”

‘A launchpad’
The penultimate principle urges departments to ensure that they “have the skills and expertise needed to implement and use AI”, while the final tenet is that organisations should “use these principles alongside your organisation’s policies and have the right assurance in place”.

“While you should use these principles when working with AI, many government organisations have their own governance structures and policies in place,” the document adds. “You should follow any organisation-specific policies, especially ones about security and data-handling.” 

In her foreword to the playbook, AI and digital government minister Feryal Clark said that the release of the guidelines “highlights the competence and extraordinary work already being done in the AI space across the public sector”.

“The potential of AI to transform public services is enormous, giving us an unparalleled opportunity to do things differently and deliver more with less,” she added. “AI is already helping civil servants spend less time on repetitive tasks, enabling teachers to personalise lessons, and can allow doctors to access life-saving insights faster, through AI-assisted diagnostics. However, our journey with AI is just beginning. The AI Playbook is a launchpad that we will continuously revise and improve to help the UK public sector become a leading responsible user of AI technologies. As technology evolves, so too will our approach, ensuring we remain at the forefront of responsible innovation – always guided by the principle that technology must serve people.”

In a recent interview with PublicTechnology, Clark said that “the previous government spent far too long concentrating on the doomsday scenarios, and not the opportunities of AI” – and that her administration is committed to taking a different approach.

Sam Trendall
