Government generative AI guidance promises ‘meaningful human control’

Central Digital and Data Office releases detailed guidance on the use of tools like ChatGPT, including 10 guiding principles and advice on how to best implement and use the technology

Government has published detailed guidelines for how civil servants should use generative artificial intelligence in their work. The guidance aims to ensure officials deploy the technology “lawfully, ethically and responsibly” and with “meaningful human control at the right stage”.

Published late last week, the Generative AI framework for HM Government was created by the Cabinet Office-based Central Digital and Data Office. The document sets out 10 principles that are intended to guide how civil servants should use generative AI – the generic term for tools such as ChatGPT, which can autonomously create written content, images, audio, video or software code in line with provided specifications.

After setting out the principles, the framework goes on to provide more detailed guidance on how to understand, implement, and make use of generative AI. Included in these guidelines is advice on areas such as procurement, data ethics, and security.

The CDDO’s chief technology officer for government, David Knott, writes in the foreword that the document “differs from other technology guidance we have produced: it is necessarily incomplete and dynamic”.

“It is incomplete because the field of generative AI is developing rapidly and best practice in many areas has not yet emerged. It is dynamic because we will update it frequently as we learn more from the experience of using generative AI across government, industry and society,” he adds. “As our body of knowledge and experience grows, we will add deeper dive sections to share patterns, techniques and emerging best practice.”

The first principle of the framework – which focuses on large language models (LLMs), tools that create written or verbal content – is that officials should “know what generative AI is and what its limitations are”.

Such limitations include the fact that “LLMs lack personal experiences and emotions and don’t inherently possess real-world contextual awareness… [and] generative AI tools are not guaranteed to be accurate as they are generally designed only to produce highly plausible and coherent results, [which] means that they can, and do, make errors”.

The second principle outlines that government should “use generative AI lawfully, ethically and responsibly”.

This means taking into account ethical considerations from the outset, and consulting with the likes of legal and compliance professionals.

Principles for using generative AI

  1. Know what generative AI is and what its limitations are
  2. Use generative AI lawfully, ethically and responsibly
  3. Know how to keep generative AI tools secure
  4. Have meaningful human control at the right stage
  5. Understand how to manage the full generative AI lifecycle
  6. Use the right tool for the job
  7. Be open and collaborative
  8. Work with commercial colleagues from the start
  9. Have the skills and expertise needed to build and use generative AI
  10. Use these principles alongside your organisation’s policies and have the right assurance in place

The third principle is that civil servants must “know how to keep generative AI tools secure”, which is likely to include the implementation of measures such as “content filtering to detect malicious activity and validation checks to ensure responses are accurate and do not leak data”.

The fourth principle asks that officials ensure there is “meaningful human control at the right stage” of their use of generative AI.

The guidance acknowledges that, in many cases – such as a chatbot, which produces instant responses – there may not be scope for humans to review content before it becomes public. Teams deploying such tools must therefore “be confident in the human control at other stages in the product lifecycle”.

“Incorporating end-user feedback is vital,” the framework adds. “Put mechanisms into place that allow end-users to report content and trigger a human review process.”

The fifth principle is that users should “understand how to manage the full generative AI lifecycle”, which includes work “to monitor and mitigate generative AI drift, bias and hallucinations… [by having] a robust testing and monitoring process in place to catch these problems”.

An instruction to “use the right tool for the job” is the sixth principle. This advice speaks to the fact that “generative AI is good at many tasks but has a number of limitations and can be expensive to use”, the guidance says.

The next two principles – to be “open and collaborative” and to “work with commercial colleagues from the start” – are intended to ensure that those deploying generative AI tools work cooperatively with officials and teams from other disciplines.

This will include working to “identify which groups, communities, civil societies, non-governmental organisations, academic organisations and public representative organisations have an interest in your project”.

Commercial professionals, meanwhile, can help “ensure that the expectations around the responsible and ethical use of generative AI are the same between in-house developed AI systems and those procured from a third party”.

The framework adds: “For example, procurement contracts can require transparency from the supplier on the different information categories as set out in the Algorithmic Transparency Recording Standard.”

The penultimate principle is that those deploying technology should “have the skills and expertise needed to build and use generative AI”.

Such expertise is likely to include new and novel technology skills. Officials are encouraged to take advantage of government training courses dedicated to generative AI, and also to “proactively keep track of developments in the field”.

The final principle is that civil servants should “use these principles alongside your organisation’s policies and have the right assurance in place”.

This tenet reflects that “many government organisations have their own governance structures and policies in place, and you also should follow any organisation-specific policies”.

In his foreword, government CTO Knott writes: “Generative AI has the potential to unlock significant productivity benefits. This framework aims to help readers understand generative AI, to guide anyone building generative AI solutions, and, most importantly, to lay out what must be taken into account to use generative AI safely and responsibly. It is based on a set of ten principles which should be borne in mind in all generative AI projects.”

The publication of the framework comes six months after CDDO published an initial introductory guidance document for how government employees should use generative AI. Those guidelines acknowledged that tools such as ChatGPT may have the capability to create apparently authentic messages, summaries of government’s future plans, or policy documents, but instructed officials that the technology should not be used to do so.

In October, it was revealed that the government is undertaking trials of a chatbot – underpinned by technology from the firm behind ChatGPT – that is intended to help find content and answer user questions on GOV.UK.

According to a privacy notice for the GOV.UK Chat tool, the software operates as “a natural language interface… [which] means you are able to ask it a question and it provides a human-like response”.

Sam Trendall
