Cabinet Office responds to growing prevalence of generative artificial intelligence with new guidance that explores potential uses of the technology by government departments while setting clear limits on doing so
Government has published formal guidance on the use of generative artificial intelligence tools by civil servants, including an instruction that the technology should not be used to write policy papers or other formal documents.
The advice, newly published by the Cabinet Office, acknowledges that tools such as ChatGPT may have the capability to create apparently authentic messages, summaries of government’s future plans, or policy documents. But the document instructs officials that the technology must not be used to do so.
“Generative AI has the ability to produce written outputs in various styles and formats,” it says. “It is technically possible for one of these tools to write a paper regarding a change to an existing policy position. This is not an appropriate use of publicly available tools such as ChatGPT or Google Bard, because the new policy position would need to be entered into the tool first, which would contravene the point not to enter sensitive material.”
Such sensitive information would encompass all personal data, as well as anything with an official security classification or which otherwise “reveals the intent of government… [and] may not be in the public domain”.
- Artificial intelligence to empower public services, says minister
- Department for Education ‘assessing risks’ of ChatGPT
- AI laws must ‘support businesses while protecting citizens’, Scottish minister says
A complete ban on entering any such information into generative AI platforms is one of three core principles underpinning the guidance, alongside a warning that anything produced by the technology “is susceptible to bias and misinformation, [and] needs to be checked and cited appropriately”.
With these limitations in mind, another inappropriate use of generative AI flagged up by the guidelines is inputting data for algorithmic analysis where the data subject or owner has not provided express consent for such usage.
But the third principle advises civil servants that “with appropriate care and consideration, generative AI can be helpful and assist with your work”.
“You are encouraged to be curious about these new technologies, expand your understanding of how they can be used and how they work, and use them within the parameters set out within this guidance,” the document adds. “For all new technologies, we must be both aware of risks, but alive to the opportunities they offer us.”
To which end, possible appropriate uses for generative AI include using the technology as a “research tool to help gather background information on a topic relating to your policy area that you are unfamiliar with”.
Another generalist example of how the technology can be appropriately used in government “is to summarise publicly available information such as a relevant academic or news article on a policy area, that could be added to an annex of a briefing”.
The guidance also provides two more specialised examples of ways in which civil servants could incorporate generative AI into their work.
The first of these is software developers who “may wish to use a generative AI to create a front end interface to a website, that will be released to the public, and use the outputs to speed up the work involved in design and build”.
Another potential specialist deployment of generative AI is its use “by data scientists and machine learning specialists as a data mining tool to read and analyse large quantities of text-based information, to try to find anomalies, patterns and correlations that lead to a greater understanding of a problem”.
With all use cases of generative AI, civil servants are instructed to consider “three ‘hows’” specified by the guidance: how questions asked will be used by the system; how the answers provided might be misleading; and how the platform in question works.
Generative AI is the generic term applied to any form of artificial intelligence that can autonomously create – to specification – written content, images, video, audio, or software code. The profile of the technology has grown significantly in recent months, following the public availability of Google’s Bard tool and, in particular, the ChatGPT platform from OpenAI – which has impressed many users with the apparent authenticity of its creations, including stories, essays, and poems.
The two platforms, which are both namechecked in the opening lines of the government guidance, are examples of large language models (LLMs), which can analyse vast volumes of data to generate written content in response to users’ commands.
As well as LLM systems, the guidance states that it “also covers… other forms of generative AI, including systems such as DALL-E which generates images based on text and BLOOM which generates computer code”.
As part of a recent feature for PublicTechnology sister publication Civil Service World, officials tested the ability of LLMs to produce various kinds of document, including ministerial statements and policy summaries – many of which uses are effectively forbidden under the new guidance.