Election: Government issues dedicated guidance on deepfake risks

A dedicated advice document published last week by the Cabinet Office encourages officials, candidates, and party staff to be wary of amplifying disinformation – but to swiftly alert authorities where necessary

With less than a month until UK citizens vote in the general election, the government has updated its portfolio of official security advice for candidates and officials with the creation of dedicated guidelines on the risks posed by generative artificial intelligence.

The new policies have been created because, “in recent years, the emergence of generative AI has provided attackers with further tools that can be used to disrupt the security of elections in the UK, to influence the result, or to undermine citizens’ trust in the electoral process itself”, the document says.

It adds: “Anyone involved in the election process could be targeted by online disinformation. This includes high-profile candidates and local party offices, as well as officials required to run the election and IT staff who provide technical support to candidates, local and central government, and political parties.”

The guidance, published last week by the Cabinet Office, identifies various types of content created by generative AI through which false information can be spread. This includes fake text, for which automated “tools can be used to quickly and cheaply create unique content to post on social media platforms”.

AI technology is also capable of generating fake images “intended to mislead the voting public, [which] could feature candidates, election procedures, the trustworthiness of election officials, and other issues that may affect voter behaviour or turnout”.

The advisory document notes that, while “it has been possible to create or doctor images for a long time, what’s changed is the ease with which fake content can now be created – and how quickly it can be shared online – allowing attackers to spread disinformation”.

Disinformation is typically defined as the coordinated creation and dissemination of false information for ideological or political purposes – or in the context of warfare.

Most seriously, generative AI can also create deepfake audio or video content, the guidelines warn. The advisory brief says that such media can be “convincing… [and] may be used to mislead the public about candidates or the election, and to provide a seemingly trustworthy source for disinformation campaigns”.


If deepfake content is detected, election officials are advised to report the details – including “any instances of false information relating to the administration of the election, such as when, where and how people can vote, and who can vote” – to the relevant returning officer.

These officers should, in turn, “liaise with the Electoral Commission and their local police elections special point of contact if they are made aware of a deepfake incident involving a candidate or false information”.

“Local authorities should ensure that staff, including those employed through an agency, know how to report concerns,” the guidance adds.

Those encountering all forms of false content are advised by the government to “think before you respond to any reports of disinformation, [as] this may inadvertently amplify the suspected disinformation and could make the matter worse; if an official response is required, use official channels and avoid referencing the disinformation”.

If it is decided that formal action is required, the guidance provides contact details that can be used to report the disinformation to Twitter, Google – which owns YouTube – TikTok, Microsoft, or Meta – owner of Facebook, Instagram, WhatsApp, and Threads.

Candidates or party officials are also encouraged to notify their party, which should then “be able to offer support and have relevant comms channels in place to escalate cases to platforms or the police”.

“Independent candidates should contact platforms [or] police directly, in the absence of a central party,” the guidelines state.

For all instances of disinformation “where material is thought to constitute a criminal offence, you should report to the police as soon as possible”, according to the guidance.

“If you feel a threat or danger is immediate, you should call 999,” it adds.

Sam Trendall
