‘A gap that DSIT should urgently address’ – report calls for AI incident hotline


Think tank the Centre for Long Term Resilience has called for government to establish a function for citizens to report issues such as bias, public service impacts or harmful disinformation

Government should “urgently address” the UK’s need for a central mechanism to report and record failures of artificial intelligence technology, policy experts have claimed.

A report from specialist think tank the Centre for Long Term Resilience (CLTR) cites the lack of such a reporting facility as “a concerning gap in the UK’s regulatory plans” for AI. An incident-reporting function, which CLTR says should be based in the Department for Science, Innovation and Technology, is needed to track the potential risks being created by the technology, as well as to enable government to “coordinate responses to major incidents where speed is critical… [and] identify early warnings of larger-scale harms that could arise in future”.

Without a dedicated reporting service, “DSIT will lack visibility of a range of incidents”, including any burgeoning problems in which major AI models demonstrate “bias and discrimination… which could cause widespread harm to individuals and societal functions”, according to the think tank.

Other issues that citizens should have a formal means of reporting include “incidents from the UK government’s own use of AI in public services, where failures in AI systems could directly harm the UK public, such as through improperly revoking access to benefits, creating miscarriages of justice, or incorrectly assessing students’ exams”.

A reporting platform would also allow government to be alerted to “incidents of misuse of AI systems, [such as] detected use in disinformation campaigns or biological weapon development, which may need urgent response to protect UK citizens; [and] incidents of harm from AI companions, tutors and therapists, where deep levels of trust combined with extensive personal data could lead to abuse, manipulation, radicalisation, or dangerous advice, such as when an AI system encouraged a Belgian man to end his own life in 2023”.

The CLTR report adds: “DSIT lacks a central, up-to-date picture of these types of incidents as they emerge. Though some regulators will collect some incident reports, we find that this is not likely to capture the novel harms posed by frontier AI.”

The think tank recommends three next steps that DSIT should take “urgently”.


The first of these is to create a system allowing people to report issues with AI being used by public bodies. The report, which claims that such a system could be delivered by building on government’s existing Algorithmic Transparency Recording Standard, says that such a move represents “low-hanging fruit that can help the government responsibly improve public services”.

The second urgent step called for by the think tank is that government should ask regulators and sector experts to identify where there are currently “the most concerning gaps” in how AI is regulated.

The final recommendation is that government should “build capacity within DSIT to monitor, investigate and respond to incidents, possibly including the creation of a pilot AI incident database”.

CLTR engages with governments and civil society to support its “mission to transform global resilience to extreme risks”. AI, alongside biosecurity, is now one of the two biggest such risks on which its work is focused.

“While Covid-19 demonstrates the impact that extreme risks can have, we are likely to face greater risks within our lifetimes,” the organisation’s website says. “Threats from misuse of biotechnology or powerful AI systems would likely be even more destructive, and we are even less well prepared for them.”

Sam Trendall
