Tom Tugendhat claims that the Home Office ‘engages regularly’ with companies creating artificial intelligence tools, with the aim of better understanding the technology’s possible impact on matters of national security
Government anti-terror experts are keeping tabs on the potential for chatbots and other automated technologies to radicalise internet users, a minister has claimed.
In a recent written parliamentary question, Labour’s shadow security minister Dan Jarvis asked the Home Office whether its ministers are planning on “taking steps to prohibit the promotion of AI chatbots that have been programmed and taught through self-learning to encourage users to commit terrorist acts”.
In response, Jarvis’s government counterpart, Tom Tugendhat, claimed that his department and others are undertaking urgent work to better understand how artificial intelligence and automation could be used by terrorists – as well as other criminals.
“Rapid work is underway across government to deepen our understanding of the risks and to promote effective safety features through the lifecycle of AI products,” the security minister said. “We are carefully considering the impact that AI may have on different crime types including terrorism. The government is firmly committed to improving our understanding and tackling Generative AI technologies’ impact on radicalisation. This includes engaging with the Independent Reviewer of Terrorism Legislation.”
Tugendhat added that his department’s efforts include engaging directly with AI firms to understand their technologies and how security measures can be embedded during the design process.
“The Home Office engages regularly with many companies developing generative AI technologies on a range of critical public safety issues, including terrorism and radicalisation, to promote online safety-by-design,” he said. “We will continue to develop safeguards and mitigations, working closely with international partners, civil society and academia, and we look forward to the outcomes of the AI Safety Summit in accelerating this important work.”