Body will be formed from parts of the existing Frontier AI Taskforce, other elements of which will become embedded as policy functions within the Department for Science, Innovation and Technology
The government’s taskforce for assessing risks from AI will become the AI Safety Institute, but some of its core functions will remain in the Department for Science, Innovation and Technology, it has been announced.
The institute will continue the Frontier AI Taskforce’s safety research and evaluations, while the other core parts of the taskforce’s mission – identifying new uses for AI in the public sector and strengthening the UK’s capabilities in AI – will remain in DSIT as policy functions.
It will also work with other UK government functions, such as DSIT’s recently established Central AI Risk Function, to feed up-to-date information from the frontier of AI development and AI safety into government.
The government-backed body’s mission is “to prevent surprise to the UK and humanity from rapid and unexpected advances in AI”.
Prime minister Rishi Sunak said the AI Safety Institute “will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology”.
Technology secretary Michelle Donelan added: “The AI Safety Institute will be an international standard bearer. With the backing of leading AI nations, it will help policymakers across the globe in gripping the risks posed by the most advanced AI capabilities, so that we can maximise the enormous benefits.”
The institute’s creation was confirmed on Thursday as part of the government’s two-day Global AI Safety Summit.
A joint statement from No.10 and DSIT said the institute “will carefully test new types of frontier AI before and after they are released to address the potentially harmful capabilities of AI models”.
The statement said this will include “exploring all the risks, from social harms like bias and misinformation, to the most unlikely but extreme risk, such as humanity losing control of AI completely”.
The AI Safety Institute will look to work closely with the Alan Turing Institute, the national institute for data science and AI, the government added.
The creation of the institute has been welcomed by governments of the United States, Canada, Singapore and Japan, while the German government said it is “interestedly taking notice of the foundation of the AI Safety Institute and is looking forward to exploring possibilities of cooperation”.
The UK has so far agreed two partnerships: with the US AI Safety Institute, and with the government of Singapore to collaborate on AI safety testing.
The Frontier AI Taskforce – initially called the AI Foundation Model Taskforce – was a government start-up created in April with the aim of building the first team inside a G7 government that can evaluate the risks of frontier AI models. It was renamed in September, when its first progress report was published.
Ian Hogarth, who was appointed chair of the taskforce in June, will chair the new institute. The government said the institute will soon launch a recruitment process for a chief executive.