Stakeholders are cautiously optimistic following the conclusion of the AI Safety Summit, but have warned that further legislation is needed and focus must be redoubled to ensure objectives are achieved
Stakeholders have welcomed the international cooperation agreement unveiled at the AI Safety Summit last week – but urged government to ensure that the achievements of the event are followed up with action over the coming months and years.
Conservative MP and chair of the Science, Innovation and Technology Committee Greg Clark praised the summit’s achievement in getting the US and China to sign the agreement.
“It has been a success. It was clearly a good thing to have held the summit and an achievement to have the attendance the PM and team have managed,” he told PublicTechnology sister publication PoliticsHome. “The USA and China and many more in between have said they are willing to work together.”
Clark also welcomed the agreement from companies that governments should be able to access and test AI models.
“Were the government to discover something they found dangerous about the models they would have to act,” he said. “But the mechanism for acting has not been determined.”
The committee will hold a session with technology secretary Michelle Donelan this week, in which members will question her on what will come next following the summit.
Conservative MP and former digital minister Matt Warman agreed that getting China, the US, and the EU “in the same room, talking the same language” was a success in itself, and one that could only have been achieved by the UK.
Conservative MP and former justice secretary Robert Buckland said that he now wanted to see the government “delve into different sectors” and assess what harms AI could be causing now, rather than just in the future.
“The very fact that this is the first of several summits has given me encouragement that it wasn’t just a publicity stunt,” he said. “I think the declaration is a very good start, but that we do need to delve down into different sectors. There wasn’t a reference to justice, which I think has to be part of the consideration now and how we have a set of international principles with the way we use AI in justice because the deep fakes problem is already affecting justice.”
The Ada Lovelace Institute – an independent research entity focused on data and AI – called for the agreements at the summit to be followed up by supporting legislation.
“The conversations at Bletchley reinforced that the AI Safety Summit wasn’t fundamentally about technology or regulation, but about people,” Fran Bennett, interim director of the institute, said. “Any effective governance must be backed by legislation. Without it, we won’t be able to incentivise developers and users of AI properly to make AI safe, or give regulators the scope, powers and resources they need. The UK government has two live opportunities to start addressing the regulation of AI: in the King’s Speech and the Data Protection and Digital Information Bill. If the Government seizes these opportunities, it will be a significant step forward for making AI work for people and society.”
Jack Stilgoe, professor of science and technology policy at University College London, said that government and civil society must not leave technology firms to operate in isolation going forward.
“If the tech industry just said ‘OK, we’ve shown that we can engage in these discussions now, let us go away and get on with it, trust us’, then that would be a huge mistake,” he told PoliticsHome. “The consensus has been that we can’t trust the industry to self-regulate, that that myth has been busted. But the question of what regulation should look like, and whether the UK can catch up with the EU and US approaches, I think remains open.”