Meanwhile, the Scottish Government has indicated that it is ‘deeply disappointed’ at not being invited to this week’s event, and points out that its AI strategy predates the Westminster equivalent
As the government concludes the international AI Safety Summit, fears about the risks posed by artificial intelligence have been voiced by the general public and one of the technology sector’s foremost leaders.
At the invite-only event hosted at Bletchley Park this week, Elon Musk told news agency PA that AI is “one of the biggest threats” facing humanity, and represents an “existential risk”.
“We have, for the first time, the situation where we have something that is going to be far smarter than the smartest human,” he said. “We’re not stronger or faster than other creatures, but we are more intelligent, and here we are for the first time, really in human history, with something that is going to be far more intelligent than us.”
The chief executive of SpaceX, Tesla, and X (formerly Twitter), praised prime minister Rishi Sunak for hosting the summit, at which major nations – including the US, UK, China and the EU – announced a cooperation agreement dedicated to jointly tackling the possible “catastrophic” risks of the technology. Musk agreed there is a need for a “third-party referee” with a global remit to “observe what leading AI companies are doing and at least sound the alarm if they have concerns”.
Concerns about the future of AI – and, in particular, the excessive influence of major tech firms in shaping it – were revealed by a survey conducted by PublicTechnology sister publication PoliticsHome.
Published to coincide with the summit, the study found that 55% of respondents believed that big tech had too much power in setting public policy.
However, only 18% of research participants said that policymakers should primarily address the risks created by the technology – compared with 25% who want the focus to be placed largely on the opportunities. Some 46% believe attention should be evenly split.
Not in attendance at the AI Safety Summit – which formally concluded yesterday – were any representatives of the devolved administrations of Scotland, Northern Ireland, or Wales, according to a letter sent by Scottish innovation minister Richard Lochhead to UK technology secretary Michelle Donelan. The Holyrood minister claimed that, while AI regulation will be set from London, rather than Edinburgh, the rules will have a big impact on “devolved policy areas”.
“Like my Welsh and Northern Irish colleagues, I reiterated my disappointment at UK Nations not being invited to take part in the AI Safety Summit,” he wrote. “Scotland is a leader in several AI areas, and I believe could have made a valuable direct contribution to this global conversation. Our National AI Strategy pre-dates that of the UK, and the Scottish AI Alliance’s Leadership Group is currently carrying out an independent review to ensure that we remain at the forefront of AI policy and technology development.”
Also seeking to get in on the artificial intelligence act this week was dictionary publisher Collins, which used the summit as a backdrop to announce its word of the year for 2023: AI.
The firm claimed that use of the term – which it defines as “the modelling of human mental functions by computer programs” – has risen fourfold this year. In being named word of the year, it beat off competition from other recently in-vogue terms such as ‘nepo baby’, ‘greedflation’, ‘ULEZ’, and ‘debanking’.
Collins managing director Alex Beecroft said: “We know that AI has been a big focus this year in the way that it has developed and has quickly become as ubiquitous and embedded in our lives as email, streaming or any other once futuristic, now everyday technology.”