DSIT minister Viscount Camrose acknowledges that automated technology can often ‘tread a line between fun and amazing, and serious and scary’ – but identifies positive AI uses in his own role
The minister for AI has said that he has found the popular ChatGPT tool “very helpful” in summarising key legislation.
Viscount Camrose – a hereditary peer whose ministerial post sits within the Department for Science, Innovation and Technology – said that AI technologies currently in use often “tread a line between fun and amazing, and serious and scary”.
Among the more positive uses identified by the minister are asking generative AI – as exemplified by the likes of OpenAI’s ChatGPT and Google Bard – to scan and summarise lengthy documents.
In an interview with PublicTechnology sister publication The House, Viscount Camrose said that he had deployed generative AI to create a summary of the Online Safety Bill currently making its way through parliament.
“What’s so brilliant with ChatGPT and the summarisation is that it is so much better than people at taking huge chunks of text and giving me a summary,” he said. “That’s very helpful. Particularly in this job where the key is being able to absorb huge amounts of information really quickly.”
The minister said that working with the likes of Google and OpenAI will provide government “part of the solution… [but] not the whole of the solution”.
“It goes without saying, if you only talked to very large AI labs, you would have a strong set of views, but in one direction,” he said.
The minister will shortly welcome senior managers of major tech firms – as well as leading academic experts and overseas government officials – to the AI Safety Summit. Viscount Camrose said that government has several main ambitions for the event, which is taking place at Bletchley Park and will host about 100 hand-picked attendees.
“Think of it like a cake: the base layer is producing a shared statement of the risks because I think that gives us a lot of difficult definitions – what is AI? What are the risks? What are the benefits? A shared philosophical agreement on all of those things is not trivial but that’s the minimum I think we need to have,” he said. “The layer above that is what should we going forward as governments, as creators of AI, as companies, be doing about this differently? It’s not going to answer that question but it’s going to put in place the steps that get us to the answer. The third thing, which I think is really very important too, [and] sits on top, is a demonstration of AI as a force for good.”