After examining current public service uses of artificial intelligence, as well as potential risks and rewards, a panel of experts addresses the key question: what is all the fuss about?
Attendees of PublicTechnology Live last month spent the day hearing about various manifestations of the digital transformation of the public sector, including the UK’s digital strategy, what leaders can do to facilitate change, and the effect on the largest and smallest public bodies.
Throughout the day, while navigating the opportunities and pitfalls of digitisation, there was an elephant in the room – one that reared its head only occasionally.
But it was not until the final discussion of the day that the topic of artificial intelligence took centre stage, with panellists tackling the question: should the public sector believe the AI hype?
Participants (pictured above) started by discussing the biggest myths surrounding the technology.
Yasmin Ibison, senior policy advisor at the Joseph Rowntree Foundation – a think tank focused on eradicating poverty – began with the idea that AI is a “silver bullet” – the folklore-derived idiom that refers to a simple and all-encompassing solution to an otherwise knotty problem.
Ibison wishes to challenge “the narrative that any and all types of AI embedded everywhere in the public sector are going to lead to economic growth and societal benefits with few negative externalities”.
“We need to see AI as a function of a broad system… [and] what happens if you are able to diagnose more accurately, where does the demand shift to the rest of the NHS – and what’s the capacity of the NHS to deal with the increase of demand elsewhere?”
Yasmin Ibison, Joseph Rowntree Foundation
“‘Silver bullet’ is used by senior government officials,” she adds. But work is needed to “unpick this top-line phrase… challenge that narrative” and “be really critical about where we’re going to embed it and where a problem may not be best optimised by it”.
An online search for “AI is a silver bullet” brings up a succession of sites damning the phrase, swiftly followed by a tweet from the then deputy prime minister Oliver Dowden that begins: “AI is genuinely a silver bullet.”
The panel reminds us that, while government is apt to talk up something it is investing in, there is a long road ahead, with many careful considerations still to be made about where AI will be useful and where it will not.
Dr Jonathan Bright, head of AI for public services and head of online safety at the Alan Turing Institute, believes that the key area in which AI will truly thrive is in “doing the really boring simple stuff that everyone can do, but no one wants to do”.
Bright is arguing against the notion that AI is “some kind of amazing technology that can do things that humans couldn’t possibly comprehend”.
He believes that “approaching AI as this low-level helper – instead of a God-like machine – is a helpful way of starting.”
David Knott, the UK government’s chief technology officer – a role which sits within the Cabinet Office’s Central Digital and Data Office – reiterates this point: “AI will deliver value, but we have some time before we have to use it to avoid falling behind.”
He also points to the importance – particularly for government entities – of taking the time, and applying the rigour, needed to explore the technology’s efficacy.
“I see loads of people doing experiments,” he says. “And I think proper structured, thoughtful, and rigorous experimentation… is incredibly valuable. Because it helps people discover the boundaries of value, and it also helps people discover techniques to actually make models behave in the ways you want them to behave. So I think there is a myth that, if you’re not [dropping] everything to do this, then you’re falling behind; most people – including the private sector – don’t have large scale functional implementations yet, as I think people are still feeling this out. We have a little time. And I don’t feel bad about taking that time.”
A National Audit Office report published in March reveals that, of the government bodies surveyed, 37% had deployed AI and a further 37% were actively planning to do so. The NAO assessment said that “government bodies are at an early stage in developing their own AI strategies… [as] only 21% of the 87 bodies responding to our survey said they had an AI strategy”.
Mathew Evans, director of markets at techUK, believes that a level of hype “can be quite useful” for “generating political capital” – but goes on to illustrate how lots of attention comes from the misunderstanding that AI is brand new.
“I think some people think AI arrived with ChatGPT – it really didn’t; it’s been around for many years and has been used in the public sector.”
This is exemplified by MYCIN – a tool first developed in the 1970s, and designed to use AI to help diagnose bacterial infections.
David Pool, portfolio director for data science and AI at digital skills specialist QA, joins other panellists in questioning the myth “that government is somehow lagging behind the private sector”, pointing to some “incredible use-cases” in the public sector.
For example, he highlights work taking place to derive insights from population-level information – “incredibly high-dimensional, complex data sets that you can really put powerful models to work on”.
These “statistical models looking at pattern detection, change-point detection, anomaly detection… come up with incredibly surprising results”, Pool adds.
Capabilities and caution
Addressing automated technology’s potential for healthcare and diagnostics, Ibison suggests more focus is needed on “AI as a function of a broad system… [and] what then happens if you are able to diagnose more accurately, where does the demand shift to the rest of the NHS – and what’s the capacity of the NHS to deal with the increase of demand elsewhere?”
She adds: “We have to see the capabilities of AI, again, not as a silver bullet that’s going to fix all the issues in the NHS but how does it interact with the rest of the flow of the system in general.”
Evans sounds another note of caution, stating that organisations should “really look at the challenge you’re trying to solve, because you may not need a generative AI solution to crack that”.
He adds: “A good old robotic process automation is probably going to do 80-90% of an operational task and that might be a lot cheaper or easier to implement than it is training a model to deliver the level of results.”
The NAO report perhaps illustrates this point, finding that one of the most common current uses of AI in the public sector is a non-generative “digital assistant to… help customers complete tasks or find the information they are looking for [or], if it is unable to help, it links customers to an adviser through webchat”.
Another example is document comparison systems, which the report describes as “an AI tool to support case workers by automatically identifying differences between application forms and other registration documents”.
The panel moves on to the implications for digital literacy, as Bright from the Turing Institute raises the subject of training your team to use AI systems.
“There is a myth that, if you’re not [dropping] everything to do this, then you’re falling behind; most people – including the private sector – don’t have large scale functional implementations yet, as I think people are still feeling this out. We have a little time. And I don’t feel bad about taking that time.”
David Knott, government chief technology officer
“In this new world, if you’re looking to develop AI machine learning projects, there is this thing called ‘organisational meta learning’, which is teaching the organisation how to learn – because that has to happen as well; you can’t just train individuals. This is something you’re going to hear a lot more about in the future. It’s about how you internalise your knowledge, processes, outcomes, and projects. This is really critical and something that everyone needs to think about.”
David Pool adds: “[This] needs to be a fundamental part of training: people need reskilling. You need a series of skills, and those skills build into competencies that enable you to do certain things. And that’s very much the way organisations are looking now: can we build a team that can deliver a project successfully? Then, having acquired that meta learning, you can go and do other things and work on more and more complicated projects moving forwards.”
A question from the audience asks: will AI amplify digital exclusion?
Ibison responds: “There is a risk that these tools may exacerbate the divide – there are about 11 million people that are currently living in digital poverty or are digitally excluded and that does correlate with economic poverty as well.”