Changing the culture: What government must do to make the most of AI
Artificial intelligence could transform the public sector, but adoption has been slow. As the Government Office for Science publishes its report into the subject, Richard Sargent, director of ASI and former performance lead at the Government Digital Service, says leadership and culture will be crucial for success.
Leadership, collaboration and culture change will underpin AI success - Photo credit: Fotolia
Earlier this year, DeepMind’s artificial intelligence programme defeated one of the world’s best Go players, in an event that was both exciting and irrelevant.
Exciting because it was a major milestone in our efforts to understand and replicate human intelligence. Irrelevant because it can be hard to see how this breakthrough will change our daily lives.
This is particularly true of the public sector. Although data science and AI generate endless interest and excitement in Downing Street, they can seem far removed from the day-to-day efforts of those on the front line: those who manage tight budgets while trying to deliver the health, education, welfare and other services people expect and need.
Of course, regular readers will know this is a simplistic picture.
AI is already starting to impinge on our daily lives: through virtual assistants on our phones, or through targeted adverts that effortlessly follow us around the web.
Meanwhile in government, pioneers are starting to use AI to address serious problems that used to need thousands of hours of grind, or couldn’t be tackled at all.
At my company ASI, we get an unusually broad view of these problems through our fellowship programme. Three times a year, we take around twenty of the best-performing physics, engineering and maths PhD students and place them on eight-week projects with top companies and, increasingly, with central and local government.
Our latest fellowship has just ended, and included projects that automated fraud detection, created a diagnostic engine in health, and modelled local teacher supply. These are now being taken up and used by the relevant departments and local authorities.
We tackled these problems in just two months – a signal that AI is coming of age. In the next few years, I would expect most of the public sector to start using data science, machine learning and AI to make their services more efficient and effective.
But where should you start? There are a number of straightforward opportunities you could look at.
Companies like Mastercard or PayPal have used AI to detect fraud for several years, and we now know a lot about how to develop the right algorithms.
Meanwhile fraud in the public sector is rife – it’s estimated at over £20bn each year. This is not only in tax and benefits, where the victims are taxpayers, but in areas like housing where fraudulent landlords are making tenants’ lives miserable and dangerous.
Too much fraud detection in the public sector is still manual, or depends on black-box solutions sold by IT companies. This is a core part of government’s capability, and the techniques are not difficult. Using simple AI techniques would free up people’s time and effort, as well as saving hundreds of millions of pounds.
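To make the idea concrete, here is a minimal sketch in Python. It is purely illustrative – the threshold and data are invented assumptions, not any department’s actual method – but it captures the basic pattern: score each claim against the historical distribution and refer the outliers to a human caseworker.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the
    mean of the claim history.

    A toy stand-in for the statistical models real fraud teams use:
    most claims cluster around typical values, so extreme outliers
    are surfaced for human review rather than auto-rejected.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:  # all claims identical: nothing stands out
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]
```

Real systems learn far richer patterns than a single threshold, but even this simple approach replaces hours of manually scanning lists with a ranked queue for investigators.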
If filling in forms is a headache, spare a thought for the thousands of staff who have to process those forms.
There are a number of areas – driving licences, parking permits, government grants, planning applications and passports – where applications could be simplified and their processing automated, saving huge amounts of money.
Not to mention awarding farm subsidies – the Department for Environment, Food and Rural Affairs has been fined more than £600m over the last eight years for incorrect processing of these, so a solution that simplifies the process offers a clear benefit.
Roughly every three years, the amount of medical data on the planet doubles in size. By 2020, it is expected to double every 73 days.
Companies like Babylon and Zebra Medical are using that data to diagnose and help patients: if a patient has a rash and is wondering whether to go to the pharmacist or the hospital, AI can now provide an answer which is even more reliable than a nurse with decades of experience.
In education, companies like Duolingo and Mathletics are developing tools that automatically mark student work, and provide feedback in real time.
This gives the teacher a stream of really useful information, reduces their workload, and is a source of frequent encouragement for the student. One of our fellowship companies is even using AI to measure the rate at which individual students forget things to personalise the curriculum balance of revision and progression.
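As a rough illustration of that last idea – not the company’s actual model – recall is often described with an exponential forgetting curve, so an estimate of how quickly a particular student forgets translates directly into a personalised revision schedule:

```python
import math

def retention(days_since_review, decay_rate):
    """Ebbinghaus-style forgetting curve: estimated probability of
    recall decays exponentially with time since the last review."""
    return math.exp(-decay_rate * days_since_review)

def days_until_revision(decay_rate, target_retention=0.8):
    """How long until predicted recall falls to the target level.
    A student with a higher decay rate (a faster forgetter) is
    scheduled for revision sooner; a slower forgetter can spend
    that time progressing to new material instead."""
    return -math.log(target_retention) / decay_rate
```

Doubling a student’s estimated decay rate halves the time until their next revision session, which is exactly the kind of curriculum balancing that is tedious for a teacher to do by hand and trivial for software.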
Why is take-up so slow?
These examples all address a real need in government, and are designed around the needs of users. So why aren’t they already widespread?
First, there are understandable concerns about the jobs that might be replaced, and the accountability of decisions made by computers.
In our fellowships and consultancy work, we have usually found that AI doesn’t replace jobs, it re-targets them.
Much of the work that AI replaces is laborious and repetitive – it is no one’s dream job to compile endless lists. Calculators and software have changed the tasks of accountants, but they haven’t removed the need for them.
Instead, people are freed to focus on more interesting and complex issues that often require human interaction, creativity, and expert judgement.
There are clear issues of regulation and protection that need to be worked out with government: but these should not prevent us from delivering better services.
Second, most people with the skills to create or even commission AI are not ending up in the public sector. We need to recruit and train differently.
And finally, I would argue that we are not always well structured for this new age.
Government is usually organised by service. Education and health are in separate departments, and underneath them are subdivisions – each of which looks after an area of policy or delivery.
But this is not sensible in an AI age: if an agency is good at running one kind of digital service, chances are they’ll be good at running a bunch of them.
When I helped set up the Government Digital Service, I defined and ran the government’s digital service standard that everyone had to pass before they could launch their services publicly.
We spoke with hundreds of service teams from dozens of departments and agencies. The most important factor in determining whether they succeeded wasn’t their knowledge of their departmental subject matter, but whether they had the organisational leadership and culture to develop and run digital services.
And the departmental silos continue to make it very hard for datasets to work together.
Why don’t we check benefit records against the death register to avoid paying benefits to people who are dead? Because they are run by separate departments.
Why don’t we have one consistent list of companies in the UK? Because HMRC and Companies House maintain their own separate lists.
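In principle, the first of these checks is a simple join between two datasets. A minimal sketch in Python, with invented field names and records purely for illustration:

```python
def claims_for_deceased(benefit_records, death_register):
    """Cross-reference two departmental datasets: return the benefit
    claims whose (hypothetical) national insurance number also
    appears on the death register, so they can be stopped."""
    deceased = {entry["nino"] for entry in death_register}
    return [claim for claim in benefit_records if claim["nino"] in deceased]
```

The hard part, of course, is not the code but the data sharing: legal gateways, record-matching quality, and the fact that each dataset is owned by a different department.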
The quality of machine learning and AI depends heavily on the quality and volume of data. Secure but accessible registers of information that many services and organisations can draw on would greatly expand the range of possible applications.
The number of AI applications is increasing exponentially, and the potential gains for public sector productivity and service quality are astonishing.
If we can reform our data architecture, recruit the best, and rethink how AI can be used across department and local authority lines, the gains for the taxpayer, user, and public sector worker would be enormous.