AI Week: How government learned to stop worrying and love AI
A few years ago, artificial intelligence was barely on the public sector’s radar. Now, it is one of the most-discussed topics in all of government. To kick off our dedicated AI Week, PublicTechnology examines where we are, and how we got here
Credit: OpenClipart-Vectors from Pixabay
It is often the case that, if you look a little too closely at the next big thing in the technology industry, it may start to resemble the last big thing. Or perhaps even one of the many big things that came before that.
When many were congratulating Apple for inventing the tablet market with the release of the iPad in 2010, many more were pointing to the touchscreen devices manufactured over the previous two decades by the likes of Palm, FSC, and Nokia – not to mention Apple itself.
When the concept of cloud computing began to go mainstream, there were plenty of onlookers who wondered whether this exciting new concept was really just a synonym for the internet, or virtualisation, or software as a service or, simply, ‘someone else’s computer’.
Artificial intelligence – perhaps the biggest and nextest of the current next big things – is also nothing new, many would argue.
In fact, a commonly cited first example of AI dates back as far as 400 BCE: Talos, a giant bronze automaton of Greek myth, who patrolled the shores of Crete, protecting the island from external threats.
Of course, it takes any new technology a few years for the buzz to catch up to the bleeding edge. But two and a half millennia seems like an unusually long incubation period.
But, however long it took to get here, it seems clear that, for artificial intelligence, the time is now.
In the last few years, the technology has found itself increasingly in the public eye – for good and for ill.
And the data clearly suggests that government has been among those turning its attention to AI.
The ‘Announcements’ section of GOV.UK typically publishes between 8,000 and 12,000 articles each year, including press releases, ministerial speeches, public information, and transparency data.
Of the 9,780 government announcements made in 2012, only one – a speech by then education secretary Michael Gove – made any reference to artificial intelligence.
In each of 2013 and 2014 there were five mentions of AI in official government releases, rising to six in 2015. Several of these, again, stemmed from Michael Gove and related to potential uses of the technology in the education sector.
But AI was also referenced by the Treasury several times, including in its 2015 appointment of Demis Hassabis to the newly created National Infrastructure Commission. Hassabis, who founded DeepMind and has remained one of the company’s senior executives since its acquisition by Google, was last year appointed as the government’s chief AI adviser.
The increase in government’s interest in the technology accelerated in 2016, with 25 mentions, and continued in 2017, with 108 references to AI. These came from a wide range of departments, including BEIS, DCMS, numerous notices from the Ministry of Defence, and even the prime minister’s office – with Theresa May flagging up the potential for automation to be deployed by social networks in identifying and removing content inciting terror or hatred.
By May 2018, the PM was confidently predicting that, over the course of the next 15 years, the use of AI would help prevent an additional 22,000 people a year from dying of cancer.
“The UK will use data, artificial intelligence and innovation to transform the prevention, early diagnosis and treatment of diseases like cancer, diabetes, heart disease and dementia by 2030,” she said. “The development of smart technologies to analyse great quantities of data quickly and with a higher degree of accuracy than is possible by human beings opens up a whole new field of medical research and gives us a new weapon in our armoury in the fight against disease.”
That speech was one of the 225 articles referencing AI published by government in 2018 – among them the announcement of its £1bn AI Sector Deal. The 166 released so far this year mean that, since the beginning of 2018, 2.5% of all government announcements – on any topic – have discussed artificial intelligence.
The AI Sector Deal – which provided £406m of government backing for the recruitment of an additional 8,000 computer science teachers, and the creation of a National Centre for Computing Education – is one of a number of big-ticket announcements made in the last 18 months.
"The question there isn't so much can you write that algorithm – because you can... the more important question is whether you should write that algorithm in the first place or not?"
Dr Cosmina Dorobantu, Alan Turing Institute
The Office for Artificial Intelligence is a new government entity jointly run by the Department for Business, Energy and Industrial Strategy and the Department for Digital, Culture, Media and Sport.
Another new organisation under the watch of DCMS is the Centre for Data Ethics and Innovation, which has a remit to “develop the right governance regime for data-driven technologies” – chiefly AI. Its first two areas of focus are bias and targeting.
In June, the Office for AI and the Government Digital Service jointly published the Guide to using artificial intelligence in the public sector.
The guide – which covers how to assess, plan, and manage the use of AI, before moving on to how to do so ethically and safely and, finally, citing existing government examples – was created in partnership with the Alan Turing Institute: a network of 13 universities that together form the UK’s national academic institution for data science and AI.
The Turing provided the section of the guide dedicated to ethics and safety.
Dr Cosmina Dorobantu, deputy director of the institute’s public policy programme, tells PublicTechnology: “We're seeing, more and more, that government departments or policymakers don't come to us just with technical questions, but also with ethical questions. So, for example, local authorities around the country, because of budget cuts over the past few years, have struggled to have enough social workers to identify children who are at risk. So, they're looking more and more towards implementing a machine-learning algorithm that would be able to identify those children.”
She adds: “The question there isn't so much can you write that algorithm – because you can; [although there is] a debate as to how accurate those algorithms can be. But the more important question is whether you should write that algorithm in the first place or not? So, we are seeing a lot of those questions, and I don't think many of those organisations are equipped to deal with them. In local authorities, for example, it's usually the person who administers the IT system – and they're not in a position to answer.”
Planning and preparation
The guidance for assessing, planning, and managing the use of artificial intelligence is split into four areas: understanding AI; assessing if it is the right solution; planning and preparing for implementation; and managing an AI project.
The first of these sections addresses a question that many people – including those of us who may have written what seems like thousands of articles about the technology – may still feel totally unqualified to answer: what, actually, is artificial intelligence?
The government’s guide defines it thus: “At its core, AI is a research field spanning philosophy, logic, statistics, computer science, mathematics, neuroscience, linguistics, cognitive psychology and economics. AI can be defined as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence. AI is constantly evolving, but generally it: involves machines using statistics to find patterns in large amounts of data; is the ability to perform repetitive tasks with data without the need for constant human guidance.”
Over the course of researching this week, PublicTechnology asked a number of people to provide their own definition for AI.
Eleonora Harwich, director of research at think tank Reform, says that it is “a broad category, with lots of different things”. But she offers a succinct one-line definition: “An agent that is able to intelligently respond to its environment – and, by intelligence, it is responding to stimulus.”
Having – hopefully – established a definition for AI, the government guide goes on to offer advice on how to judge whether the technology is “the right solution” for the needs of the problem or task at hand.
The government makes five recommendations for what should be considered.
The first is whether or not “data containing the information you need” exists, and the second is whether it is “ethical and safe” to use it.
The third consideration is to assess if there is “a large quantity of data for the model to learn from”, and the fourth is whether “the task is large scale and repetitive enough that a human would struggle to carry it out”.
The final factor to consider is the extent to which the results “would provide information a team could use to achieve outcomes in the real world”.
Once an organisation is ready to plan for implementation, it should progress along the lines of any agile project, beginning with a discovery phase, followed by alpha and then beta.
During discovery, organisations should first assess their needs and the data they have access to, before building a team. Roles they are likely to require include data architects, scientists, and engineers, as well as ethicists and domain experts.
Managing infrastructure and suppliers is another step that should be dealt with during the discovery phase, as is preparing and securing data, and trying to ensure its diversity.
During an alpha phase, public sector entities are encouraged by the government to undertake six steps: “split the data; create a baseline model; build a prototype of the model and service; test the model and service; evaluate the model; assess and refine performance”.
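For readers wanting a sense of what those six alpha steps look like in practice, the workflow can be sketched in a few lines of code. This is a purely illustrative, hypothetical example – using the scikit-learn library and one of its bundled toy datasets – and is not drawn from the government guide itself, which does not prescribe any particular tools:

```python
# Sketch of the guide's alpha-phase steps: split the data, create a
# baseline model, build a prototype, test it, evaluate it, then refine.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Split the data into training and held-out test sets
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 2. Create a baseline model – the simple benchmark any
#    prototype must beat to justify itself
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
baseline_acc = accuracy_score(y_test, baseline.predict(X_test))

# 3-4. Build and test a prototype of the model
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# 5. Evaluate the prototype against the baseline on unseen data
model_acc = accuracy_score(y_test, model.predict(X_test))
print(f"baseline accuracy: {baseline_acc:.2f}, "
      f"prototype accuracy: {model_acc:.2f}")

# 6. Assess and refine: iterate on features, model choice and
#    parameters until performance is acceptable for the service
```

The point of the baseline step is worth underlining: a model that merely matches a trivial guess-the-most-common-answer benchmark has learned nothing useful, however sophisticated it appears.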
Once a project has moved into beta, the first step should be to “performance-test the model with live data and integrate it within the decision-making workflow”. The model should be subject to “continuous evaluation” and, all the while, the organisation should strive to “make sure users feel confident in using, interpreting, and challenging any outputs or insights generated”.
When AI has been implemented and is up and running, the government guidelines flag up six governance factors to take into account: safety; purpose; accountability; testing and monitoring; public narrative; and quality assurance.
All of which adds up to a lot of work that is needed before government and the wider public sector can begin reaping the benefits of artificial intelligence.
But, in the foreword of the AI guidelines, minister for implementation Oliver Dowden was clear that the effort brings potentially transformative benefits for all parties.
“There are huge opportunities for government to capitalise on this exciting new technology to improve lives,” he says. “We can deliver more for less, and give a better experience as we do so. For citizens, the application of AI technologies will result in a more personalised and efficient experience. For people working in the public sector it means a reduction in the hours they spend on basic tasks, which will give them more time to spend on innovative ways to improve services. When government and citizens benefit, so does the economy.”
Dowden adds: “We want the public sector to understand AI and embrace the opportunities here.”
This article forms part of PublicTechnology’s dedicated AI Week, in association with UiPath. Look out over the coming days for lots more content – including an exclusive webinar in which experts from the public and private sector will discuss all the major issues.
Tomorrow, we will bring you case studies of how two of the public sector’s biggest organisations – the Department for Work and Pensions and HM Revenue and Customs – are using artificial intelligence in their operations. On Wednesday, an exclusive webinar discussion – in which a panel of private and public sector experts will debate all the major issues related to government's use of AI – will be available to view on demand. Click here to register to do so – free of charge.