AI Week: Turing Institute on why government should use data science to ‘make better policy’
Helen Margetts and Cosmina Dorobantu from the Turing’s public policy programme talk to PublicTechnology about ethics, explainability, and why government has ‘unique expertise’ it can benefit from
The British Library on London’s Euston Road is probably best known for its oldest items.
The longest-surviving pieces among its 200 million-strong collection are Chinese oracle bones believed to date from about 1,500 BC.
Other notable items in its ownership include one of Leonardo da Vinci’s notebooks and, fittingly for the world’s largest library, a copy of what is recognised as the world’s oldest mechanically printed book – the Gutenberg Bible.
But for all its ties to the past, the library (pictured above) also houses a growing movement towards the future.
Founded in 2015, The Alan Turing Institute is the UK’s national research institution for data science and – since 2017 – artificial intelligence. It was established by a partnership of five universities: Oxford, Cambridge, Edinburgh, Warwick, and University College London.
Last year, a further eight joined the fray: Leeds, Manchester, Newcastle, Birmingham, Exeter, Bristol, Southampton, and Queen Mary University of London.
Among its various programmes of work, the Turing last year launched a public policy programme, led by Professor Helen Margetts. She tells PublicTechnology the initiative was set up “to work with policymakers to research and develop ways to use AI and data science to make better policy”. There is “a lot of variation” in the ways and extent to which the institute works with government’s various departments, she says.
But the programme has already engaged with many people across Whitehall.
[Sidebar statistics: year the Alan Turing Institute was established; number of member universities; new projects launched in 2018/19; grant money awarded in 2018/19 by government bodies, universities, and industry partners; year Alan Turing published his celebrated paper imagining ‘machines that think’]
“We started off just by talking to policy – we wanted to see how much interest there was, and what were the issues,” Margetts says. “So, we talked to hundreds of policymakers, right across the government. And got some idea of the technical issues and the ethical issues – and there are just as many ethical issues as technical. We have developed individual relationships with departments. With some departments, we have partnerships and agreements. With others, we might play a kind of convening role.”
Examining the ethics of using machine learning in children’s social care is an example of one of the research projects being run by the public policy programme. Another project is looking at possible uses of AI across the criminal justice system – from identifying offenders to improving prisons.
The Turing is also collaborating with regulators, including working with the Information Commissioner’s Office to deliver Project ExplAIn which, according to the interim report published in June, seeks to offer “practical guidance to assist organisations with explaining AI decisions to the individuals affected”.
A detailed “explainability framework” is due to be published imminently.
“Over the last year we ran a series of citizen juries, which were really interesting, and it was fascinating for us to get their views on what type of explanation will they want from an AI system that is informing a decision about them,” says Dr Cosmina Dorobantu, deputy director of the programme.
Margetts is leading a project examining the role that AI could play in tackling hate speech. Most previous work in this area has, she says, been done by online platforms themselves, who “are doing it in a very kind of reactive sort of way – they just don't want it to give them a bad reputation”.
She says: “The way research in the past has developed in this area is that there are lots of tools to tackle one sort of hate speech, at one point in time, on one platform, targeted at one category of person. And then these tools are built, and somebody writes a paper about it, and then they sort of chuck it over the wall. And there's a big pile of papers on the other side of the wall!”
One problem, Margetts explains, is the quality and availability of data that could inform the machine learning algorithms behind these tools.
She adds: “There's a huge shortage of data, and there's a huge shortage of training data; you've got literally thousands of tools based on, say, 25 data sets – which are in themselves not that great. So, as well as developing our own classifiers, we hope to be able to do some synthesis work to make those tools available to researchers and policymakers, so that they can actually know which are any good, and which can be used.”
Leading the way
Margetts continues that, although government has a tendency to be “left behind” by emerging technology, AI presents “possibilities for innovation in which government has unique expertise”.
However, despite the huge number of interactions government has with citizens, it has not always made use of the data that is a necessary by-product.
“I think the focus should be on seeing AI as an opportunity to provide better public services and to improve policymaking, as opposed to as a way to cut costs."
Dr Cosmina Dorobantu, The Alan Turing Institute
“It has massive resources of transactional data and, traditionally, all governments haven't been very good at using [that] to make policy. They've relied on surveys and other forms of data. If all your data about citizens is kept in huge filing cabinets, it's very difficult and very labour-intensive to process that data. But now digital data is a by-product of government. Because there's no history of using that transactional data, public agencies tend not to use it – but there's huge potential now for them to do so.”
Over the last decade, more judicious investment in IT has often been characterised by government as a way of saving taxpayer money. But Dorobantu says that this is an unhelpful way to approach AI.
“If you're looking at it as a way to save money, you're not going to be investing enough in it to actually make it work the way it should,” she says. “I think the focus should be on seeing it as an opportunity to provide better public services and to improve policymaking, as opposed to as a way to cut costs.”
“We're moving in that direction – but I think the resources need to follow the enthusiasm.”
This article forms part of PublicTechnology’s dedicated AI Week, in association with UiPath.