Chair of CDEI also encourages public sector to ‘use AI in ways that delight the citizen’
The government’s recently established data ethics and artificial intelligence advisory body will focus its early efforts on examining the impact of bias and micro-targeting.
The Centre for Data Ethics and Innovation (CDEI) – which will next month publish a full strategy document outlining plans for its initial pre-statutory phase – has already identified those two areas as the issues that most require its attention, according to chair Roger Taylor.
The centre will look at the risk – and potential impact – of systems reinforcing bias in five areas: human resources; criminal justice; policing; healthcare and social care; and financial services.
Taylor, who was speaking at the Public Sector AI summit hosted in London last week by PublicTechnology and our parent company Dods, told attendees that CDEI will examine how best to test AI systems to ensure that the public “believe that the governance of the system is in line with societal values”.
Its work on examining micro-targeting, meanwhile, will look at the effect on the political landscape of data-driven personalised advertising.
“In all of those areas we will work with partners – we want to hear from people,” Taylor said. “Our role is to be additive. We will build on the work that others are doing.”
The centre was first announced in the Budget of November 2017 and recruited its board members over the course of last year; the board met for the first time in December. In time, the government intends to give CDEI statutory powers of some sort, but for now its role is “to identify where there are moral issues, and where existing regulation can deal with it”, according to Taylor – a former journalist who founded the healthcare-focused data analytics firm Dr Foster.
“Our role is not to decide whether something is unethical or not. Our approach is very much informed by a need to get from principles to practice,” he said. “Part of our objective is to establish that this is a useful thing to do. We do not have powers, we cannot go out and demand people go and do things. We can look at the landscape and make recommendations for what could help.”
The centre is needed because artificial intelligence and data analytics bring both huge opportunities and major dangers, Taylor said.
“If we get this right, the ability to transform healthcare, education, social care… has the potential to bring huge benefits to people,” he added. “But there are enormous risks and, if we get it wrong, we will either fail to deploy the technology and people will lose access to services – or we will deploy it wrongly, and people will suffer consequences that are unforgivable.”
Addressing a room filled primarily with public sector digital and technology professionals, Taylor advised attendees that, if the government wishes to reap the benefits of AI, it must earn and maintain the public’s trust or “we will not get permission to use these technologies”. He added that examining how commercial companies deploy – and market – AI could be useful in this regard.
“The public are right to withhold their trust until they have had answers to questions that worry them. And, in many cases, we have not provided them with the answers they would like. It can increase their sense of powerlessness,” Taylor said. “Much of the trust that goes to the private sector… is because, in marketing speak, they delight their customers. We like Google because their results are good. Facial recognition is cool. We get a lot of great stuff [from private sector technology].”
He added: “In the public sector, we do a lot less of that. We need to think of opportunities to use AI in ways that delight the citizen. Much of the time we are thinking about how AI can be used in back-office systems.”