‘Often companies want to do the right thing – but don’t know what that looks like’ – inside government’s data ethics hub

The Centre for Data Ethics and Innovation was created with a remit to support opportunity in AI and data, while tackling the risks. PublicTechnology talks to executive director Felicity Burch.

“We start from the premise that there is lots of opportunity from data- and AI-driven innovation – but there are also very real risks,” says Felicity Burch, executive director of the Centre for Data Ethics and Innovation.

The starting point for CDEI itself was the budget delivered in November 2017 by then chancellor Philip Hammond. The red book announced government’s intent to create the centre, which began operating in earnest in early 2019 and now employs 45 civil servants, in an organisation based in the Department for Science, Innovation and Technology.

Burch tells PublicTechnology that there are various stakeholders – across industry, civil society and academia – with an important role to play in making sure innovation in data science and artificial intelligence does not come at the expense of ethical considerations.

But the establishment of CDEI speaks to the fact that “there is a specific role for government” in ensuring that consideration of issues including bias, privacy, and transparency is embedded into policymaking and legislation.

“The CDEI itself is not a policy team, but we work really closely with the policy teams in DSIT – both the data and the AI policy teams, in particular, [in] their thinking about the regulatory landscape,” she adds.

Burch cites the recently launched Fairness Innovation Challenge – which is supported by two watchdogs: the Equality and Human Rights Commission and the Information Commissioner’s Office – as an exemplar of how the centre supports the development of policy and regulation. The launch of the challenge in June, which followed a similar exercise in relation to privacy-enhancing technologies, came several months after the government published a white paper setting out its intended approach to the regulation of AI.

That document put forward five guiding principles: safety, security, and robustness; transparency and explainability; accountability and governance; contestability and redress; and fairness.

The challenge focuses on how the last of these – fairness – can best be protected by regulators.

“Discrimination is already illegal but, when it comes to whether AI systems themselves are fair, there are quite a few issues with testing that and figuring that out,” Burch explains. “One of them is: how do you get the data to do the testing in the first place? If you’re testing for fairness, you’re looking at people’s demographic characteristics; do you have accurate, robust data on that? Do people want to share their data with you? And, also, how do you measure it, and what statistical techniques do you use to verify whether something is fair?”
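
Neither the white paper nor the challenge prescribes a particular fairness metric, but a minimal sketch of one widely used statistical check – demographic parity, which compares positive-outcome rates across demographic groups – gives a flavour of the kind of measurement Burch describes. The data, group labels and figures below are entirely hypothetical.

```python
# A minimal sketch of one common statistical fairness check: demographic
# parity, which compares positive-outcome rates across groups. This is NOT
# a method prescribed by CDEI or the Fairness Innovation Challenge, and
# the data below are entirely hypothetical.
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Rate of positive outcomes (e.g. approvals) for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest group selection rates;
    a gap near zero suggests similar treatment on this particular metric."""
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (1 = positive outcome) and self-reported
# demographic labels -- the very data Burch notes can be hard to obtain.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "b", "b", "b", "b", "a", "a", "b"]

print(selection_rates(outcomes, groups))         # {'a': 0.6, 'b': 0.4}
print(demographic_parity_gap(outcomes, groups))  # ~0.2
```

Even this toy check surfaces the data problem Burch highlights: it can only run if accurate demographic labels exist in the first place.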

She adds: “There are practical challenges with putting policy into practice – and that’s really where we see ourselves playing a role. We are convening with policy, with the regulators, and with the private sector.”

The first stage of the challenge will seek to better understand issues that could lead to bias or discrimination in the operation of AI systems. These are likely to include access to the right data, the application of appropriate statistical methods, and ensuring that any measures to mitigate potential bias are legal and ethical.

“The second phase, once we’ve got those real-life challenges, will be to run an innovation challenge to bring together the tech community, the regulatory community, and the academic community to try and actually address those problems,” Burch says.

Assure thing
The fairness challenge forms part of a broader programme of work in the area of AI assurance. The CDEI chief compares the role of assurance to that of the kitemarks and standards used to certify providers and boost consumer confidence in other sectors, such as food and drink or financial services.

“Assurance is a really important parallel track alongside regulation,” Burch says. “It helps organisations to understand both whether the AI products that they’re buying actually work, and whether they work in the way that they’re intended.”


March 2019 – Date of publication of CDEI’s first strategy, 16 months after its creation was announced

45 – Number of staff currently working for CDEI

53% – Proportion of respondents that agree that data-powered services could benefit them, compared with 15% that disagree

14 – Current number of case studies of AI assurance techniques in CDEI’s online repository


To support those developing automated systems to demonstrate this kind of efficacy, the CDEI this summer made available its portfolio of AI assurance techniques. Published on GOV.UK, the searchable online repository – developed in collaboration with industry body techUK – currently consists of 14 case studies, each of which details methods being used by organisations “to support the development of trustworthy AI”. The portfolio includes examples of procedures for bias and compliance audits, impact and risk assessments, performance testing, and certification.

“I think that this is a really important step from us, because quite often we hear from companies that want to do the right thing – but don’t know what the right thing looks like,” Burch says. “So what we thought would be really helpful would be to share what current practice in industry looks like. And that is something that we will be evolving over time as well.”

Tools and transparency
Alongside this work to support the trustworthiness of the technology itself, the CDEI is also seeking to enable the public to put their trust in AI systems via an initiative that Burch characterises as one of the centre’s most significant areas of work: the Algorithmic Transparency Recording Standard.

First unveiled in November 2021, the standard provides a consistent framework through which public bodies can provide information on their use of algorithmic tools and the decision-making contexts in which they are being used.

Alongside the standard itself and online guidance for its use, the centre has also published on GOV.UK a collection of completed reports on existing uses of algorithms in the public sector. These include a tool used by the Food Standards Agency to help local authorities prioritise hygiene inspections by predicting the likely rating of restaurants and shops, and a program deployed by West Midlands Police to help understand how various factors may impact the success or failure of convictions brought for rape and other serious sexual offences.
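
To give a flavour of what such a record contains, the sketch below mocks up an entry for the FSA tool described above. It is purely illustrative: the field names and details are invented and do not reproduce the actual ATRS schema, which is published on GOV.UK alongside guidance for its use.

```python
# Purely illustrative: these field names and details are invented and do
# NOT reproduce the actual ATRS schema published on GOV.UK.
hypothetical_record = {
    "tool_name": "Food hygiene inspection prioritiser",
    "organisation": "Food Standards Agency",
    "purpose": "Predict likely hygiene ratings so that local authorities "
               "can prioritise inspections of restaurants and shops",
    "decision_context": "Supports prioritisation; inspectors decide outcomes",
    "data_used": ["past inspection results", "business type and location"],
    "human_oversight": "Outputs are reviewed by local authority officers",
    "risks_and_mitigations": "Performance monitored for drift and bias",
}

# A single consistent shape is the point of the standard: it lets the
# public compare disclosures across bodies rather than parse ad-hoc
# statements.
for field, value in hypothetical_record.items():
    print(f"{field}: {value}")
```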

“Being transparent when you are using algorithms is a good way to build trust, and to demonstrate – to the general public and the people who the algorithms are affecting – that you’ve got the right governance processes in place,” says the CDEI director. “But, again, people were coming back to us and saying: what does that look like in practice? The Algorithmic Transparency Recording Standard is exactly that: a way of recording how you are being transparent that is consistent across the public sector. And we’re very keen to see that rolled out far and wide.”

Six completed transparency reports have been published to date. Some contentious uses of algorithms – including those that have been subject to high-profile criticism, controversy, or even legal challenge – are conspicuous by their absence.

For example, PublicTechnology last year reported on concerns about the potential for bias to be perpetuated by automated tools being used by the Department for Work and Pensions to help detect potential fraud in Universal Credit claims. The DWP – which is planning to invest £70m in ramping up its use of algorithms and analytics to help tackle benefit fraud – continues to resist calls from civil society groups to use the standard to release transparency information on the operation of such tools.

We ask Burch whether she would like to see a marked increase in the coming months in the number of organisations using the standard to publish information, and whether more incentives – or consequences – would be helpful in ensuring that they do so.

“It’s a really important question, and the answer to whether we want more organisations to use this is: yes, absolutely,” she responds. “And, indeed, we are working with a number of organisations at the moment to get them ready to publish their records. We have done this in quite an iterative way and are working with our colleagues in the public sector to make this a tool that’s easy and practical for them to use. We want to [continue] that iterative rollout – but I would absolutely like to see this scale up.”

Burch adds: “Right now we’re really pushing on making this as easy as possible for organisations, and driving the point that there are a lot of advantages to them in doing this.”

Increasing uptake of the standard – particularly beyond central government – and delivery of the Fairness Innovation Challenge are singled out by Burch as her top objectives for the CDEI for this year and next.

“I’m really keen to get out and talk to colleagues in the public sector about what they need from us – that is a personal priority over the next few months,” she adds.

The centre’s work will continue to be supported and informed by regular surveys to track public attitudes to the use of data and AI.

The latest edition of the tracker, based on research conducted in summer 2022, showed that 53% of the 4,000-plus respondents agreed that data can be used to create services that would benefit them – compared with only 15% that disagreed. But the study also found creeping concerns about the potential risks of data use – in particular the possibility that sensitive personal information could be stolen or sold.

Burch says: “What has always really struck me is that there are things that organisations can do to bring the public along with them and to build trust. And there are a few really key determinants of that. One of which is about having the right governance structures in place. Another is that people care about what you’re using the data for – so, where it has a societal or personal benefit, people are much more likely to be willing to share their data. Articulating that and telling that story really matters.”

PublicTechnology asks the CDEI leader – who, before joining government in August 2021, held senior roles at the Confederation of British Industry and the manufacturing trade body now known as Make UK – whether the insights gained in her professional life have changed her own relationship with data and tech.

“I’m definitely careful about what data I share about myself, and I don’t think you can do a job like this and not be aware – it does make me interested in other organisations’ approaches to privacy,” she says. “But, at the same time, working with the public and private sector, and seeing some of the quite exciting things that organisations are trying to do with data and AI, also makes me really quite optimistic that there are lots and lots of opportunities out there for these technologies as well.”

Sam Trendall
