Government must earn public trust that AI is being used safely and responsibly

Leaders from two of government’s core digital and data units – the Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI) – introduce new guidelines intended to promote transparency in the public sector’s use of algorithms


Algorithms have the potential to improve so much of what we do in the public sector, from the delivery of frontline public services to informing policy development across every sector. From first responders to permanent secretaries, artificial intelligence can enable individuals to make better and more informed decisions.

In order to realise that potential over the long term, however, it is vital that we earn the public’s trust that AI is being used in a way that is safe and responsible.

One way to build that trust is transparency. That is why today, we’re delighted to announce the launch of the Algorithmic Transparency Recording Standard (the Standard), a world-leading, simple and clear format to help public sector organisations to record the algorithmic tools they use. The Standard has been endorsed by the Data Standards Authority, which recommends the standards, guidance and other resources government departments should follow when working on data projects.

Enabling transparent public sector use of algorithms and AI is vital for a number of reasons.

The public wants to understand more about how their information is being used: the Standard is a crucial step to enable this.

Firstly, transparency can support innovation in organisations, whether that is helping senior leaders to engage with how their teams are using AI, sharing best practice across organisations, or simply doing both of those things better and more consistently than before. The Information Commissioner’s Office took part in the piloting of the Standard and noted how it “encourages different parts of an organisation to work together and consider ethical aspects from a range of perspectives”, as well as how it “helps different teams… within an organisation – who may not typically work together – learn about each other’s work”.

Secondly, transparency can help to improve engagement with the public, and reduce the risk of people opting out of services – where that is an option. If a significant proportion of the public opt out, this can mean that the information the algorithms use is not representative of the wider public and risks perpetuating bias. Transparency can also facilitate greater accountability: enabling citizens to understand or, if necessary, challenge a decision.

Finally, transparency is a gateway to enabling other goals in data ethics that increase justified public trust in algorithms and AI.

For example, the team at The National Archives described the benefit of using the Standard as a “checklist of things to think about” when procuring algorithmic systems, and the Thames Valley Police team who piloted the Standard emphasised how transparency could “prompt the development of more understandable models”.

Ensuring meaningful transparency
We know that communicating with the public on topics such as statistical models and machine learning isn’t easy. This is why the Public Attitudes team at the Centre for Data Ethics and Innovation conducted deliberative research with the public to ensure that the Standard enables meaningful transparency.

Over three weeks, the team gradually built up participants’ understanding and knowledge about algorithm use, which culminated in an in-depth, co-design session. Here, participants reviewed and edited prototype versions of the Standard to develop a practical approach to transparency that reflected their expectations.

This resulted in the two-tier structure you see now in the Standard.

Tier 1 includes a simple, short explanation for the general public of how and why the algorithmic tool is being used, along with instructions on how to find out more. For those who want to go deeper, Tier 2 provides further detail, including on the technical specification and the decision-making process.
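
To make the two-tier shape concrete, here is a minimal sketch of a transparency record as a Python data structure. This is purely illustrative: the class and field names are assumptions drawn from the description above, not the official wording or template of the Standard.

```python
from dataclasses import dataclass

# Illustrative sketch only. Field names are assumptions based on the
# description of the Standard above, not the official template.

@dataclass
class Tier1:
    """Short, plain-language summary for the general public."""
    tool_name: str
    how_and_why_used: str        # how and why the algorithmic tool is used
    how_to_find_out_more: str    # pointer to further information

@dataclass
class Tier2:
    """Fuller detail for readers who want more information."""
    technical_specification: str   # e.g. model type and data used
    decision_making_process: str   # how outputs feed into human decisions

@dataclass
class TransparencyReport:
    tier_1: Tier1
    tier_2: Tier2
```

The split mirrors the structure participants co-designed: a short public-facing summary up front, with the fuller technical and process detail behind it.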

We’ve also ensured that our policymaking has been as open as possible, so that we are benefiting from all the knowledge and sources available. The CDEI and the Central Digital and Data Office worked closely with civil society organisations – including with experts at the Ada Lovelace Institute and the Alan Turing Institute – when designing the Standard and have piloted it with various organisations across the public sector.

What next?
We believe we’re on the right track, and are pleased that the Standard has been recognised internationally by the OECD and the Open Government Partnership. However, we want to continue building on this strong foundation.

The CDDO and CDEI teams remain available to support organisations completing algorithmic transparency reports, and we’d welcome any questions sent to algorithmic.transparency@cdei.gov.uk.

We expect to see more teams using the Standard and uploading their transparency reports to the collection on GOV.UK, and we are extremely grateful to those who have done so already. The first six transparency reports have now been published, with more to follow.

We’re excited to see how the Standard continues to develop in 2023 and how we will work with public sector partners in the UK and internationally to implement it.

