Government hopes transparency standard can build trust in departments’ use of algorithms

Written by Sam Trendall on 30 November 2021 in News

New guidelines created by Central Digital and Data Office


The government has published a new transparency standard intended to inform how algorithms are built and deployed across the public sector.

The guidance was created by the Cabinet Office-based Central Digital and Data Office (CDDO), working alongside the Centre for Data Ethics and Innovation (CDEI), part of DCMS. It consists of a data standard for departments to refer to and a transparency template through which they are encouraged to provide information on their use of algorithms.

The former maps out a comprehensive range of transparency data that departments might seek to provide for an algorithmic tool they have deployed. This includes almost 40 pieces of information related to issues such as the purpose of the automated tool, the data being collected and processed, which organisations have access to that data, and which public body and named senior manager are ultimately accountable for the system.

The template offers public sector entities a blueprint for providing all this information to the CDDO, which can then publish it in a centralised online collection.
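As an indication of what a completed template might contain, the sketch below models a transparency record as a simple Python dictionary. The field names and example values are illustrative only and are not drawn from the published standard.

```python
import json

# Illustrative only: a hypothetical transparency record loosely based on the
# kinds of information the standard asks for (purpose, data handling,
# accountability). All field names are invented for this sketch.
example_record = {
    "tool_name": "Example eligibility-checking chatbot",
    "purpose": "Helps members of the public check eligibility for a service",
    "data_collected": ["name", "postcode", "benefit status"],
    "data_access": ["Owning department", "Contracted supplier"],
    "accountable_body": "Example Department",
    "accountable_senior_manager": "Named senior responsible owner",
    "uses_machine_learning": False,
}

# A department could serialise such a record for submission to a central
# online collection, for example as JSON.
print(json.dumps(example_record, indent=2))
```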

Use of the standard will be “piloted by several government departments and public sector bodies in the coming months”, the government said.




“Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022,” it added.

The launch of the standard does not represent the introduction of a mandatory register or a requirement for public bodies to publish information on the algorithms they use to support operations and service delivery, or inform policymaking.

But, according to Imogen Parker of the Ada Lovelace Institute – which has long called for such a requirement – the standard is “an important step towards achieving this objective, and a valuable contribution to the wider conversation on algorithmic accountability in the public sector”.

“We look forward to seeing trials, tests and iterations, followed by government departments and public sector bodies publishing completed standards to support modelling and development of good practice,” she added.

Departments taking part in the pilot are encouraged to provide details of all algorithms used by their organisation, although the CDDO indicated that “in the initial phase… we’ll prioritise publishing information about tools that either: engage directly with the public – for example a chatbot; [or] meet at least one criteria” in any of the three areas that determine whether a tool is in scope of the standard: technical specifications; potential public effect; and impact on decision-making.

Algorithms will fall under the scope of the standard if their technical design includes complex statistical or data analysis, or the use of machine learning. 

The public-effect test, meanwhile, places in scope all systems that have “a potential legal, economic, or similar impact on individuals or populations; affects procedural or substantive rights; [or] affects eligibility, receipt or denial of a programme – for example receiving benefits”.

The standard will also apply to algorithms that replace, assist or add to human decision-making.
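Taken together, the preceding paragraphs describe a simple decision rule: a tool falls in scope if it meets at least one criterion in any of the three areas. The following sketch expresses that logic as a small Python function; it is not part of the standard, and all attribute names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical attributes standing in for the three areas described above.
@dataclass
class AlgorithmicTool:
    uses_complex_analysis: bool       # complex statistical or data analysis
    uses_machine_learning: bool
    affects_individuals: bool         # legal, economic or similar impact
    affects_rights_or_eligibility: bool
    influences_human_decisions: bool  # replaces, assists or adds to decisions

def in_scope(tool: AlgorithmicTool) -> bool:
    """Return True if the tool meets at least one criterion in any area."""
    technical = tool.uses_complex_analysis or tool.uses_machine_learning
    public_effect = tool.affects_individuals or tool.affects_rights_or_eligibility
    decision_making = tool.influences_human_decisions
    return technical or public_effect or decision_making

# Example: a machine-learning chatbot that affects benefit eligibility
chatbot = AlgorithmicTool(True, True, True, True, False)
print(in_scope(chatbot))  # True
```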

The government claims that the guidelines represent “one of the world’s first national standards for algorithmic transparency”.

Lord Theodore Agnew, minister for efficiency and transformation – a role split between Cabinet Office and HM Treasury – said: “Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery. However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact.”

Adrian Weller, AI programme director at the Alan Turing Institute and board member of the CDEI, said: “This is a pioneering move by the UK government, which will not only help to build appropriate trust in the use of algorithmic decision-making by the public sector, but will also act as a lever to raise transparency standards in the private sector.”

 

About the author

Sam Trendall is editor of PublicTechnology. He can be reached on sam.trendall@dodsgroup.com.

