Why government is ‘failing’ on AI openness

The body dedicated to upholding ethical standards across the public sector has published a major report examining how to ensure those standards are not threatened by AI and automation.

Selflessness, integrity, objectivity, accountability, openness, honesty, and leadership.

These are the seven characteristics that should guide and inform the work of all holders of public office – from cabinet ministers and permanent secretaries, through to front-line social workers and nurses. Known as the Nolan Principles, this septet of tenets was codified in 1995 by the then-newly established Committee on Standards in Public Life (CSPL).

In the intervening 25 years, the committee – an independent body sponsored by the Cabinet Office – has continued its work to monitor ethical standards in public life, and advise the prime minister on how to best ensure they are upheld.

All the while, those seven standards have remained constant.

But, as government increasingly adopts automation, machine learning, and artificial intelligence technologies, the nature of public service – and public servants – is changing around those unchanging rules. 

“We positively think a new AI regulator is not a good idea… they cannot possibly manage it all in the universal way it is going to be used”
Dame Shirley Pearce, CSPL member

It is hard to make the case that automated decision-making software can behave selflessly, or that a robot can be held accountable.

Which is why the CSPL’s latest piece of work was a major investigation of how AI might affect standards in public life.

The resultant report was published last week. Across its 78 pages, it makes 15 recommendations covering processes, ethics, and regulation. 

But the headline finding, according to committee chair Lord Jonathan Evans, was that three of the seven Nolan principles are likely to be at least somewhat impacted by the rise of AI.

Speaking at the report’s launch event, Lord Evans said that new technologies posed potential challenges to the public sector’s openness, accountability, and objectivity.

On objectivity, Evans says that, if the data being used is not comprehensive enough, or the algorithms are not correctly tuned, “it is widely recognised that… you can end up embedding and enlarging existing biases, and the governance of these systems is critical, if we are to ensure that they are objective”.

A lack of governance is also an issue in the area of accountability, he adds, where greater protection of citizens and their rights is needed.

But it is perhaps the principle of openness that, as it stands, is most threatened by AI.

Evans says that, when commencing work on the report, the first thing the committee set about doing was “to find out where this technology is being used”.

“Which was easier said than done,” he says. “We talked to some investigative journalists, and they could not find out either… we do not think there is a conspiracy, but we do think there is a failing. [Openness] is part of giving people confidence and part of giving people redress.”

He adds: “It is very difficult for us to be certain where artificial intelligence and algorithms are being used in public services. In our view, that is a failure. People need to know where and how it is being used, and where they can get redress if they feel it has not been used fairly.”

Regulation and procurement
The first of the 15 recommendations is that the government should “identify, endorse and promote” appropriate ethical principles and guidance. The committee notes that there are several sets of guidelines already in existence, and that clarity is needed on where and how each of these ought to be applied.

The second measure recommended is that, before using the technology in the delivery of citizen services, “all public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulation”.

Thirdly, the committee believes that the Equality and Human Rights Commission should work alongside the Alan Turing Institute and the Centre for Data Ethics and Innovation to create guidance on “how public bodies should best comply with” equalities legislation.

The fourth recommendation is that a “regulatory assurance body” should be established to “identify gaps in the regulatory landscape”. 


1995: Year in which the Nolan principles were published
Openness, accountability, and objectivity: The three principles that CSPL believes will be impacted by AI
15: Number of recommendations made by the committee
£90m: Estimated value of upcoming dynamic purchasing system for AI


But the committee is decidedly not recommending the creation of a dedicated regulator for artificial intelligence.

Indeed, according to committee member Dame Shirley Pearce, “we positively think it is not a good idea [to create an entity] where everybody thinks they are managing all [the regulatory issues] – because they cannot possibly manage it all in the universal way it is going to be used”.

“I think all existing regulators need to think about how they operate and the impact of AI and automated decision making on their processes,” she added. “Professional bodies also need to be thinking about how they should set educational standards. This is going to be a very big piece of work for those regulators and professional bodies. But we hope the Centre for Data Ethics and Innovation will play a part in that.”

The fifth and sixth recommendations relate to the procurement process. Firstly, the committee recommends that the requirement for suppliers to adhere to ethical standards should be “explicitly written into tenders and contractual arrangements”. Secondly, the Crown Commercial Service – prior to the launch this year of a £90m dynamic purchasing system for AI products and services – should embed tools into the Digital Marketplace platform that help public sector buyers “find AI products and services that meet their ethical requirements”.

PublicTechnology understands that, as per CSPL’s recommendations, CCS has prepared standards which suppliers will need to adhere to in order to gain a spot on the incoming AI procurement vehicle. The content of these rules will be rubber-stamped and made public in the coming weeks, but it is understood they will oblige providers to address “things they do not currently have to think about” – likely to cover ethical considerations, and the ability for algorithmic tools to demonstrate how their decisions were reached.

Evans said: “A lot of the AI that is likely to be used will be purchased in the market. Government has significant potential market power, and can ensure that they not only get the best price, but also the best possible standards. Talking to some people in the private sector, they have told us ‘we have never been asked to provide any information on the ethics or explainability’. If we say we want these features, then it is likely to encourage the market to develop those features.”

Pearce added: “There are real risks from data bias… we need to turn this on its head – many of these challenges can be addressed in procurement… [by] having these issues upfront in deciding what we will and what we won’t buy. We think CCS is important in providing practical advice as well.”

The seventh recommendation is that a mandatory impact assessment should be conducted before all public sector AI deployments, and then released publicly.

Guidelines should also be published establishing best practice for “the declaration and disclosure” of which AI and automation systems are used by public bodies.

Rapid response required
While the first eight recommendations address measures that could be undertaken by government, regulators, and other national organisations, the remaining seven relate to providers of front-line services – including both public and private sector entities.

The committee firstly recommends that an assessment of any potential impact on public standards should be embedded at the design stage of all relevant projects.

All service providers should also “consciously tackle issues of bias and discrimination by ensuring they have taken into account a diverse range of behaviours, backgrounds and points of view”.

Responsibility for the operation of AI systems must be “clearly allocated and documented”, CSPL recommends, and regular operational monitoring and evaluation should also take place.

All organisations providing citizen services should “set oversight mechanisms that allow for their AI systems to be properly scrutinised”, while also making sure citizens are informed of their right to appeal algorithmic decisions, and how they can do so.

The report’s final recommendation is that: “Providers of public services, both public and private, should ensure their employees working with AI systems undergo continuous training and education.”

With the report now published, the onus is on the government to digest its content and respond – something Evans hopes will happen “rapidly”. He adds that, to ensure such a response has the biggest impact, it likely needs to be led by an established Whitehall department – “probably DCMS or BEIS”.

Although the report’s recommendations largely concern safeguards and governance, Evans is clear that the committee is “absolutely not trying to slow down or impede” the public sector’s use of AI and automation.

He is also clear that, whatever happens next, the report ought to be seen as the beginning of a conversation, rather than the end.

“This is not the final word,” he says. “We are not laying down holy writ which we will come back to in 20 years to see how it has got on.”

Sam Trendall
