‘A lack of transparency and accountability’ – DWP urged to shed light on fraud algorithm

The DWP has revealed it is using artificial intelligence to help detect fraud. But, as PublicTechnology discovers, some are concerned that it needs to reveal a lot more about how it does so.

Credit: Pexels/Crown Copyright/Open Government Licence v3.0. Image has been remixed.

“Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery. However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact.”

These were the words of Lord Agnew, the minister charged last year with unveiling a new algorithmic transparency standard developed by government’s Central Digital and Data Office.

The standard provides public bodies with a framework through which they can publish information on the operation of algorithms used to support operations or inform decisions. This is intended to allow such systems to be scrutinised: by civil society groups; tech experts; and the wider public – a growing number of whom are, in turn, subject to scrutiny by algorithms.

Since the standard was released, a handful of agencies have used the guidelines to publish details of tools used in their work. 

This includes the Department of Health and Social Care’s QCovid system for predicting coronavirus risk, the Domestic Abuse Risk Assessment Tool used by Hampshire and Thames Valley Police, and a hygiene-rating algorithm deployed by the Food Standards Agency.

Not included on this list is an automated tool being used by the Department for Work and Pensions to help analyse the possibility of fraud in claims made for Universal Credit advances. 

“What is concerning, is that there is very little information on how they operate and whether they operate fairly and effectively. What we have found is that public bodies in the UK tend to have an approach of secrecy by default.”
Ariane Adam, Public Law Project

 

“This analysis is performed by a machine learning algorithm,” the department said in its annual report and accounts for the 2021/22 year. “The algorithm builds a model based on historic fraud and error data in order to make predictions, without being explicitly programmed by a human being.”

The report added: “The department is aware of the potential for such a model to generate biased outcomes that could have an adverse impact on certain claimants. For instance, it is unavoidable that some cases flagged as potentially fraudulent will turn out to be legitimate claims. If the model were to disproportionately identify a group with a protected characteristic as more likely to commit fraud, the model could inadvertently obstruct fair access to benefits.”

The DWP said that it was mitigating the risk of such obstruction via a system of “pre-launch testing and continuous monitoring” by departmental officials. Caseworkers – who are not given the specifics of why a claim has been flagged for review – retain the final decision on whether a claim is legitimate or fraudulent.
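The department’s description suggests a familiar supervised-learning pattern: a classifier trained on the outcomes of past claims, whose risk scores determine which new claims a caseworker looks at first. The sketch below is purely illustrative – the file names, features and threshold are assumptions, and the DWP has not published details of its actual model.

```python
# Illustrative sketch only: a generic "score and flag" pipeline of the kind the
# DWP's accounts describe -- a model trained on historic fraud/error outcomes,
# with high-scoring cases routed to a human caseworker for the final decision.
# All file names, features and thresholds here are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Historic claims with a known outcome: 1 = confirmed fraud/error, 0 = legitimate
history = pd.read_csv("historic_advance_claims.csv")               # hypothetical file
features = ["advance_amount", "days_since_claim", "prior_claims"]  # hypothetical features

X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["confirmed_fraud"], test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score incoming claims and queue the highest-risk ones for human review:
# the model prioritises, but a caseworker makes the final decision.
incoming = pd.read_csv("new_advance_claims.csv")                   # hypothetical file
incoming["risk_score"] = model.predict_proba(incoming[features])[:, 1]
FLAG_THRESHOLD = 0.8                                               # hypothetical cut-off
for_review = incoming[incoming["risk_score"] >= FLAG_THRESHOLD]
print(f"{len(for_review)} claims queued for caseworker review")
```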

The department added that it has also undertaken a “fairness analysis”, but said that “so far, this analysis has only been performed for three groups and the results are inconclusive”. The groups in question relate to age, gender, and pregnancy, it is understood.
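The department has not set out how that fairness analysis was performed. One common approach – offered here only as a hypothetical sketch, not a description of the DWP’s method – is to compare the rate at which claims are flagged across groups sharing a protected characteristic and look for marked disparities.

```python
# Hypothetical sketch of a basic fairness check: compare flag rates across
# groups defined by a protected characteristic (demographic parity of flags).
# This is a generic illustration, not the DWP's actual fairness analysis;
# the file and column names are assumptions.
import pandas as pd

scored = pd.read_csv("scored_claims.csv")   # hypothetical: one row per scored claim
# assumed columns: "age_band" (protected characteristic), "flagged" (0 or 1)

overall_rate = scored["flagged"].mean()
group_rates = scored.groupby("age_band")["flagged"].mean()

print(f"Overall flag rate: {overall_rate:.3f}")
for group, rate in group_rates.items():
    ratio = rate / overall_rate if overall_rate else float("nan")
    print(f"  {group}: {rate:.3f} ({ratio:.2f}x overall)")
    # A group flagged at a markedly higher rate than the population as a whole
    # would warrant further investigation before relying on the model's output.
```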

Regardless of what further analysis may reveal, the department has been warned of the dangers of not allowing others to draw their own conclusions about the fairness and efficacy of the technology it is using.

‘Secrecy by default’
The Public Law Project (PLP) – a legal charity dedicated to helping those at risk of being marginalised or disadvantaged by public bodies – campaigns for greater transparency in the use of algorithms by government.

As part of a parliamentary inquiry into the DWP’s annual accounts, the organisation has called on MPs on the Public Accounts Committee to ask the department to release – “as a matter of urgency” – a range of information on its use of automated decision-making (ADM) in the administration of Universal Credit and other services.

Such information – which is currently unavailable to the public – includes Data Protection Impact Assessments, Equality Impact Assessments, and details set out by the algorithmic transparency standard.

In evidence submitted to PAC this summer, the Public Law Project said: “Our central concern about the department’s use of ADM systems is the lack of transparency. Without transparency, there can be no evaluation of reliability, efficiency, and lawfulness. Moreover, there are no dedicated mechanisms allowing for quick and easy redress when the systems malfunction – despite the heightened risk that the department’s use of these systems could lead to discrimination and other rights violations.”

The charity raised particular concerns about the advances fraud algorithm.

“Given the lack of transparency and accountability, and especially the department’s failure to publish equality analyses of any of its automated tools, the rollout of this model is premature,” it said. “We strongly recommend that the department sets out its plans to improve transparency and, in particular, we recommend that the department publish any Equality Impact Assessments, Data Protection Impact Assessments and other evaluations completed in relation to its automated tools.”

 

Talking to PublicTechnology, Ariane Adam, legal director of PLP, says that the organisation’s work in relation to government’s use of algorithms – which has largely focused on systems deployed by either the DWP or the Home Office – has found “opacity” to be a recurrent problem.

“In many ways, an algorithm that is used to inform decision-making is not that different from a piece of written guidance for decision makers,” she says. “But, what is concerning, is that there is very little information on how they operate, and whether they operate fairly and effectively.”

She adds: “What we have found is that public bodies in the UK tend to have an approach of secrecy by default.”

Details related to the operation of algorithms are invariably withheld on the basis of exemptions provided for in the Freedom of Information Act, which allow for non-disclosure in cases where releasing the information could impact security or create risk to public-service provision.


2.8% to 12.4%
The – rather wide – range of estimated overpayment in UC advances in 2021, equating to a total of £20m to £85m

Six
Number of reports published by public bodies so far using the algorithmic transparency standard

AU$1.8bn
Amount to be paid by the Australian government in refunds, legal fees and compensation over the disastrous ‘Robodebt’ scheme


When contacted by PublicTechnology, the DWP indicated that it considered that publishing details of its anti-fraud tools would compromise their effectiveness – and pointed out that the Algorithmic Transparency Standard did not recommend publication in such cases.

PLP believes that this – familiar – argument is not well supported by legal and academic research on the potential for ‘gaming’ of algorithms by those who know how they work.

Among the foremost papers on the matter is Strategic Games and Algorithmic Secrecy, written by legal professors Ignacio Cofone and Katherine Strandburg, who examine in detail how and why transparency does not, in practice, make studied manipulation of an algorithm any easier.

They conclude: “Our analysis suggests that, from a social perspective, the threat from ‘gaming’ is overstated by its invocation as a blanket argument against disclosure. The consequential over-secrecy deprives society not only of the benefits of disclosure to decision subjects, but also of the improvements in decision quality that could result when disclosure improves accountability.”

Adam from the PLP shares the frustration of the paper’s authors. She says that section 31 of the Freedom of Information Act – the stated aim of which is to prevent the “release of information that may prejudice the prevention or detection of crime” – is what public bodies in the UK often use to keep information under wraps.

“The issue that we have is that this is not a blanket exemption,” she says. “Departments need to give a reason, and perform a public-interest test measuring the harm and benefit of disclosure.”

Adam adds: “Often, they have not engaged with the harms of non-disclosure.”

The possible harms to those who are subject to automated decisions are clear – and recognised by the DWP in its acknowledgment of the “potential for such a model to generate biased outcomes”.

There is little publicly available evidence with which to determine the scale of the risk of such outcomes being generated.

But, in further evidence submitted to the Public Accounts Committee, PLP cited work it has done with another civil society group, the Work Rights Centre, that provides a worrying snapshot of the possible impact of the algorithm.

“The Work Rights Centre have told us that, since August 2022, they have been contacted by 37 service users who reported having their Universal Credit payments suspended,” the evidence said. “Even though the charity advises a range of migrant communities, including Romanian, Ukrainian, Polish, and Spanish speakers, as many as 32 of the service users who reported having their payments suspended were Bulgarian, with four Polish and one Romanian-Ukrainian dual national. This may suggest that the automated tool has a disproportionate impact on people of certain nationalities.”

Conducting an equality impact assessment – and publishing the results – is the primary means through which public bodies comply with their duties under the Equality Act. In its evidence given to MPs, the DWP has previously indicated that such an assessment has been undertaken. Permanent secretary Peter Schofield told MPs that the department “should look at publishing what we can publish” in order to help create public trust.

Any such assessment – as well as the DWP’s own “inconclusive” fairness analysis of any possible impact based on age, gender, and pregnancy – remains unpublished.

Adam from PLP says: “The DWP has stated that it has undertaken a fairness analysis; we are particularly frustrated that they will not disclose the relevant Equality Impact Assessments. We also want to understand what data sets were used to train the model.”

The risks presented by a lack of algorithmic transparency are not limited to citizens, a point which Adam illustrates by pointing PublicTechnology in the direction of the Australian government’s Online Compliance Intervention programme – otherwise known as Robodebt.

Administered by Services Australia – a body whose duties broadly line up with those of the DWP in the UK – the scheme used an automated tool to calculate overpayments and other money owed by benefit recipients. A lack of expert oversight and human input is recognised as being among the major flaws of a system that was first put in place in 2016 and, in its four years in existence, issued almost 500,000 incorrect debt notices. Following a court ruling last year, the Australian state must now pay AU$1.8bn (£1bn) in refunds, legal fees, and compensation.

‘Meaningful human involvement’
Unlike its Australian counterpart, the DWP’s system leaves the final decision to a human decision maker – a point which a spokesperson for the department stressed in a comment sent to PublicTechnology.

“The algorithm builds a model based on historic fraud and error data in order to make predictions… Cases scored as potentially fraudulent by the model are flagged to caseworkers, who then prioritise the review and processing of such cases accordingly.”
DWP accounts

 

“DWP does not use artificial intelligence to replace human judgement to determine or deny a payment to a claimant. A final decision in these circumstances always involves a human agent,” the spokesperson said. “DWP is always careful to process data lawfully and proportionately, with meaningful human input and safeguards for the protection of individuals. The department has robust processes to ensure data protection and the ethical use of data are continuously monitored and re-evaluated as we learn from using such technologies.”

In response to the call for greater transparency, the DWP added that all applicable projects are assessed as required under equalities and data-protection laws.

“The department is conscious to take into account the impact of decisions on protected groups under the Equality Act, and carries out Data Protection Impact Assessments for large-scale transformative initiatives that involve personal data, aligned with data-ethics frameworks, codes of practice, and working principles. We have also considered and incorporated advice from independent organisations.”

 

Sam Trendall
