What is the cost of DWP’s UC anti-fraud algorithm?


The tech tool has been subject to criticism and controversy since being launched by the department five years ago. PublicTechnology investigates recent disclosures about its technical operation and financial return.

“This case is illustrative of the need for departments to be upfront and transparent where machine learning techniques are being used, in order to secure public confidence in these AI tools as, without the public’s trust, government is doomed to fail in this area.”

The case in question – described above by Public Accounts Committee chair Sir Geoffrey Clifton-Brown – is that of the Department for Work and Pensions and its use of algorithms to help detect fraud, which began in 2021 with a program intended to tackle false claims for Universal Credit advance payments. Five years on, the technology remains in use. But also still present are the controversy and criticism which have long surrounded the automated tech system.

Such scrutiny seems unlikely to abate after the disclosure of figures which reveal that the tool has delivered only minimal financial return, as well as an admission by the DWP that, after nearly five years in operation, the tool requires retraining as it “is not working as effectively as we would expect” and is disproportionately flagging applications by overseas claimants as high-risk. The department has also freshly released operational details which – although finally made public after years of campaigning – have been heavily redacted in areas related to data collection, processing and sharing.

A recent evidence submission to PAC made by social justice charity the Public Law Project noted that: “DWP has rarely been proactive in making public information about its use of machine learning to detect and prevent fraud.”

The genesis for this use case was a 2020 report from the National Audit Office which – some 18 months after the DWP opened online applications for advance payments for those awaiting receipt of UC – found that the digital service had proven “vulnerable” to fraud. During the service’s first 18 months, the NAO said, there had been an estimated 100,000 suspected fraudulent claims, worth a cumulative total of between £98m and £147m.

This prompted the department to “put a range of controls and protections in place”, according to correspondence sent to PAC last month by DWP permanent secretary Sir Peter Schofield.

This includes the algorithm, which departmental documents describe as “a supervised machine learning classifier designed to risk assess requests for advances ahead of payment”. Applications that are deemed by the algorithm to present the highest risk of fraud are “referred to a DWP employee, who reviews all available and relevant information, to decide whether to approve or decline the request”.
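The DWP has not published its model’s features, algorithm or risk threshold, but the referral mechanism described above – score each request, pay low-risk ones automatically, and route high-risk ones to a human decision-maker – can be sketched in outline. Everything in this snippet (the threshold value, field names and scores) is hypothetical, for illustration only:

```python
# Hypothetical sketch of a score-and-refer triage flow; the DWP's actual
# model, features and threshold are not public.
from dataclasses import dataclass

REFERRAL_THRESHOLD = 0.8  # invented risk cut-off


@dataclass
class AdvanceRequest:
    claim_id: str
    risk_score: float  # output of a trained classifier, in [0.0, 1.0]


def triage(requests):
    """Split requests into those paid automatically and those referred
    to an agent, who makes the final approve/decline decision."""
    referred, auto_paid = [], []
    for req in requests:
        if req.risk_score >= REFERRAL_THRESHOLD:
            referred.append(req)   # human review: agent decides
        else:
            auto_paid.append(req)  # no referral
    return referred, auto_paid


requests = [AdvanceRequest("A1", 0.12), AdvanceRequest("A2", 0.91)]
referred, auto_paid = triage(requests)
print([r.claim_id for r in referred])   # ['A2']
print([r.claim_id for r in auto_paid])  # ['A1']
```

Note that, in such a design, the model never declines a claim itself: it only determines which claims receive manual scrutiny – consistent with the department’s statement that “real people always make the final call”.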

“DWP may have effectively created algorithmic systems that are untestable for discrimination, while claiming this limitation absolves them of their legal duty to ensure equal treatment.”
Amnesty International

Before the introduction of the automated assessment tool, all applicants for UC advances – a repayable upfront payment to help citizens pay bills and meet other living costs while awaiting their first regular instalment of UC – were required to attend a face-to-face meeting.

The algorithm forms part of a suite of measures that, following the potential £100m-a-year losses cited by the NAO, have collectively reduced annual fraud to between £20m and £85m in the 2021/22 year, and to between £0 and £60m by 2024/25, the perm sec wrote in his missive to MPs.

But, in the first public disclosure of quantifiable figures concerning the impact of the machine learning tool, Schofield also revealed that only a small fraction of the reduction has come as a result of the technology – which has flagged up only a very limited number of fraudulent claims, when compared with the scale of the problem as identified by the NAO.

“Our UC advances machine learning model… has directly saved less than £5m over the last three years, by identifying around 7,000 high-risk advance requests that were rejected by a DWP agent following a review of the request,” the letter said. “Our overall control improvements have indirectly prevented a considerably larger volume and value of fraudulent advance claims.”

The cost of developing and maintaining the algorithm is not known, and the DWP did not clarify, when asked by PublicTechnology, whether the returns delivered – which equate to about £1.6m a year, at most – have exceeded the money spent on the tool to date.

In response to our enquiries, a spokesperson for the department said: “We’re using cutting-edge technology to crack down on fraud, with the UC Advances model three times better at identifying fraud risk – and on track to slash fraud rates in half by 2029. Real people always make the final call, with safeguards in place to protect genuine claimants.”

‘Raises questions’
Caroline Selman, senior research fellow at the Public Law Project, told PublicTechnology that the DWP’s revelation about the limited gains enabled by the platform “raises questions about the proportionality of that tool”.

Such questions are amplified by the fact that the number of claims correctly flagged by the algorithm – which, over a three-year period, equates to about 2,300 rejections annually – represents less than 0.2% of all applications made, based on the DWP’s own data for 2024/25, which show 1.4 million UC advances paid out.
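The headline figures bear out this arithmetic. Using only the rounded numbers reported above (7,000 flagged requests and “less than £5m” saved over three years, against 1.4 million advances paid in 2024/25):

```python
# Back-of-envelope check of the figures cited in the article.
flagged_total = 7_000        # high-risk requests rejected over three years
years = 3
advances_paid = 1_400_000    # UC advances paid out in 2024/25
savings_total = 5_000_000    # upper bound: "less than £5m" over three years

flagged_per_year = flagged_total / years            # ~2,333 a year
share_of_claims = flagged_per_year / advances_paid  # share of all advances
savings_per_year = savings_total / years            # annualised saving

print(round(flagged_per_year))           # 2333
print(round(share_of_claims * 100, 2))   # 0.17 (%), i.e. under 0.2%
print(round(savings_per_year / 1e6, 2))  # 1.67 (£m), i.e. about £1.6m a year at most
```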

“In the DWP’s fairness analysis, the justification that’s being put forward – in terms of why this tool is viewed as necessary and proportionate – in part points to the concerns about the size of fraud and then the effectiveness of this tool,” she said.

That analysis, published by the department on GOV.UK this summer, perhaps shines a light on another potential cost of the algorithm – beyond that which can be quantified in pounds and pence.

The fairness assessment – which outlines the DWP’s own analysis of the tool – acknowledges that “the likelihood of non-UK nationals being referred by the model was higher than UK nationals, however the likelihood that the advance was rejected by a human decision maker following a referral was equivalent to the likelihood for UK nationals”.

The department also admits that, for claimants aged 35 and upwards, there is “an increased likelihood of being referred by the model [that] is inconsistent with the reduced likelihood of those referrals being correct” on human review.

All of which means that “the evidence suggests the model is not working as effectively as we would expect”.
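The pattern the fairness assessment describes – one group referred at a higher rate, yet with referrals upheld on human review at the same (or a lower) rate – can be made concrete with two simple metrics: the referral rate and the share of referrals that end in rejection. All figures below are invented purely to illustrate the shape of the disparity:

```python
# Illustrative only: invented cohort figures, not DWP data.
def rates(referred, total, rejected):
    referral_rate = referred / total  # share of a group's claims flagged
    precision = rejected / referred   # share of flags upheld on human review
    return referral_rate, precision

# Hypothetical cohorts: group B is flagged twice as often as group A...
ref_a, prec_a = rates(referred=100, total=10_000, rejected=30)
ref_b, prec_b = rates(referred=100, total=5_000, rejected=30)

print(ref_a, ref_b)    # 0.01 0.02 -> unequal likelihood of referral
print(prec_a, prec_b)  # 0.3 0.3  -> equal rejection rate after review
```

If a group is flagged more often without its flags proving correct more often, the extra referrals impose extra scrutiny on that group without a corresponding fraud-detection benefit – which is the inconsistency the DWP’s own analysis identifies for non-UK nationals and older claimants.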


£147m
Potential scale of UC advances fraud over an 18-month period, according to a 2020 NAO report

Less than £5m
Amount directly saved by the anti-fraud algorithm over the past three years

100,000
Number of potential fraudulent claims over an 18-month period, according to the NAO

7,000
Number of fraudulent or erroneous claims successfully flagged by the algorithm over the past three years

17
Number of fields redacted in DWP’s transparency record for the anti-fraud algorithm


Moreover, the assessment reiterates that there is “limited availability of protected characteristic data for the high-risk cohort”, with age being the only area where the potential for discrimination against a protected characteristic has been fully measured.

In a recent PAC evidence submission from Amnesty International, the global human rights charity noted that the algorithm had “identified bias on some characteristics such as age but were unable to test for bias on other dimensions, [such as] race or gender, because of the design of the data systems”.

“DWP may have effectively created algorithmic systems that are untestable for discrimination, while claiming this limitation absolves them of their legal duty to ensure equal treatment,” the submission added.

Based on the – limited – evidence of discrepancies that is available, the department’s fairness assessment commits to recalibrating the tool, and then conducting further analysis.

“To try and reduce measured inconsistencies between referral and outcomes metrics, the model will be re-trained and further fairness analyses conducted to measure the impact of this action on reducing age and nationality related disparities,” the assessment says.

PublicTechnology understands that details of the retraining and changes made will be provided in an “Effectiveness Assessment” of the algorithm to be published by the DWP at some point in 2026.

Transparency records
Campaign groups had lobbied for the public release of a full fairness assessment for some time before the department eventually published the document in July.

Similarly, there have been long-standing calls for the DWP to release operational details of the UC advances anti-fraud model via the government’s Algorithmic Transparency Recording Standard (ATRS), a mechanism which was introduced by ministers in 2021. Following a mandatory publication directive implemented across government in 2024, scores of records have since been released by many public bodies in Whitehall and beyond – including 12 published by the DWP.

Among these is, finally, a record related to the advances fraud model. But the long-awaited document largely reiterates previously released claims and information – and redacts detail from 17 of the record’s stipulated fields, across a wide range of specifications regarding data, including: quantities of data involved in the tool’s operation; collection and processing methods; access and storage arrangements; and sharing agreements with third parties.

All of these redactions cite exemptions offered by Section 31 of the Freedom of Information Act, which enables public bodies to withhold data which, if released, could hamper “the prevention or detection of crime”.

PublicTechnology understands that the DWP’s position is that it believes the information it has omitted from transparency releases could be helpful to those wishing to defraud the department – and can thus be withheld from publication under FOI provisions.

“The law itself needs to evolve – because it was developed with human decision-makers in mind. And some of its existing concepts and principles do not always translate perfectly across to this new world of decision-making.”
Caroline Selman, Public Law Project

Albeit in somewhat diminished form, the submission of the UC advances algorithm has been among many that have contributed to the rapid recent growth in the number of records released under ATRS – a figure which rose significantly in 2025, after more than two years in which the standard sat almost entirely dormant. Of the 125 entries now listed on GOV.UK, all but nine were released in the past 13 months.

Selman acknowledges this progress – including that made by the DWP. But, with use of algorithms and AI becoming ever-more prevalent, the PLP researcher would now like to see public bodies using the development process of automated tools to “do some of the working out – so that it doesn’t need two years of scrutiny to get to the position where they work out what can and cannot be put in the public domain”.

She also called for greater governance of the process by which departments are exempted from releasing ATRS records – which is mandated via government guidance, although is not yet a legal requirement.

“The ATRS is using the same exemptions that the Freedom of Information Act regime uses,” she says. “But, if somebody applies an exemption of the Freedom of Information Act, there are complaints and enforcement routes for you to use if you think that it is an unjustified use of that exemption.”

Selman adds: “But, for the ATRS, you have not got those same routes. So, to an extent, there’s a risk that departments are marking their own homework on the use of that exemption.”

For its part, PLP will retain a close interest in the use of algorithms by the DWP – and the wider public sector.

“The way that certain automated tools are being adopted – including ones that use aspects of artificial intelligence – is potentially having a fundamental impact and causing a shift in terms of how decisions are taken about us,” Selman says. “What we want to make sure is that, first of all, these things are being informed by existing public law principles about fair, lawful and non-discriminatory decision making. But we also think the law itself needs to evolve – because it was developed with human decision-makers in mind. And some of its existing concepts and principles do not always translate perfectly across to this new world of decision-making.”

In the months ahead, PLP’s aim will be to “support and advocate for improved compliance with ATRS – but also [considering] what is actually being published and shared: is the right information being shared in the right way, and what are the levers” available if it is not, according to Selman. The charity will also look to better help citizens whose rights have been bolstered by the Data Use and Access Act that passed into law last year.

“We also have a focus on improved information that’s given to individuals who are decision subjects,” she adds. “If you’re a decision subject, you should know if automated decision making tools are being used in some way in a decision that’s about you – and you need to know that information in order to be able to exercise your rights.”

Sam Trendall
