DWP commits £70m to algorithms and analytics to tackle benefit fraud

Department’s accounts reveal that, after initial use to assess claims for UC advances, machine learning has been deployed to analyse benefit claims in other areas considered to be higher risk

The Department for Work and Pensions has ramped up its use of machine learning and analytics to tackle fraud and error in the UK’s benefits system, and has revealed plans to invest tens of millions of pounds to further expand its deployment of the technology.

In the 2021/22 financial year the department began using an algorithm to assess suspected fraudulent claims for Universal Credit advances and flag them for further human investigation. The tool draws on “historical claimant data and fraud referrals, which enables the model to make predictions about which new benefit claims are likely to contain fraud and error”, according to the DWP’s freshly published accounts for the year to March 2023.
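The DWP has not published details of its model, features or training pipeline, so the following is a minimal illustrative sketch of how this kind of supervised approach typically works: a classifier is trained on labelled historical claims, then used to score new claims so that only the highest-risk cases are routed to a human reviewer. The file names, the label column and the risk threshold are all assumptions for illustration.

```python
# Illustrative sketch only: the DWP's actual model and features are not public.
# Assumes a tabular dataset of historical claims with a binary label derived
# from past fraud referrals.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

claims = pd.read_csv("historical_claims.csv")  # hypothetical dataset
features = claims.drop(columns=["fraud_referral"])
labels = claims["fraud_referral"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score incoming claims and flag the highest-risk ones for human review,
# rather than refusing them automatically.
new_claims = pd.read_csv("new_claims.csv")  # hypothetical dataset
risk_scores = model.predict_proba(new_claims)[:, 1]
flagged_for_review = new_claims[risk_scores > 0.8]  # threshold is an assumption
```

The key design point, reflected in the department’s account of the system, is that the model prioritises cases for human investigation rather than making decisions itself.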

The annual document reveals that the department has built on this initial deployment and, during 2022/23, “developed and piloted four similar models designed to prevent fraud in the key areas of risk in Universal Credit [claims]: people living together, self-employment, capital, and housing”.

The use of automated technology is set to be expanded further still in the coming months, with £70m to be invested between April 2022 and March 2025 in “advanced analytics to tackle fraud and error”.

“These analytics include a variety of sophisticated techniques,” the report added. “One of these is the use of machine learning to identify patterns in claims that could suggest fraud or error, so that these claims can be reviewed either by relevant DWP teams, such as the Enhanced Review Team, before the claim enters payment, or the Targeted Case Reviews agents, if it is already in payment.”

The department believes that using technology in this way will enable it to “generate savings of around £1.6bn by 2030-31”.

After it was revealed that machine learning was being used to help detect possible fraud in claims for UC advances, civil society groups expressed concern that such systems could embed and perpetuate bias against certain groups and characteristics – and called for greater transparency on the operation of algorithms and their impact. Evidence compiled by charity the Work Rights Centre suggests that Bulgarian nationals, in particular, may have been disproportionately affected by the technology.

The department’s most recent annual report acknowledges that “when using machine learning to prioritise reviews, there is an inherent risk that the algorithms are biased towards selecting claims for review from certain vulnerable people or groups with protected characteristics”.

It adds that the DWP’s “ability to test for unfair impacts across protected characteristics is currently limited” and that the results of its own internal fairness assessments undertaken to date have been “largely inconclusive”.
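The report does not describe how those fairness assessments were carried out. One common check in this area, shown in the hedged sketch below rather than as the DWP’s actual methodology, compares the rate at which claims from different groups are selected for review; the group labels and figures here are invented for illustration.

```python
# Illustrative sketch of a selection-rate parity check, a standard bias test.
# This is not the DWP's published methodology; the data below is hypothetical.
import pandas as pd

reviewed = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B"],  # e.g. a protected characteristic
    "flagged": [0,   1,   0,   1,   1,   0],    # selected for review or not
})

# Rate at which each group's claims are flagged for review.
selection_rates = reviewed.groupby("group")["flagged"].mean()

# Disparate-impact ratio: a value well below 1 suggests one group's claims
# are being flagged disproportionately often.
ratio = selection_rates.min() / selection_rates.max()
print(selection_rates)
print(f"disparate impact ratio: {ratio:.2f}")
```

A test of this kind can only detect disparities in the characteristics that are actually recorded, which is consistent with the department’s admission that its ability to test across protected characteristics is limited.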

The department has thus far resisted calls to publish more information on its operation of machine learning using the mechanisms set out in the Cabinet Office’s Algorithmic Transparency Standard. But it has “committed to report annually to parliament on its assessment of the impact of data analytics on protected groups and vulnerable claimants”, the annual report said.

MPs on the Public Accounts Committee will also be kept in the loop on the potential impact of its use of automated systems, according to the auditor’s report contained in the department’s annual accounts.

“DWP… is working to develop its capability to perform a more comprehensive fairness analysis across a wider range of protected characteristics and would respond to the PAC with a reporting plan by November 2023,” the report said. “So far, DWP’s focus has been on monitoring for bias in the selection of cases to review. DWP could also helpfully provide assurance that whichever cases it chooses to review there are no adverse impacts on customer service – such as delays to first benefit payment.”

The department indicated that it has already participated in an investigation by the Information Commissioner’s Office “into the use of artificial intelligence and algorithms in the welfare system”.

“The ICO found that there was no evidence to suggest that people in the benefits and welfare system are subjected to any undue harm or financial detriment as a result of the algorithms used,” the annual report said. “In addition, the ICO found that there was sufficient and meaningful human involvement in the processes examined.”




Sam Trendall
