The UK government could be sleepwalking into an AI disaster

The Department for Work and Pensions must learn lessons from the Australian government’s Robodebt scheme – or risk repeating its catastrophic consequences, according to Mia Leslie from the Public Law Project

Australia’s Royal Commission into the Robodebt scandal is a stark warning to the British government: unless the UK changes course now on the regulation of AI and its use in automated decision-making, policymakers are poised to repeat many of the same mistakes.

The Australian Robodebt scheme used automation to crack down on welfare fraud and overpayment.

A flaw in the income-averaging method, which assumed annual earnings were spread evenly across the year, saw AU$750m (£393m) wrongly recovered from more than half a million benefit claimants accused of fraud. The impact on many of their lives was catastrophic.
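To see how income averaging manufactures debts, consider a simplified sketch. The means test and figures below are invented for illustration, not the real Centrelink rules, but they reproduce the structural flaw the Royal Commission identified: treating annual income as if it had been earned evenly in every fortnight of the year.

```python
FORTNIGHTS_PER_YEAR = 26

def entitlement(fortnightly_income: float) -> float:
    """Hypothetical means test: a $400 payment, reduced by 50c for
    every dollar of fortnightly income above $500."""
    return max(0.0, 400.0 - 0.5 * max(0.0, fortnightly_income - 500.0))

# A claimant works half the year at $1,600 per fortnight, then claims
# benefits for the 13 fortnights in which they earn nothing.
annual_income = 1600.0 * 13                            # $20,800
claim_fortnights = 13

# What was lawfully paid: the full payment in each workless fortnight.
paid = entitlement(0.0) * claim_fortnights             # $5,200

# The flawed step: average annual income across the whole year, as if
# $800 had been earned in every fortnight, including the claim period.
averaged = annual_income / FORTNIGHTS_PER_YEAR         # $800

# Re-assess the claim period against that fictional steady income...
reassessed = entitlement(averaged) * claim_fortnights  # $3,250

# ...and raise the difference as a "debt", even though every payment
# was correct when it was made.
print(f"Alleged overpayment: ${paid - reassessed:,.0f}")  # $1,950
```

The claimant here did nothing wrong; the “debt” is an artefact of the averaging assumption, which is why people with irregular earnings, such as casual workers and students, were hit hardest.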

Recipients told the Royal Commission of financial suffering, mental health struggles, and “severe and long-lasting effects”.  With the stakes this high, it is imperative that all governments learn the lessons from failures elsewhere.

“If a Robodebt-like scandal were to happen in the UK, the current framework would not guarantee people clear access to information about how such an automated system worked”

The Royal Commission report, published this summer, labelled the Robodebt scandal a “massive systemic failure” and said the need for reform of the complex and incohesive legislative landscape that governs automated decision-making was “beyond argument”. This raises the question: what is the UK government doing to prevent a similar catastrophe?

The short answer is, not enough.

The origins of the Robodebt scheme, and the policy design underpinning its development, bear striking similarities to the approach set out by the Department for Work and Pensions in relation to its growing use of automation and machine learning in its recent Fraud Plan and Annual Report and Accounts.

Around the same time the Royal Commission report was published, the DWP announced its ambitious new target to increase the savings achieved by its “counter fraud and error resource”, in part by expanding its use of machine learning.

The department has already been using a machine learning algorithm to prioritise the review of potentially fraudulent Universal Credit advance claims for at least a year, and new plans will see it develop similar models for the main risk areas of Universal Credit. The annual report acknowledges the risk of bias within machine learning technologies, and concedes that the department’s ability to test for unfair impacts across protected characteristics is currently “limited” due to insufficient collection of user data.
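The kind of check the department says it cannot yet run is, once the data exists, straightforward. As a purely hypothetical illustration (the group labels, sample and threshold below are invented), a basic disparate-impact audit compares the rate at which claims from different protected groups are flagged for review:

```python
from collections import Counter

def flag_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of claims flagged for review, per protected group."""
    totals, flagged = Counter(), Counter()
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += was_flagged   # True counts as 1
    return {group: flagged[group] / totals[group] for group in totals}

# Invented audit sample: (protected characteristic, flagged-for-review?).
sample = ([("group_a", True)] * 90 + [("group_a", False)] * 910
          + [("group_b", True)] * 150 + [("group_b", False)] * 850)

rates = flag_rates(sample)   # {'group_a': 0.09, 'group_b': 0.15}
ratio = min(rates.values()) / max(rates.values())

# The "four-fifths" rule of thumb treats a ratio below 0.8 as a warning sign.
print(f"Disparate-impact ratio: {ratio:.2f}")   # 0.60
```

The arithmetic is trivial; the precondition is not. Without recording protected characteristics alongside outcomes, the table above cannot be built and no such ratio can be computed – which is what makes the department’s admission significant.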

Hindsight allowed the Royal Commission to tailor its recommendations to the specific wrongs experienced by the victims of the Robodebt scandal. The UK government, and specifically the DWP, is in a position to act proactively: to implement safeguards around its use of new technology for identifying fraud and error before such harm occurs.

The Royal Commission recommended that the Australian government strengthen the public service more broadly, and increase the powers of agencies to review, audit and scrutinise the adoption of automated systems. Most pertinent to the current UK policy landscape is the recommendation that the Australian government introduce a statutory basis to govern public authority use of automated decision-making. The Royal Commission emphasised that such a statutory basis should ensure a clear path for those affected by automated decisions to seek a review of those decisions, oblige government departments to publish details of their use of automated decision-making on their websites, and require them to explain in plain language how those systems work.

But the government’s current approach to regulation of AI – in particular the state’s own use – falls short of the mark.

‘Falling on deaf ears’
Earlier this year, the UK government published its AI regulation white paper, floating the idea of a statutory intervention but committing only to assess whether its introduction would be necessary after AI users have tested the existing regulatory framework. Whilst we continue to wait for the formal outcome of the consultation, many respondents have already made their feedback public, and it is clear that recommendations almost identical to those made by the Royal Commission are being urged upon the UK government in advance of any similar scandal.

But these recommendations, and the warnings from the Robodebt catastrophe, appear to be falling on deaf ears. Without the adoption of effective regulation that secures transparency and accountability mechanisms, such as those recommended by the Royal Commission, the risk is that the UK will – as the UN Special Rapporteur on Extreme Poverty and Human Rights put it – steer itself “zombie-like into a digital welfare dystopia”.

At present, automated decision-making systems are being rolled out in the UK without independent expert scrutiny. Individuals are not informed that decisions to deny or suspend their benefits, or to instigate fraud investigations, are reached with the assistance of such technology, and government departments are under no requirement to publish information illuminating their use of automated decision-making or to explain how such processes work.

If a Robodebt-like scandal were to happen in the UK, the current framework would not guarantee people clear access to information about how such an automated system worked. Without transparency about how decisions are reached, and how systems work, there can be no meaningful mechanism for seeking redress – a factor that those in Australia say left them feeling “vilified and worn-down”.

But the UK government is in a position to prevent this becoming a reality.

Mia Leslie is a research fellow at the Public Law Project
