Operational document indicates that the department has made good on a pledge made by its leader in January to use new technology to better identify and support vulnerable users
The Department for Work and Pensions has implemented an algorithm for the Universal Credit journal service which “predicts whether a message from a customer may indicate a risk of harm based on the words contained in the message”.
Earlier this year, DWP permanent secretary Peter Schofield told the Public Accounts Committee about the department’s current and future plans to use automated tech to better support vulnerable users by “identifying serious harm risks”.
In a newly published transparency record, the department has provided details of an “algorithmic tool [that] is integrated into the digital communication channel for Universal Credit”, via the journal service that allows recipients to send messages to their case manager. About two million messages are sent through the platform each month.
Case managers each have a dashboard displaying details of their current caseload – on which “urgent journal messages are highlighted in red as a visual flag for them to take the appropriate action”, which may include checking in with the UC recipient or taking some form of intervention to prevent harm.
The DWP indicates that “regular expression searches are used to identify phrases and words in journal messages” that might indicate a pressing risk.
“These [words and phrases] are multiplied with weights to predict the probability of the message being urgent,” the record says. “If the weight is over the chosen threshold, the urgent flag is applied to the journal message, marked as being applied by the model. An agent flag can also be applied for retraining purposes. New weights and regular expression terms can be included when the model is retrained.”
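The record does not publish the model’s terms, weights or threshold, but the mechanism it describes resembles a simple weighted keyword scorer. The following is a minimal Python sketch of that idea, in which every pattern, weight and threshold value is invented for illustration:

```python
import re

# Illustrative only: the DWP record does not publish its regular
# expression terms, weights or threshold, so every value below is invented.
WEIGHTED_PATTERNS = [
    (re.compile(r"can'?t\s+cope", re.IGNORECASE), 0.6),
    (re.compile(r"no\s+food", re.IGNORECASE), 0.4),
    (re.compile(r"evict(ed|ion)?", re.IGNORECASE), 0.3),
]
THRESHOLD = 0.5  # the record says only "the chosen threshold"

def score_message(text: str) -> float:
    """Sum the weight of every regular expression that matches the text."""
    return sum(w for pattern, w in WEIGHTED_PATTERNS if pattern.search(text))

def is_urgent(text: str) -> bool:
    """Apply the urgent flag when the combined weight exceeds the threshold."""
    return score_message(text) > THRESHOLD

print(is_urgent("I can't cope and there is no food in the house"))  # True
```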
Before the introduction of the algorithm, Universal Credit teams relied solely on “the manual flagging of journal messages by advanced case managers [in a] process [that] took over 24 hours”.
Whether messages are flagged by human experts or the automated tool, the intention is to enable frontline caseworkers to prioritise responses and expedite those “where contact from a case manager may help mitigate a risk of harm”.
The record reveals that the tool has a precision rate of 15.3% – meaning that, of every 100 messages flagged by the system, about 15 have been correctly identified as risky. But the recall rate is much higher: 79.7%. This means that, of all messages received by the platform that should be flagged, the algorithm catches about four-fifths.
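The two published rates allow a rough sense of what this means in practice. The arithmetic below uses an invented total of 1,000 flagged messages to make the figures concrete; only the 15.3% and 79.7% rates come from the record:

```python
# Worked arithmetic using only the published rates; the flag count of
# 1,000 is invented to make the numbers concrete.
precision = 0.153
recall = 0.797

flagged = 1000
true_positives = flagged * precision        # ~153 flags are genuinely urgent
false_positives = flagged - true_positives  # ~847 false alarms

actually_urgent = true_positives / recall   # ~192 urgent messages in total
missed = actually_urgent - true_positives   # ~39 urgent messages not flagged

print(f"~{false_positives:.0f} false alarms, ~{missed:.0f} urgent messages missed")
```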
“The accuracy and precision of the model is monitored in a dashboard,” the record says. “The model can be retrained if accuracy or precision become unacceptable.”
Feedback and training
The tool is now used to screen all two million journal messages each month – all of which are also then read by the relevant human case manager, regardless of whether they have been flagged.
Although the number of messages incorrectly deemed urgent represents a very high proportion of all flags, the transparency document says that mitigations against this inaccuracy include “collecting feedback to improve the model, appropriate training for users of the urgent flags, [and that] any missed messages will be replied to using business as usual processes”.
Some of these incorrect determinations may be the result of “non-standard spelling”, the record indicates – but the tool is designed to assess only what is written by UC recipients, so as to minimise the amount of personal data to which it is potentially exposed.
The record says: “Using just the text, and no context, to journal messages may lead to some inaccurate predictions, however, this reduces the amount of sensitive information that is processed by the tool.”
The model was developed using a training process that incorporated data from historical UC-claimant messages – 53,635 of which were urgent, and 5.35 million of which were non-urgent.
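The record does not name the learning method, but the training data it describes is heavily imbalanced – roughly one urgent message for every hundred non-urgent ones. A hypothetical sketch of one common way such an imbalance is handled, assuming a simple bag-of-words classifier (nothing about the estimator or features is confirmed by the DWP):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical: the record names neither the estimator nor the features.
# With urgent messages outnumbered roughly 100:1, class weighting is one
# standard way to stop a model from simply predicting "non-urgent" always.
model = make_pipeline(
    CountVectorizer(),  # bag-of-words features over message text
    LogisticRegression(class_weight="balanced", max_iter=1000),
)

# Toy stand-ins for the 5.4m-message training set described in the record.
texts = ["I cannot cope any more", "please update my bank details"]
labels = [1, 0]  # 1 = urgent, 0 = non-urgent
model.fit(texts, labels)
print(model.predict(["I cannot cope"]))  # [1] under this toy fit
```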
Now that it is in widespread use, “the tool is maintained by trained data scientists and developers”, according to the record.
“Guidance and learning is updated when any new feature is released that impacts case managers,” the document adds. “All agents are trained in responding to urgent contact that indicates a risk of harm.”
Although steps have been taken to minimise data use, both the messages used for training and those processed by the tool in operation may still contain sensitive information.
“As journal messages allow Universal Credit claimants to provide information in free text, it is possible they may provide some sensitive information. This is the prerogative of the claimant,” the record says. “Data is stored in a secure analytical environment. Access is restricted to Universal Credit data scientists and other analysts using Universal Credit data. DWP will retain data in line with the data retention policy post claim closure for research and statistical purposes. Role-based accesses are enforced to ensure that data scientists and analysts can only access information required for this specific use-case. Data protection impact assessments are in place to ensure that data protection risks relating to matters such as sharing and access to sensitive data have been considered.”

