Labour MP: ‘Trust in AI is built through competence – not glossy PR’


MP with tech policy background argues for rules that address the specific tasks performed by artificial intelligence, rather than just the sentiment attached to such use cases

Crucial public trust in artificial intelligence systems must be built not through “glossy PR campaigns”, but through rigorous standards that demonstrate the efficacy of the technology, according to a Labour MP.

Dan Aldridge – the member for Weston-super-Mare, who joined parliament last year from his previous job as head of policy at the British Computer Society – described the current state of generative AI as being “in a strange limbo where chatbots can write essays, fake court transcripts, even emotionally manipulate users – and developers claim they’re ‘correctly aligned’ because someone ticked a checklist”.

In a piece for PublicTechnology sister publication The House, Aldridge cited a need for agreed metrics that “measure how well an AI performs on its intended task, across different groups and conditions”.

“We have mountains of frameworks discussing bias, robustness, and trust as abstract concepts, with little acknowledgment that AI systems are built to do specific things,” he wrote.

Without such targeted standards, he argued, “you’re not measuring safety – you’re measuring sentiment”.

“It’s like focusing on a car’s paintwork without checking whether the brakes work,” Aldridge said. “When AI fails, it often fails quietly and badly. A misquote. A mislabelled name. An exam flagged as plagiarised because a neurodiverse student thinks differently. This isn’t sci-fi. It is real-world harm caused by systems that haven’t been tested where it counts. And when systems fail more often for certain groups – because of poor training data or incomplete testing – that’s not just a technical issue. That’s an ethical failure. Bias isn’t a philosophical concept; it’s a broken system. Until we treat it like one, we’ll keep hiding accountability behind complexity.”

The MP added: “Trust in AI isn’t built through glossy PR. It’s built through competence. When an AI system makes decisions about healthcare, finance, policing or education, people deserve to know how it works, whether it works for them, and what happens when it doesn’t. That means testing AI at the task level – repeatedly, and in the messy real world, not just in sterile labs.”

Elsewhere in his piece, Aldridge said that the Labour administration is intent on “putting accountability at the heart of AI regulation” via the new Data (Use and Access) Bill.

PublicTechnology staff
