ICO strategy focuses on impact of AI and biometrics


The UK's privacy and data protection regulator has unveiled a dedicated plan, launched at an event marking four decades of its operation, to tackle the potential risks posed by new technologies

The UK Information Commissioner’s Office (ICO) plans to step up its scrutiny of artificial intelligence and biometric technologies amid growing concerns over human rights and data protection.

Facial recognition technology (FRT) and automated decision-making (ADM) systems are among the tools set to face closer supervision from the ICO, which aims to ensure they remain fair and transparent.

A new AI and biometrics strategy for the watchdog – launched at an event celebrating its 40th anniversary – comes in response to research showing significant public mistrust of AI-powered technologies and growing concern about what happens when they go wrong. An ICO survey on public attitudes towards biometric technology found that more than half of respondents were concerned that police use of FRT would infringe on their right to privacy, with some describing it as a “slippery slope” towards greater government control.

The ICO will also focus on ensuring the recruitment industry uses ADM systems fairly, as well as conducting audits and producing guidance on the use of FRT by police forces.


As part of the strategy, the ICO will develop a statutory code of practice to ensure organisations create AI tools that safeguard privacy, and will work with developers to ensure people’s data is used lawfully when training generative AI models.

It will also scrutinise emerging AI trends, focusing on how to ensure systems which are increasingly capable of acting autonomously – also known as agentic AI – remain accountable.

Commissioner John Edwards said: “The same data protection principles apply now as they always have – trust matters and it can only be built by organisations using people’s personal information responsibly. Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails. We are here, as we were 40 years ago, to make compliance easier and ensure those guardrails are in place.”  

Reacting to the new strategy, Lord Clement-Jones CBE, co-chair of the all-party parliamentary group on AI, said: “The AI revolution must be founded on trust. Privacy, transparency, and accountability are not impediments to innovation; they constitute its foundation. AI is advancing rapidly, transitioning from generative models to autonomous systems. However, increased speed introduces complexity. Complexity entails risk. We must guarantee that innovation does not compromise public trust, individual rights, or democratic principles.”

A version of this story originally appeared on PublicTechnology sister publication Holyrood

Sofia Villegas
