After a rise in artificially created illegal images, including those depicting the very youngest children, incoming measures aim to enable technical and child-protection experts to better scrutinise AI systems
The government has released legislation intended to allow authorities to work with the tech industry and child-protection organisations to ensure artificial intelligence models cannot be misused to create synthetic child sexual abuse content.
The new law comes after reports of AI-generated child sexual abuse material more than doubled in the past year, rising from 199 in 2024 to 426 in 2025, according to the Internet Watch Foundation (IWF). There has also been a rise in depictions of the youngest infants, with images of 0–2-year-olds surging from five in 2024 to 92 in 2025.
Under the new legislation, designated bodies such as AI developers and child-protection organisations, including the IWF, will be allowed to scrutinise AI models for illegal material. This is intended to ensure safeguards are in place to prevent the models from generating or proliferating child sexual abuse material, including indecent images and videos of children.
Currently, developers can be held criminally liable if they create and possess this kind of material, even when the purpose is to carry out safety testing on AI models. Because of this, images can only be removed after they have been created and shared online. The new measure will enable designated bodies to test an AI system’s safeguards from the outset, ensuring that these systems are incapable of producing child sexual abuse material.
To ensure that the testing work is carried out safely and securely, the government intends to bring together a group of experts in AI and child safety for oversight. The group will ensure that sensitive data is protected, prevent any risk of illegal child sexual abuse content being leaked and support the wellbeing of researchers involved, who could be emotionally affected by the testing process.
“We must make sure children are kept safe online and that our laws keep up with the latest threats,” said Jess Phillips, minister for safeguarding and violence against women and girls. “This new measure will mean legitimate AI tools cannot be manipulated into creating vile material and more children will be protected from predators as a result.”
Technology secretary Liz Kendall added: “These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk. By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”
Data from the IWF also shows that the severity of the material identified online has intensified over the past year. The number of category A items, defined as images involving penetrative sexual activity, sexual activity with an animal, or sadism, rose from 2,621 to 3,086. These images now account for 56% of all illegal material reported, compared with 41% last year. Girls have been overwhelmingly targeted by this content, appearing in 94% of illegal AI images in 2025.
“AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material,” said Kerry Smith, chief executive of the IWF. “For three decades, we have been at the forefront of preventing the spread of this imagery online – we look forward to using our expertise to help further the fight against this new threat.”

A version of this story originally appeared on PublicTechnology’s sister publication, Holyrood


