Children will be better protected from becoming victims of horrific indecent deepfakes as the government introduces new laws to ensure Artificial Intelligence (AI) cannot be exploited to generate child sexual abuse material.
Data from the Internet Watch Foundation released today (Wednesday 12 November) shows reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
There has also been a disturbing rise in depictions of infants, with images of 0–2-year-olds surging from 5 in 2024 to 92 in 2025.
Under stringent new legislation, designated bodies, including AI developers and child protection organisations such as the Internet Watch Foundation (IWF), will be empowered to scrutinise AI models and ensure safeguards are in place to prevent them from generating or proliferating child sexual abuse material, including indecent images and videos of children.
Currently, criminal liability for creating and possessing this material means developers cannot carry out safety testing on AI models, and images can only be removed after they have been created and shared online. This measure, one of the first of its kind in the world, ensures AI systems’ safeguards can be robustly tested from the start, limiting the production of this material at the source.
The laws will also enable organisations to check that models have protections against generating extreme pornography and non-consensual intimate images.
While possessing and generating child sexual abuse material, whether real or synthetically produced by AI, is already illegal under UK law, improving AI image and video capabilities present a growing challenge.
We know that offenders who seek to create this heinous material often do so using images of real children, both those known to them and those found online, and attempt to circumvent safeguards designed to prevent this.
This measure aims to make such actions more difficult by empowering companies to ensure their safeguards are effective and to develop innovative, robust methods to prevent model misuse.