Tech firms and child safety agencies will be permitted to test whether artificial intelligence systems can generate child exploitation images under recently introduced British laws.
The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will permit approved AI developers and child safety organizations to inspect AI systems – the foundational systems for conversational AI and image generators – and verify they have adequate safeguards to prevent them from creating depictions of child exploitation.
"This is fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI systems promptly."
The amendments have been introduced because it is against the law to produce and possess CSAM, meaning that AI creators and others cannot create such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before dealing with it.
This law is designed to prevent that issue by enabling experts to halt the creation of those images at source.
The changes are being added by the government as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, creating or distributing AI systems designed to create child sexual abuse material.
Recently, the minister visited the London base of Childline and listened to a simulated call to counsellors featuring a report of AI-based exploitation. The call portrayed a teenager requesting help after facing extortion using an explicit deepfake of himself, created using AI.
"When I hear about young people facing blackmail online, it is a source of intense anger in me and rightful anger amongst families," he said.
A leading internet monitoring foundation stated that cases of AI-generated abuse content – such as webpages that may include numerous files – had more than doubled so far this year.
Instances of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
The law change could "constitute a vital step to ensure AI tools are secure before they are launched," commented the chief executive of the online safety organization.
"AI tools have made it so survivors can be victimised all over again with just a few clicks, giving offenders the capability to make possibly limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which additionally exploits survivors' trauma, and makes children, particularly female children, more vulnerable on and off line."
The children's helpline also released details of support sessions in which AI-related risks were discussed.
Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and associated topics were mentioned, significantly more than in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.