Tech firms and child safety agencies will be granted authority to evaluate whether artificial intelligence systems can produce child exploitation images under recently introduced British legislation.
The announcement coincided with figures from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will allow approved AI developers and child protection organizations to examine AI models – the underlying technology behind conversational AI and image generators – to ensure they have adequate safeguards to stop them from producing images of child exploitation.
"Fundamentally about stopping abuse before it happens," stated the minister for AI and online safety, noting: "Experts, under rigorous protocols, can now identify the danger in AI models promptly."
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI creators and others cannot create such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before addressing it.
This legislation aims to prevent that problem by enabling experts to halt the creation of such images at source.
The government is introducing the changes as amendments to criminal justice legislation, which also bans possessing, creating or sharing AI systems designed to create exploitative content.
This week, the minister visited the London base of a children's helpline, where he listened to a simulated call to counsellors featuring an account of AI-based abuse. The mock call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I learn about children experiencing extortion online, it is a source of intense frustration in me and justified concern amongst families," he stated.
A prominent online safety foundation stated that instances of AI-generated exploitation content – each of which may refer to a webpage containing numerous images – had risen sharply so far this year.
Instances of the most severe category of material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
The legislative amendment could "represent a vital step to guarantee AI products are safe before they are released," commented the chief executive of the online safety foundation.
"AI tools have made it so victims can be victimised repeatedly with just a few clicks, providing criminals the capability to make potentially endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Content which further commodifies survivors' suffering, and renders children, particularly female children, less safe both online and offline."
Childline also released details of counselling sessions in which AI-related risks were mentioned.
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.