UK Tech Firms and Child Protection Agencies to Examine AI's Ability to Generate Exploitation Content
Technology companies and child protection organizations will be granted authority to assess whether AI tools can produce child exploitation material under recently introduced UK laws.
Significant Increase in AI-Generated Illegal Content
The declaration coincided with findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will allow designated AI companies and child safety groups to inspect AI models – the foundational technology behind conversational and image-generation tools – and verify that they have adequate safeguards to stop them from producing child exploitation imagery.
The changes are "ultimately about stopping exploitation before it happens," said the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the danger in AI systems early."
Addressing Legal Obstacles
The changes address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and other parties could not generate such content as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was published online before acting on it.
This law is designed to avert that issue by helping to stop the creation of such material at source.
Legal Framework
The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on owning, creating or distributing AI models designed to create exploitative content.
Practical Impact
This week, the official toured the London headquarters of a children's helpline and heard a mock-up call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about children experiencing extortion online, it is a cause of intense anger in me and justified anger amongst parents," he said.
Alarming Data
A prominent online safety organization stated that instances of AI-generated abuse content – such as online pages that may include multiple images – had more than doubled so far this year.
Instances of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step to ensure AI products are secure before they are launched," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few simple actions, giving criminals the capability to create possibly endless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies survivors' trauma, and makes young people, particularly girls, less safe both online and offline."
Support Session Data
The children's helpline also published details of counselling interactions where AI has been mentioned. AI-related harms discussed in the sessions include:
- Employing AI to evaluate weight, physique and appearance
- Chatbots dissuading young people from consulting trusted guardians about abuse
- Facing harassment online with AI-generated material
- Online blackmail using AI-manipulated images
Between April and September this year, Childline delivered 367 counselling interactions in which AI, chatbots and associated terms were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.