British Technology Companies and Child Protection Agencies to Test AI's Capability to Create Exploitation Content

Tech firms and child protection organizations will be granted authority to assess whether artificial intelligence systems can generate child abuse material under new UK legislation.

Substantial Rise in AI-Generated Illegal Content

The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, designated AI developers and child safety organizations will be permitted to inspect AI models – the foundational technology behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from producing images of child sexual abuse.

"This is ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now detect the danger in AI systems promptly."

Addressing Regulatory Challenges

The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and others cannot generate such images as part of a testing process. Until now, authorities could act only after AI-generated CSAM had been uploaded online.

This law is designed to avert that issue by enabling specialists to halt the creation of such material at its source.

Legal Framework

The changes are being introduced by the authorities as revisions to the crime and policing bill, which also implements a ban on owning, creating or distributing AI systems designed to create child sexual abuse material.

Real-World Impact

This week, the official visited the London base of a children's helpline and heard a simulated call to counsellors involving an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about young people facing blackmail online, it causes extreme frustration in me and rightful concern amongst families," he said.

Concerning Statistics

A prominent online safety organization said that cases of AI-generated abuse material – where a single case can be a web page containing numerous images – had risen sharply so far this year.

Cases of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a crucial step to guarantee AI tools are secure before they are released," stated the head of the internet monitoring foundation.

"AI tools have made it so victims can be targeted all over again with just a few simple actions, giving offenders the capability to make potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Material which further compounds victims' trauma, and renders young people, particularly girls, less safe both on and offline."

Support Interaction Data

Childline also published data from counselling sessions in which AI was mentioned. AI-related harms raised in those conversations include:

  • Using AI to rate weight, body and looks
  • Chatbots dissuading children from consulting safe adults about abuse
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-faked pictures

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related topics were mentioned – four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and of AI therapy applications.

Jerome Baldwin