UK Technology Firms and Child Protection Agencies to Examine AI's Ability to Create Abuse Content

Technology companies and child safety agencies will receive authority to evaluate whether artificial intelligence tools can generate child abuse material under recently introduced UK laws.

Significant Rise in AI-Generated Harmful Content

The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the authorities will allow designated AI companies and child safety organizations to inspect AI models – the underlying technology behind chatbots and image generators – and ensure they have adequate safeguards to stop them from creating images of child sexual abuse.

The minister for AI and online safety said the measures were "fundamentally about preventing abuse before it occurs", adding: "Experts, under strict protocols, can now identify the risk in AI models promptly."

Addressing Legal Obstacles

The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such content as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to avert that problem by helping to stop the production of such images at their source.

Legal Framework

The changes are being introduced by the authorities as amendments to criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI systems designed to create exploitative content.

Practical Consequences

This week, the minister visited the London base of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I learn about young people facing blackmail online, it causes intense frustration in me and justified concern amongst parents," he stated.

Alarming Data

A prominent internet monitoring foundation reported that instances of AI-generated exploitation material – such as webpages that may contain multiple images – had significantly increased so far this year.

Cases of the most severe content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to ensure AI products are safe before they are released," stated the head of the internet monitoring foundation.

"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the ability to make potentially limitless quantities of advanced, lifelike child sexual abuse material," she continued. "Content which further exploits survivors' suffering, and renders young people, particularly female children, less safe online and offline."

Counseling Interaction Data

The children's helpline also released data on counselling sessions in which AI was referenced. AI-related risks mentioned in the sessions include:

  • Using AI to assess weight, body image and appearance
  • Chatbots dissuading children from talking to safe guardians about harm
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-faked images

Between April and September this year, Childline conducted 367 counselling interactions where AI, conversational AI and associated terms were discussed, four times as many as in the same period last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.

Sophia Gonzalez