British Technology Firms and Child Protection Agencies to Examine AI's Ability to Create Exploitation Content

Technology companies and child safety agencies will receive authority to evaluate whether AI tools can generate child exploitation material under new UK legislation.

Significant Increase in AI-Generated Harmful Material

The announcement came alongside findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the changes, the government will allow approved AI developers and child protection groups to inspect AI systems – the underlying technology for conversational AI and visual AI tools – to ensure they have adequate protective measures to stop them from producing depictions of child exploitation.

The changes are "ultimately about preventing abuse before it happens," declared Kanishka Narayan, noting: "Experts, under strict conditions, can now identify the risk in AI models early."

Addressing Legal Challenges

The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot create such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before acting on it. The new law is designed to prevent that problem by helping to halt the creation of such images at their source.

Legislative Framework

The government is introducing the amendments as modifications to criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models developed to generate child sexual abuse material.

Practical Impact

Recently, the official visited the London headquarters of a children's helpline and heard a mock-up call to counsellors featuring a report of AI-based abuse.
The call portrayed an adolescent requesting help after facing extortion using an explicit AI-generated image of himself. "When I hear about young people experiencing extortion online, it is a source of extreme anger in me and rightful concern amongst parents," he stated.

Alarming Data

A leading internet monitoring foundation reported that instances of AI-generated exploitation material – such as online pages that may contain multiple files – had significantly increased so far this year:

- Instances of the most severe content – the most serious category of exploitation – rose from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of prohibited AI depictions in 2025.
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025.

Industry Reaction

The law change could "constitute a crucial step to guarantee AI products are secure before they are launched," stated the head of the internet monitoring organisation.

"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the capability to create potentially endless amounts of advanced, photorealistic exploitative content," she added. "Content which further commodifies survivors' trauma, and renders young people, particularly female children, more vulnerable on and off line."

Counselling Interaction Data

Childline also released information on support sessions where AI was mentioned. AI-related risks discussed in the conversations include:

- Employing AI to evaluate weight, body and looks
- AI assistants discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-manipulated images

Between April and September this year, Childline delivered 367 support sessions where AI, chatbots and related terms were mentioned, significantly more than in the equivalent timeframe last year.
Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapeutic applications.