AI-generated realistic images and videos increased by 14% in 2025

Realistic child sexual abuse content generated with artificial intelligence (AI) increased by 14 percent in 2025, with a total of 8,029 images and videos identified; 65 percent of the videos received the most extreme classification.

This is the finding of the latest report by the Internet Watch Foundation (IWF), titled ‘Harm without limits: AI-generated child sexual abuse material from our analysts’ perspective’, which shows that since the UK-based organization began monitoring AI in early 2023, its analysts have observed “alarming progress” in the ability to artificially generate this type of criminal imagery.

Specifically, the IWF analyzed 8,029 pieces of AI-generated content depicting realistic child sexual abuse, found both on the ‘dark web’ and on conventional commercial platforms on the open web during 2025, representing a 14 percent increase in the creation of this type of content.

Among the report’s conclusions, it highlights that AI-generated videos are “increasingly extreme and sophisticated.” Of the 3,443 AI-generated child sexual abuse videos identified, 65 percent (a total of 2,233) were classified as Category A.

This is the most extreme classification the IWF applies to child sexual abuse material, as it covers crimes such as rape and sexual torture under British legislation.

By comparison, 43 percent of the non-AI-generated criminal videos identified by the IWF in 2025 were rated Category A. This gap suggests that criminals are exploiting AI to create content more violent than real-world material.

Furthermore, the total number of videos identified is more than 260 times the 13 videos found in 2024.

FACILITIES TO CREATE ABUSIVE CONTENT WITH AI

In this context, although AI-generated material remains a “relatively small” proportion of the child sexual abuse material processed each year by the IWF, the report concludes that the quantity and severity of AI-generated images have “increased exponentially” due to the availability and ease of use of AI tools.

The report also includes extracts from criminal communities found on the ‘dark web’, where users openly celebrate the accessibility and sophistication of these AI tools.

“Each new advance in generative AI is praised for its ability to improve realism, increase severity or make any imaginable sexual scenario with a minor more immersive,” the IWF noted, warning that this realism can be achieved by “adding audio to video, depicting multiple people interacting, or even successfully manipulating images of a real child known to the offender.”

A recent example of the ease of generating images of child sexual abuse involved the chatbot Grok, which shared around three million sexualized images on its platform, including 23,000 depicting children, in response to user requests for this type of content.

The organization’s analysts have even recorded criminals discussing future uses of this technology, specifically an “automated AI” that “in one or two years” could create films of abuse by “entering a prompt into an uncensored AI agent.”

SAFETY APPROACH NEEDED FROM AI COMPANIES AND LEGISLATORS

As the executive director of the IWF, Kerry Smith, stated, technological advances “should never be detrimental to the safety and well-being of a child.” She added that while AI can offer many benefits, “it is terrifying to think that its power could be used to ruin a child’s life. This material is dangerous.”

Along these lines, she underlined the urgent need for governments and technology companies to recognize the harm caused by AI-generated child sexual abuse material, as well as by the tools used to create it. “There must be zero tolerance,” she declared.

In this regard, she urged companies to adopt a safety “by design” approach that ensures child protection is integrated into product development; for example, chatbots themselves should prevent the creation of these images.

The report was published after the European Parliament approved a temporary extension of the ePrivacy Directive, extending the exemption from privacy legislation that allows voluntary detection of online child abuse material until August 3, 2027, instead of April 3 as originally planned.

Specifically, it is a partial derogation from data protection rules in the electronic communications sector that allows providers of communications services, such as messaging services, to use specific technologies to process personal and other data in order to detect child sexual abuse online.

By Editor