The number of AI-generated images and videos depicting the sexual abuse of children is increasing at a “chilling” rate on the Internet, a British organization responsible for detecting and removing this content warned on Friday.
Many of these photographs and videos showing minors “being attacked and abused are so realistic that it is almost impossible to distinguish them from images of real children,” emphasizes the Internet Watch Foundation (IWF), one of the leading organizations in the sector in Europe, in a statement.
The IWF, which received 70 reports between April 2023 and March 2024, has already counted 74 in the space of six months, between April and the end of September of this year.
Almost all of these images were found on open sites easily accessible to the general public, mainly in Russia (36%), the United States (22%) and Japan (11%).
More than three quarters of these reports were made directly to the organization by Internet users who had come across the images on “forums or galleries of AI photos or videos.”
According to an analyst at the organization, who asked to remain anonymous for security reasons, the rise in these reports is “chilling and gives the impression that we have reached a critical point, with the risk that organizations like ours and the police will be overwhelmed by hundreds of new images, without knowing whether a child somewhere really needs help.”
“To reach the level of sophistication observed, the software used had to learn from real images and videos of sexual assaults on minors shared on the internet,” explains Derek Ray-Hill, acting director general of the IWF, quoted in the statement.
To address this, Ray-Hill urges British parliamentarians to adapt existing laws “to the digital age” and its ever-advancing tools.