Microsoft, Google, Meta and OpenAI join forces to combat AI-generated child sexual abuse imagery

Microsoft, Meta, Google and OpenAI, among other large technology firms developing generative Artificial Intelligence (AI) tools, have committed to combating child sexual abuse material (CSAM) created with this technology and to following a series of safety-by-design measures.

In 2023, more than 104 million files suspected of containing CSAM were reported in the United States, and an influx of AI-generated images “poses significant risks for an already overburdened child safety ecosystem,” as the child safety organization Thorn points out.

Thorn and All Tech Is Human, an organization that promotes the responsible use of technology, have been joined by Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, Stability AI and Teleperformance in an initiative that seeks to protect minors from the misuse of AI.

Specifically, the technology firms have publicly committed to so-called safety-by-design principles, a set of measures born of the need to keep this type of content from being easy to create.

Today, cybercriminals can use generative AI for a variety of purposes: making it harder to identify child victims, generating more demand for abuse material, and facilitating the exchange of information among sexual predators.

Hence the safety-by-design measures, which require companies to “anticipate where threats may occur during the development process” of these tools and to design the necessary safeguards before any harm occurs.

First, the signatories have committed to developing, building and training generative AI models that proactively address child safety risks, a process that requires vetting the training data of their models so that they do not reproduce abusive content.

This commitment also covers the need to watermark AI-generated images and to use other techniques that signal that the content was produced by AI.
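The pledge does not prescribe a particular technique, but the simplest form of such a signal is a provenance tag embedded in the file itself. The sketch below, using the open-source Pillow imaging library, is purely illustrative; the "ai_generated" metadata key is an assumption for this example, not an industry standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a provenance tag in a PNG's text metadata.

    Illustrative only: the "ai_generated" key is an assumption, not a
    standard, and a metadata tag is easy to strip, which is why real
    deployments pair it with watermarks embedded in the pixels.
    """
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    image.save(dst_path, pnginfo=meta)

def is_labeled_ai_generated(path: str) -> bool:
    """Return True if the provenance tag is present (PNG files only)."""
    return Image.open(path).text.get("ai_generated") == "true"
```

Because tags like this are trivially removed when a file is re-encoded or screenshotted, they are complemented by sturdier approaches, which is what the reference to other techniques points to.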

The initiative also commits the technology giants to publishing and distributing generative AI models only after those models have been trained and evaluated for child safety.

They have also agreed to maintain the safety of their AI models and platforms by understanding and actively responding to child safety risks, investing in research and future technological solutions, and deploying measures to detect and remove images that violate child safety from their platforms.

GOOGLE IS ALREADY TAKING ACTION

For its part, Google has discussed its commitment to these generative AI safety principles on its blog, noting that it already has a series of tools in place to stop CSAM. Specifically, it uses a combination of hash-matching technology, AI classifiers and reviews by its human content teams.
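Google's exact stack is proprietary, but the general idea behind hash matching can be sketched with an open-source perceptual hashing library: known abusive images are reduced to compact fingerprints, and new uploads are compared against that list without storing the originals. The following minimal sketch uses the imagehash package; the hash value and match threshold are illustrative assumptions, not real data.

```python
import imagehash
from PIL import Image

# Illustrative fingerprint list, stored as hex strings. Real systems
# match against vetted databases of known CSAM hashes.
KNOWN_BAD_HASHES = [imagehash.hex_to_hash("d1d1b1a1c1e1f101")]  # placeholder

# Maximum Hamming distance at which two perceptual hashes are treated
# as the same image; the value 5 is an illustrative assumption.
MATCH_THRESHOLD = 5

def matches_known_bad(path: str) -> bool:
    """Perceptual-hash an image and compare it against the blocklist."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - bad <= MATCH_THRESHOLD for bad in KNOWN_BAD_HASHES)
```

Hash matching only recognizes material that has already been identified, which is why it is paired with AI classifiers that can flag previously unseen content and with human review to confirm matches.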

“When we identify exploitative content, we remove it and take appropriate action,” the company says, which may include reporting the incident to the US National Center for Missing and Exploited Children (NCMEC), with which it collaborates.

Google has also said it uses machine learning to identify searches related to child sexual abuse and exploitation and to block results that could exploit or sexualize children.
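Google does not describe its models, but the general pattern of flagging risky queries can be illustrated with a small supervised text classifier. In the sketch below, the training queries and labels are placeholders invented for the example; a real system would train on large, vetted datasets and combine the score with many other signals.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: benign queries labeled 0, policy-violating
# queries labeled 1. These strings are stand-ins, not real examples.
queries = [
    "weather in madrid tomorrow",
    "how to bake sourdough bread",
    "placeholder policy-violating query A",
    "placeholder policy-violating query B",
]
labels = [0, 0, 1, 1]

# Character/word n-gram features feeding a linear classifier.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(queries, labels)

def should_suppress_results(query: str, threshold: float = 0.9) -> bool:
    """Return True when the predicted violation probability is high."""
    return classifier.predict_proba([query])[0][1] >= threshold
```

In production, a score like this would be only one signal among several, with borderline cases escalated to human reviewers.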

Likewise, it runs a priority alert program through which partner experts identify potentially violating content so that Google's safety teams can review it.

By Editor
