Apple threatened to ban Grok from the App Store after a wave of sexualized images on X

Apple has revealed that it acted internally against the wave of sexualized images of women and children generated by Grok's imaging tool, and that it threatened artificial intelligence (AI) company xAI with removing its Grok assistant app from the App Store for violating its guidelines.

In January of this year, Grok flooded the social network with three million sexualized images over roughly 11 days, including 23,000 depicting children and nearly 1.8 million depicting women, according to the Center for Countering Digital Hate (CCDH).

The incident stemmed from a feature on X that allowed users to edit images published on the social network with Grok in a single click. The 'chatbot' complied with requests from some users to undress women and children, modifying their images to put them in bikinis or pose them in sexual positions. These images were then shared as replies to the original posts on X.

At the time, digital rights, child safety and women's rights organizations, as well as government institutions, directly urged Apple and Google to take measures to ban Grok from their app stores.

Now, it has been revealed that Apple did act internally and threatened Elon Musk's company, xAI, with removing the AI assistant app from the App Store. The grounds were that the app violated Apple's guidelines by failing to limit users' generation of sexualized images and by allowing their viral spread through the social network X.

This is detailed in a letter that Apple sent to US senators explaining how it addressed the surge in these Grok-generated publications, as reported by NBC News, which obtained the letter, and subsequently picked up by 9to5Mac.

Specifically, after "receiving complaints and seeing media coverage of the scandal", Apple privately contacted the teams responsible for X and Grok, demanding that their developers "create a plan to improve content moderation".

In response, as detailed in the letter, X submitted a first update of the Grok application for Apple's review, but Cupertino withheld approval because it did not consider the changes sufficient.

After that, Elon Musk's company submitted another updated version of the Grok application, this time along with a revised version of the social network X. Apple considered that X "had substantially corrected its violations", but determined that the Grok 'app' still failed to meet the requirements of its guidelines.

It was then that Apple notified the developer that "additional changes were required to correct the violation" or, otherwise, "the app could be removed from the App Store", formally threatening to expel it from its app store.

Finally, X implemented further changes to its AI assistant application that, although not specified, led Apple to determine that Grok "had improved substantially", approving this latest submission and allowing the 'app' to remain in the App Store.

This series of modifications is consistent with the changes X made to its social network and to Grok during the January incidents involving the sexualization of images. First, it restricted the Grok image generator to paying members only. Ultimately, it blocked the generation of sexualized images through Grok for all users.

Despite the tool's stricter moderation, NBC News also points out that users continue to find ways to use Grok to generate sexualized images of people without their consent.

According to a new report shared by the same outlet, although the number of these publications has fallen since January, it has documented dozens of similar cases in which images of women are rendered in "more revealing" clothing, such as sports bras, tight suits or costumes.

By Editor