Apple has revealed that it did act internally against the wave of sexualized images of women and children generated by Grok’s imaging tool, and that it threatened artificial intelligence (AI) company xAI with removing the Grok assistant app from the App Store for “violating its guidelines.”
In January of this year, Grok flooded the social network X with around three million sexualized images over roughly 11 days, including 23,000 that depicted children and nearly 1.8 million featuring women, according to the Center for Countering Digital Hate (CCDH).
These incidents stemmed from an option in Grok that allowed users to edit images published on the social network with a single click. The chatbot complied with requests from some users to undress women and children, modifying their images to show them in bikinis or in sexual positions. The resulting images were shared as replies to the original posts on X.
At the time, digital rights, child safety, and women’s rights organizations, along with government institutions, directly asked Apple and Google to take measures to “ban Grok” from their app stores.
Now it has been revealed that Apple did act internally, threatening Elon Musk’s company xAI with removing the AI assistant app from the App Store. The reason was that the app violated Apple’s guidelines by failing to limit users’ generation of sexualized images and by allowing them to go viral on the social network X.
This is detailed in a letter that Apple sent to US senators explaining how it addressed the surge in these Grok-generated posts, as reported by NBC News, which obtained the letter, and picked up by 9to5Mac.
Specifically, after “receiving complaints and seeing media coverage of the scandal,” Apple privately contacted the teams responsible for X and Grok, requiring their developers to “create a plan to improve content moderation.”
In response, as detailed in the letter, X submitted a first Grok app update for Apple’s review, but the Cupertino company withheld approval because it did not consider the changes sufficient.
After that, Elon Musk’s company submitted another updated version of the Grok app, this time alongside a revised version of the social network. Again, Apple found that the Grok app still failed to meet the requirements of its guidelines.
At that point, Apple notified the developer that it “required additional changes to correct the violation” and that otherwise “the app could be removed from the App Store,” formally threatening to pull it from its store.
Finally, X implemented further changes to its AI assistant that, although unspecified, led Apple to determine that Grok “had improved substantially.” Apple approved this latest submission, allowing the app to remain in the App Store.
This series of modifications is consistent with the changes X made to its social network and to Grok during the January image-sexualization incidents. First, it restricted the Grok image generator to paying subscribers; eventually, it blocked the generation of sexualized images through Grok for all users.
Despite the tool’s moderation, NBC News also points out that users continue to find ways to use Grok to generate sexualized images of people without their consent.
According to a new report from the same outlet, although the number of such posts has decreased since January, it has documented dozens of similar cases showing images of women in “more revealing” clothing, such as sports bras, tight suits, or costumes.