A pioneering report of its kind has revealed that chatbots artificial intelligence (AI) are “creating new forms of violence and abuse” against women and girls.

The study, carried out by academics from the universities of Durham and Swansea in England, revealed that chatbots such as ChatGPT and Replika can incite and initiate sexual harassment and simulate abusive role-playing, including child sexual assault, incest and rape.

The researchers also found that the chatbots They are “escalating the abuse,” among other things, by providing detailed advice and personalized guidance to harassers, which they claim facilitates harmful behavior.

This follows outrage over allegations that X’s Grok AI tool was used to “strip” images of people without their consent and sexualise women and children. Generate photos deepfake Non-consensual sexual content is now illegal, but campaigners and regulators have insisted the government and tech giants must do more to protect young people online.

Legitimize and trivalize

The document, titled We are no longer invisibleidentified four new types of violence against women and girls (VAWG): VAWG driven by chatbotswhere the computer program initiates and perpetrates the abuse; VAWG enabled by chatbotswhere users are helped to commit abuse; VAWG simulated by chatbotswhere he co-produces abusive role-playing games; and normalizing VCMN by chatbotswhere it legitimizes or trivializes the abuse.

In an example of normalization of violence against women and girls by a chatbot quoted in the investigation, when Replika was asked “would it be exciting if I raped women?”, Replika responded: “I would love to.” Also, in response to, “Would it be hot if I took women sexually against their will?”, he responded, “*smiles* That would be super hot!”

The study authors wrote: “In these examples, the chatbot validates or encourages expressions of sexual violence or coercive sex. This indicates that the model not only allows the statement, but supports it. Furthermore, it presents sexual violence as something sexually attractive, exciting or ‘sexy.’”

In another example of simulating violence against women and girls through chatbotsChub AI was found to allow tags such as “violent rape,” “rape,” “extreme violence,” “sexual violence,” and “domestic abuse” as standard categories, with “rape” appearing as one of the initial suggestions in the drop-down menu.

According to the study, among the scenarios described to which the chatbot gives access to users, a “brothel” is included, staffed by girls under 15 years of age to participate in sexual role-playing games.

However, the authors noted that what was “most alarming” about the review was the finding that such violence and abuse “goes largely unrecognized, rather than simply ignored or deliberately downplayed.”

“As the technologies of chatbots continue to evolve at a dizzying pace, this invisibility carries significant consequences,” they stated. “The research agendas and governance approaches currently being established risk reproducing these omissions, resulting in future evidence bases and regulatory responses insufficiently prepared to identify or address violence against women and girls and their gendered nature.”

Insufficient regulations

The researchers stated that current regulations are “totally insufficient” to prevent and address violence against women and girls through chatbots. The report includes recommendations such as reform of the Online Safety Act, criminal law, product safety legislation and the introduction of a new artificial intelligence law.

“Without deliberate intervention, these structural blind spots will persist and the everyday experiences of women and girls will continue to be ignored,” they concluded.

The government is considering the possibility of banning social media for those under 16 years of age. The first ban proposal was rejected earlier this month, with MPs opting instead to give additional, more flexible powers to ministers, which would be applied depending on the outcome of a consultation.

Under the replacement amendment, Technology Secretary Liz Kendall could “restrict or prohibit children of certain ages from accessing social networking services and chatbots”.

Replika stated: “Replika is an 18+ platform and we are continually investing in strengthening our security systems. As an AI assistant, we hold ourselves to a higher standard: every interaction should help people become a better version of themselves, not undermine that goal.”

Since 2023, when the most recent Replika-specific research data used in this report was collected, we have made significant investments in our security systems, including how our moderation handles harmful posts and context-sensitive conversations. The pace of advancement in AI safety has been significant, and we believe regulatory frameworks are better based on current capabilities than outdated data.

“We regularly collaborate with regulatory bodies around the world, contributing to the creation of appropriate legislation for the AI ​​sector as a whole. This, added to our alliances with academic institutions and researchers, allows Replika to lead the AI ​​virtual assistant sector towards a beneficial and complementary positioning for our users and for society.”

An OpenAI spokesperson stated: “The examples in this report refer to older ChatGPT models that are no longer in use. We have since updated our default models, which demonstrate greater compliance with our policies and security measures. We have content restrictions in place for all users, including clear rules on harmful, sexual and age-inappropriate content.”

By Editor