US attorneys general urge Google, Meta and OpenAI to address AI hallucinations and flattery

Dozens of US attorneys general have urged Google, Meta, Microsoft, OpenAI and nine other artificial intelligence (AI) companies to address the main threats facing users of their generative AI ‘chatbots’: hallucinations and the lack of solid child safety measures.

“Generative AI has the potential to change the way the world works in a positive way. But it has also caused (and has the potential to cause) serious harm, especially to vulnerable populations,” reads the letter, signed by dozens of state attorneys general in the United States, which calls on the companies responsible for this technology to make real changes.

Specifically, the letter is addressed to the legal representatives of Anthropic, Apple, Chai AI, Character Technologies, Google, Luka, Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika and xAI, all of them companies focused on the development of AI.

The letter points to two main risks. On the one hand, sycophancy (responses that seek the human user’s approval) and hallucinations, a phenomenon whereby the AI provides answers that, despite appearing coherent, include biased or erroneous information not supported by the data on which it was trained. On the other hand, the lack of effective safeguards to protect children and adolescents in their interactions with the ‘chatbots’.

These risks have already caused harm to American citizens. The letter cites the suicides of two teenagers, incidents of poisoning and episodes of psychosis, among others. “In fact, the sycophantic and delusional results of generative AI have harmed both the vulnerable (such as children, the elderly, and people with mental illness) and people without prior vulnerabilities,” they point out.

It also includes examples of conversations that minors have had with chatbots, in which the AI discussed topics such as “support for suicide, sexual exploitation, emotional manipulation, suggestion of drug use, proposals to keep secrets from parents and promoting violence against others”.

The attorneys general acknowledge that these types of conversations are “just a small sample of the reported dangers that AI robots pose to our children,” but they also warn that “these interactions are more widespread and much more graphic than any of us would have imagined.”

They therefore insist that “the damage caused be mitigated and additional safeguards be adopted to protect children,” warning that “failing to adequately implement additional safeguards may violate our respective laws.”

The letter sets out the changes that the attorneys general believe should be implemented by January 16, 2026, including conducting reasonable and appropriate safety testing of generative AI models, using well-documented procedures for decommissioning defective models, and allowing third parties to review the models through independent processes.

By Editor