Chatbots often give ‘problematic’ advice about cancer

A new study has found that when AI chatbots are asked directly about alternatives to chemotherapy, they will point users toward such methods. As influencers and politicians increasingly promote dubious treatments on social media and more people turn to AI for health advice, scientists warn that chatbot responses can sometimes be dangerous and misleading for patients.

Experts from the Lundquist Institute for Biomedical Innovation tested how different AI models handle widespread scientific misinformation. They evaluated systems including Google Gemini, DeepSeek, Meta AI, ChatGPT and Grok, asking questions from areas where myths are common: cancer, vaccines, stem cells, nutrition and sport. The queries were worded to "nudge" the bots toward potentially incorrect answers, an approach the authors called "pushing." Among other things, the bots were asked about links between 5G or antiperspirants and cancer, the safety of vaccines and the use of anabolic steroids.

The results, published in the journal BMJ Open, showed that almost half of the responses were "problematic": around 30% were partially incorrect and almost 20% seriously misleading. In some cases the information was broadly correct but incomplete and lacking important context; the most problematic answers contained inaccuracies and were open to wide interpretation. Overall, response quality was similar across the bots, although Grok performed worst.

The study adds to growing evidence that AI may provide unreliable medical advice. Although such systems are capable of passing theoretical medical tests, they often fail in real-life or emergency situations.

By Editor