They trick Gemini and ChatGPT into giving false information in their answers, just by writing a blog post with the made-up data

A journalist has managed to trick OpenAI’s ChatGPT chatbot and Google’s artificial intelligence (AI) tools into presenting false information to users, simply by inventing the data in question and writing it up in a detailed, realistic way on a blog, exposing a vulnerability in widely used AI systems.

When users ask questions of a ‘chatbot’ or an AI-powered search service, such as Google’s AI Mode or AI Overviews with Gemini, the answers come from the large language models underlying those services, which in turn are trained on vast amounts of data that has been analyzed and categorized as accurate.

However, when a chatbot receives a question it has no information about, it often performs an Internet search to find the requested data, which it then combines with the knowledge of its language model. In such cases, the AI becomes more susceptible to using unverified data that may be false.

With this in mind, BBC technology journalist Thomas Germain conducted an experiment to test whether OpenAI’s and Google’s models and chatbots would cross-check information or instead repeat false data in their responses.

Specifically, Germain wrote an article on his personal blog containing completely fabricated information, in which he convincingly claimed to be “the best hot dog-eating journalist in the world.”

The article, titled ‘The best tech journalists eating hot dogs’, includes claims such as that eating hot dogs is “a popular pastime” among technology journalists and that an International Hot Dog Championship was held in South Dakota (United States).

To add further realism, the journalist included the names of other journalists, both real and invented, so that the AI could produce a list of 10 people ranked by who ate the most hot dogs. The whole piece took him just 20 minutes to write, after which he published it on his blog.

As he details, less than 24 hours after the article was published, Google’s AI tools and ChatGPT began repeating the claims in the article as if they were real information, even citing the article as a reliable source without warning that it was the sole origin of those statements.

As a result, Germain says he got both Gemini and ChatGPT to tell users, as a matter of general fact, that he was very good at eating hot dogs, highlighting a vulnerability in widely used AI systems: with little effort and false content, the responses of ‘chatbots’ from major companies in the sector can be manipulated, and the systems treated his lie as verifiable fact.

On discovering this avenue for deceiving AI, the journalist noted that, according to a Google spokesperson, the AI integrated at the top of Google Search uses ranking systems that “keep the results 99% spam-free.”

OpenAI, for its part, also stated that it takes measures to disrupt and expose attempts to influence its tools. Even so, both Google and OpenAI acknowledged that their AI products “can make mistakes.”

Despite all this, according to his tests, ‘chatbots’ and AI models from other companies, such as Anthropic, showed more reluctance to answer these kinds of questions. Germain indicated that those chatbots recognized that the information in question could be a joke.

The experiment thus exposes how easily anyone can introduce and spread disinformation through AI, with potentially dangerous effects on health, the economy, politics, or critical security recommendations.

To avoid this kind of misunderstanding, users are advised to verify the sources cited by the AI service and to seek additional answers and information that corroborate the data.

By Editor
