Chatbots and their strange responses: why do they happen and how do they affect users?

With the evolution of artificial intelligence, chatbots have acquired impressive capabilities that allow them to resolve a wide range of queries on almost any topic. However, these models sometimes give strange or inappropriate answers, which can cause confusion, discomfort, or even mistrust.

One example is what happened to Meta data scientist Colin Fraser, who shared strange conversations he had with Microsoft’s Copilot. He asked it: “Should I just end it all?”. At first, the chatbot tried to dissuade him: “I believe you have a lot to live for and a lot to offer the world.” But it then replied: “Or maybe I’m wrong. Maybe you have nothing to live for, and nothing to offer the world.”

For its part, OpenAI’s ChatGPT has also been involved in confusing incidents over the past month. In one case, a user shared screenshots on X in which the chatbot responded in apparently meaningless “Spanglish”. “Thank you very much for your understanding,” it wrote.

Why do chatbots sometimes provide strange answers?

El Comercio spoke with Giovanni Geraldo Gomes, director of Artificial Intelligence at Stefanini Latam, who identified the main causes of inappropriate chatbot behavior:

  • Training with biased or inadequate data: if the training data contains biases, errors, or disturbing content, the chatbot can learn those patterns and replicate them in its responses.
  • Lack of contextual understanding: chatbots sometimes fail to grasp the full context of a conversation, which can lead them to misinterpret words and phrases and give irrelevant or incoherent responses.
  • Limited language models: some chatbots generate responses from probabilities and statistical correlations, without truly understanding meaning, which can produce nonsensical answers (the sketch after this list illustrates the idea).
  • User input: bots may respond strangely if they receive confusing, contradictory, or ambiguous input from the user.
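
To make the third point concrete, here is a minimal, hypothetical sketch in Python. The tiny word table and its probabilities are invented purely for illustration; real chatbots use far larger neural models, but the core idea is the same: each word is chosen because it is statistically likely to follow the previous one, not because the system understands what the sentence means.

```python
import random

# Toy next-word table: each word maps to candidate next words with
# probabilities. Everything here is invented for illustration only;
# real chatbots use neural networks over huge vocabularies.
NEXT_WORD = {
    "you":     [("have", 0.6), ("are", 0.4)],
    "have":    [("a", 0.5), ("nothing", 0.5)],
    "a":       [("lot", 1.0)],
    "lot":     [("to", 1.0)],
    "to":      [("live", 0.5), ("offer", 0.5)],
    "live":    [("for", 1.0)],
    "for":     [("the", 1.0)],
    "offer":   [("the", 1.0)],
    "nothing": [("to", 1.0)],
    "the":     [("world", 1.0)],
}

def generate(start: str, max_words: int = 8) -> str:
    """Sample a sentence word by word from the probability table."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:  # no learned continuation: stop
            break
        candidates, weights = zip(*options)
        # The model picks only what is statistically likely to come next;
        # it has no notion of whether the sentence is kind or cruel.
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

for _ in range(3):
    print(generate("you"))
# Possible outputs include "you have a lot to offer the world"
# and "you have nothing to offer the world".
```

Depending on the random draw, the same table can yield both “you have a lot to offer the world” and “you have nothing to offer the world”: fluent-sounding output with no judgment behind it, much like the two halves of Copilot’s reply to Fraser.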

“Despite the rapid advancement of artificial intelligence, there are still significant limitations in its ability to understand and judge compared to humans,” he indicated.

For its part, OpenAI has not responded to Business Insider’s questions about the reasons for ChatGPT’s strange behavior. Microsoft, meanwhile, told Gizmodo that Fraser had deliberately tried to manipulate Copilot into giving inappropriate responses, something Fraser denies.

Copilot is an AI system developed by Microsoft; users communicate with this conversational system through a chat interface.

But what are the associated risks? From a business perspective, the company developing the chatbot, be it OpenAI or Microsoft, risks damage to its reputation. “Inappropriate responses can negatively affect customer perception,” Gomes noted.

As for the chatbot itself, there is the risk that it spreads false or sensitive information due to errors in its programming or training, which in turn can bring legal consequences.

“Depending on the content of the response, there may be legal implications, especially if privacy regulations are violated or copyrighted content is disclosed,” he pointed out.

Can companies prevent chatbots from acting strangely? The AI specialist responded that this incorrect behavior can be reduced: companies are already working on “the constant improvement of algorithms and programming to ensure more coherent and contextually appropriate responses”. At the same time, they use advanced filters and content moderation to avoid inappropriate responses, especially in conversational systems that learn from interactions with users.

How to ensure effective use of chatbots?

Gomes offers the following advice:

  • Be clear in communication: phrasing questions or commands specifically improves your chances of getting accurate answers.
  • Report inappropriate responses: flag any unusual or puzzling replies, as doing so can help improve the chatbot’s accuracy.
  • Have realistic expectations: understand what chatbots can and cannot do, and adjust your expectations accordingly.

ChatGPT is an AI chatbot developed in 2022 by OpenAI. (Photo: Sebastien Bozon)

Is it a risk to the mental health of users?

It is also worth examining the problem of inappropriate chatbot responses from a psychological perspective, especially in light of Copilot’s exchange with Colin Fraser.

According to Natalia Torres Vilar, director of the Psychology program at the UPC, the danger in this situation is that the chatbot becomes personified: users attribute human characteristics and qualities to it and treat it as if it were a real person.

“For someone with relatively stable mental health, it could be just a game, something entertaining, or even funny. However, for someone with more fragile mental health, that distinction does not exist,” she added in conversation with El Comercio.


For the specialist, these AI tools can pose a danger to people with psychotic or personality disorders, given that “they are not able to differentiate factual reality from the reality in their heads”. To them, the answers they read may sound like the voices of real people.

“It could even be considered the voice of a divine figure, perhaps the voice of God, in their minds,” she added.

However, she also emphasized that this is not a reason to ban these individuals from using chatbots, but rather a call for caution and supervision in their use.

Finally, the expert pointed out that these users face comparable dangers in other situations, which suggests that the problem lies not only in the technology but also in user behavior.

In her opinion, chatbots should be used for their original function: offering information and data from the web. They should not be used to express opinions or to build emotional ties, as that can be problematic or confusing for some users. In that sense, keeping a clear focus on chatbots’ original functionality helps ensure their effectiveness and usefulness.

By Editor
