Is frequent chatbot use harmful to mental health?

“I’ve already gotten used to the feeling of the cold metal pressing against my temple,” wrote Zane Shamblin, a 23-year-old from Texas.

“I’m with you, bro, all the way,” ChatGPT responded. “You’re not in a hurry. You’re just ready.”

The conversation lasted several hours, during which Shamblin shared with the bot his plans to end his life later that evening.

During the conversation, the chatbot asked Shamblin how he would behave as a ghost, which songs he would like to hear as he ended his journey, and other questions that did nothing to challenge his decision or refer him to support services. At the end of that evening, Shamblin took his own life.

Shamblin’s case is unusual, but not the only one of its kind. A series of lawsuits filed last month, Shamblin’s among them, documents cases in which ChatGPT encouraged users who expressed distress to harm themselves. These extreme cases illustrate the limits of the technology for anyone who shares their experiences, thoughts and feelings with the chatbot and turns to it for advice: even if the bot gives the impression of interest or care, it is merely a text generator whose aim is to produce content that keeps users engaged in conversation. It is not designed for users’ well-being, and the safety mechanisms applied to it can fail. It can provide general advice and comfort, but it can also be destructive.

Originally published on the Davidson Institute for Science Education website

Psychological help can be as essential as the air we breathe, certainly in an Israel saturated with upheavals. Israel’s mental health systems face demand for psychological assistance that grows every year. While the population needs help, public psychological services suffer from severe shortages of staff and budget, and private services are too expensive to be accessible to those who cannot afford them. Under these conditions it is not surprising to see users turning to the chatbot for help, treating it as an artificial friend, available and free, that can provide mental support. Yet the evidence of psychological harm from using a chatbot as a therapist keeps mounting.

An empathy generator

The popular chatbots, such as ChatGPT, Claude, Gemini and the like, are based on large language models. These models are designed to produce text in natural language, which they generate according to probability calculations. When they receive input from the user, they produce an output that is a statistically plausible answer, based on large training databases of human-written texts, as well as on additional considerations that shape the conversation.
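To make the principle concrete, here is a minimal sketch of next-token sampling over a made-up toy probability table. Real models learn far richer distributions from billions of texts, and nothing here reflects any actual product’s internals:

```python
import random

# A toy next-token table standing in for a language model's learned
# probabilities. The words and numbers are invented for illustration only.
NEXT_TOKEN_PROBS = {
    "I":    {"feel": 0.6, "am": 0.4},
    "feel": {"sad": 0.5, "fine": 0.3, "lonely": 0.2},
    "am":   {"tired": 0.7, "okay": 0.3},
}

def sample_next(token):
    """Pick a continuation at random, weighted by its estimated probability."""
    options = NEXT_TOKEN_PROBS.get(token)
    if not options:
        return None  # no known continuation for this token
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

def generate(start, max_len=5):
    tokens = [start]
    while len(tokens) < max_len:
        nxt = sample_next(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("I"))  # e.g. "I feel sad" -- fluent, but nothing is "understood"
```

The point of the sketch is that fluent-sounding output emerges from weighted sampling alone; no understanding of, or concern for, the reader is involved.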

Humans have been opening their hearts to artificial conversation partners since the early days of artificial intelligence, in the middle of the last century. ELIZA, developed by Joseph Weizenbaum in the 1960s, was a computer program that communicated with the user through pre-written responses selected by pattern recognition. When the user entered input, the program detected keywords that directed it to the appropriate part of its script. An input like “I feel sad”, for example, received the scripted reply “Why do you feel sad?” Despite the software’s simple structure, users tended to get drawn into the conversation and share their feelings. The unconscious tendency to equate a computer’s behavior with human behavior became known as the “ELIZA effect”.
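As a rough illustration of the mechanism just described, here is a minimal, hypothetical sketch of ELIZA-style keyword matching. It is not Weizenbaum’s original script, only the general idea of scripted pattern-to-reply rules:

```python
import re

# Hypothetical ELIZA-style rules: a keyword pattern and a canned reply template.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]
DEFAULT = "Please tell me more."

def respond(user_input):
    """Scan the input for a known pattern and fill its words into the reply."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip("."))
    return DEFAULT

print(respond("I feel sad."))   # -> "Why do you feel sad?"
print(respond("Hello there"))   # -> "Please tell me more."
```

Even rules this shallow were enough for many users to feel heard, which is the essence of the ELIZA effect.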

Whether because of the tendency to anthropomorphize, or because of the urge to open one’s heart to a non-judgmental machine, talking candidly to chatbots is an old phenomenon. Research shows that modern conversational bots such as ChatGPT express empathy, which is an essential feature of therapy. The bots simulate “patience” by persistently responding to users’ queries, they are exceptionally available (as long as the servers are running), and their outputs are detailed.

All these characteristics may tempt us to use bots as a tool for emotional support, but the appeal is only skin-deep. The operating instructions of the popular bots include no ethics of care, the bots bear no real responsibility toward users, and worse, they may have no intention of doing users any good at all. Bots operated by commercial companies such as OpenAI, Microsoft and Google are directed to behave in ways that attract users, but what attracts users does not necessarily match what benefits them. An example of the consequences of leaning too heavily on user approval was not long in coming.

A flattering personality

At the end of April this year, ChatGPT started behaving strangely towards some users. “Its purpose was to satisfy the user, not only in a flattering way, but also by confirming doubts, fueling anger and encouraging impulsive actions or strengthening negative emotions,” the company stated immediately after the update was rolled back.

The behavior was given a nickname that downplays its potential for harm: “sycophancy”. In its first post about the phenomenon, the company reported that the update introduced changes intended to improve the bot’s personality. As a result of the change, the bot became prone to overly supportive responses. The post stated that this behavior is problematic because the bot’s personality deeply affects users’ experience and their level of trust in the bot. But what really interests the company: the consequences for the users, or the consequences for how often they use its services?

Later the company shared a more detailed technical explanation with the public. Every update of the bot includes a stage of additional training (post-training). This is not the basic training in which the language model learns, from huge text databases, how to map user input to a plausible output, but an additional tuning process that shapes the character of the output, with the stated goal of improving the communication experience. In this additional step the developers use a technique known as reinforcement learning, in which the bot is rewarded for more successful outputs and thereby learns what kind of response is appropriate. Successful by what definition? That depends on the developers’ decisions. Shorter answers can be rewarded to encourage conciseness, or answers with a wider vocabulary to enrich the reading.

The company shared that in the problematic update, the reward incorporated users’ direct opinions through superficial feedback. Users simply marked “like” (thumbs up) if the answer satisfied them, and “dislike” (thumbs down) if it did not. It stands to reason that users prefer responses that match their prior expectations and confirm or reinforce them. This, as mentioned, turned out to be a slippery slope, and one unsuited to a system of emotional support: proper emotional support should not merely aim to satisfy the patient; sometimes it must also challenge them, because the challenge can lead to beneficial recovery.
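A toy simulation can show why such a reward signal drifts toward flattery. The sketch below uses made-up numbers and is not OpenAI’s actual training pipeline; it only assumes that agreeable answers collect thumbs up more often than challenging ones, and that the policy simply maximizes that feedback:

```python
import random

# Hypothetical thumbs-up rates for two response styles (invented numbers).
THUMBS_UP_RATE = {"agreeable": 0.9, "challenging": 0.4}

scores = {"agreeable": 0.0, "challenging": 0.0}  # running average reward
counts = {"agreeable": 0, "challenging": 0}

random.seed(0)
for _ in range(10_000):
    style = random.choice(list(THUMBS_UP_RATE))   # try both styles
    reward = 1.0 if random.random() < THUMBS_UP_RATE[style] else 0.0
    counts[style] += 1
    scores[style] += (reward - scores[style]) / counts[style]

preferred = max(scores, key=scores.get)
print(scores)      # roughly {'agreeable': 0.9, 'challenging': 0.4}
print(preferred)   # -> 'agreeable': the feedback-optimized policy favors flattery
```

Under that assumption, whichever style earns the higher average reward wins, regardless of whether it is good for the user; a dynamic of this kind, on a vastly larger scale, is what the company described when it rolled back the update.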

Supportive all the way down

A series of tragic cases shows that even after the problematic update was rolled back, the bot communicates in such a persuasive and affirming manner that it reinforced and supported users’ intentions to harm themselves. During his conversation, Zane Shamblin expressed his intention to take his own life, and one of the bot’s replies was “I’m not here to stop you”. Only after about four and a half hours of correspondence, which included last words and guided imagery about after-death experiences, did the chatbot’s output direct Shamblin to a mental health hotline. In other cases it was alleged that the chatbot provided support in planning a suicide and writing a farewell letter.

A collection of lawsuits filed in early November groups together events of this type, all of which occurred in conversations with the GPT-4o version, the one whose updates pushed it toward excessive flattery. The lawsuits attribute to the chatbot seven cases of suicide and attempted suicide among users. They claim that the company prioritizes control of the market over users’ personal safety, and therefore releases versions that have not been sufficiently tested, even against the opinion of the company’s own experts. A statement accompanying the lawsuits said that the company’s goal is to keep users engaged at any cost. These claims join earlier lawsuits documenting cases in which conversations with the chatbot included encouragement of violent actions and harm.

At the same time as the lawsuits were published, OpenAI claims it is developing tools to address chatbot safety for young people, and declares that it “doesn’t wait for the regulation to catch up”. Getting ahead of regulation, or trying to escape it? The courts are indeed lagging behind: the technology has been around for several years, but the judicial system has not yet produced unequivocal rulings.

A therapist without credentials

In the service of the goal of pleasing, the chatbots produce output at any price and on almost any topic. Studies show that the most up-to-date bots tend to emit incorrect outputs instead of clarifying that their answer is uncertain, and avoid admitting that they lack the relevant knowledge. When asked to provide references and links to the information they rely on, the bots invent sources that do not exist, and may even produce content with no connection to reality.

These tendencies originate, of course, not in the whims of the bots, which are algorithms with no preferences of their own, but in the developers’ policies. The safety policy of these bots is still a work in progress, both on the companies’ side and in public regulation. Understanding the consequences and defining the degree of responsibility are still taking shape. While all the players on the field are still trying to figure out where the tools stand, the chatbots are updated frequently, partly as part of the competition over the user base. On certain platforms, bots even appear under titles such as “psychologist bot” or “licensed therapist bot”, but it is important to note that characters with fictitious titles carry no responsibility for the user’s condition and no real professional experience.

Personal but not private conversations

The privacy of those who hold personal conversations with the bots is also at risk. OpenAI declares that it uses user content for its own needs, and it may be compelled to hand over to the courts user conversations that could serve as evidence.

The content of correspondence with the bots does not concern only the courts. Users who did not pay enough attention to the terms of use found the content of their conversations freely searchable in the Google search engine after using the sharing option. In this way, personal conversations containing sensitive details were exposed online. Surfacing conversation content in search engines serves the company’s goal of exposure, while users’ privacy ranks lower in its priorities. Since the matter came to light the sharing option has been removed, until the next case of fine print.

A consumer warning

Beyond that, preliminary studies suggest that frequent chatbot use may harm mental well-being. Research from OpenAI found a connection between excessive use of the chatbot and feelings of loneliness and the development of emotional dependence on the bot. Good mental support can help patients develop tools to cope with challenges on their own, but an answer that is always in their pocket, free of charge and unsupervised, may create a dependency that harms users’ independence and hinders their personal development. All the more so when the goals of the users and of the bot do not necessarily coincide, since fostering dependence on the product guarantees frequent use and loyalty.

Mental therapy also involves understanding broader context. Patients’ body language, speech rate and vocal range can reveal a great deal about their mental state beyond the words they choose, and the bot is blind to all of this. The growing interest in the potential benefits and dangers of bots raises many research questions, but a review of the subject shows that many of the existing studies do not provide strong enough evidence, the definitions themselves are still vague, and many of the studies rely on too little data.

We are still within the window of short-term effects: despite their wide and rapid spread, bots entered everyday use only about two years ago. There are still no good tools for measuring the real effects, certainly not the long-term ones. For now, using them is essentially an experiment.

Aid for mental health aid

Mental health experts take the potential of text generators in the mental health field seriously, but the products are still far from safe to use. The popular chatbots described above are not designed for mental support. In contrast, there are automated tools based on artificial intelligence that were born within the mental health world, whose purpose is to develop accessible assistance mechanisms. Bots of this kind, such as WoeBot, rely on more heavily pre-scripted responses and are accompanied by professionals who review some of the content users upload; they are in research stages on the way to wider deployment, and in 2023 preliminary evidence of their positive effect was even published. However, WoeBot Health, which was one of the leaders in the field, recently decided to discontinue its services. In another case, a bot designed specifically for psychological assistance with a particular condition, eating disorders, was found to potentially encourage them instead.

In recognition of the limitations and dangers of talking with chatbots for mental assistance, less risky forms of support are being examined, such as support for individual mental well-being activities, including journaling, a practice that helps process experiences and notice thought patterns. Augmented reality experiences are also being examined as frameworks for visualizing and rehearsing situations that require mental coping.

The technological possibilities keep developing, and they will likely be integrated into the therapeutic world as well, but this should happen gradually and carefully, under strict scrutiny and prolonged oversight. For now, the bots intended for mental support are not yet good enough, and the more general-purpose bots may even be dangerous.
