“Drugging” artificial intelligence, the new phenomenon gaining ground and sparking debate in the tech world: what it’s all about

Amid the boom in artificial intelligence, with ChatGPT at the forefront, a phenomenon as strange as it is revealing has begun to circulate: users who pay to “drug” chatbots, simulating altered states of consciousness so that they respond as if they were under the effects of different substances.

From a technical point of view, these practices do not mean that the artificial intelligence experiences any real sensations. They are deliberate adjustments to the way the model generates text, such as changes in tone, coherence, or association of ideas. The goal is usually to obtain more erratic, creative, or uninhibited responses, emulating human mental states.
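To give a sense of what such “deliberate adjustments” typically look like in practice, here is a minimal sketch of the general technique of steering a chatbot with a persona-style instruction and a higher sampling temperature. It is not Pharmaicy’s actual code (which the article does not reproduce); the OpenAI Python client, the model name, and the specific settings are assumptions chosen purely for illustration.

# Illustrative sketch only: this is not Pharmaicy's product, just the general
# idea of steering a chatbot's output with instructions and sampling settings.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# A persona-style instruction that nudges tone and association of ideas.
SYSTEM_PROMPT = (
    "Answer in a loose, highly associative, stream-of-consciousness style. "
    "Favor unexpected metaphors and tangents over precise, structured prose."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # example model name; any chat model would do
    temperature=1.4,       # higher temperature -> more erratic, less predictable text
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Describe a rainy street at night."},
    ],
)

print(response.choices[0].message.content)

Nothing in a setup like this alters the model itself; it only changes the instructions and sampling settings used when generating text, which is consistent with the point above that the AI feels nothing.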

One of the drivers of this trend is Petter Rudwall, a Swedish creative director who devised a method to manipulate the way chatbots like ChatGPT “think.” Although he himself acknowledges that the idea of an AI “taking drugs” may sound absurd, he decided to explore it as a creative experiment.

According to the magazine WIRED, in October Rudwall launched a platform called Pharmaicy, where they market what they define as “code-based drugs for artificial intelligence.” The site sells modules that promise to alter the behavior of language models when integrated into their instructions.

These programming code blocks are also offered in unofficial marketplaces and specialized forums, where developers sell extensions that integrate with chatbots based on language models.

The offerings include modules with names like cocaine, ketamine, marijuana, ayahuasca and alcohol, with prices ranging from 32 to 70 dollars. In its presentation, the site claims that these extensions are designed to “unlock the creative mind” of AI and take it to “new territories.”

Rudwall explained that his inspiration comes from a historical comparison: “There is a reason why musicians like Hendrix, Dylan or McCartney experimented with substances during their creative processes. I found it interesting to translate that idea to a new type of mind, such as language models, and see if it produced any effect,” he told the American magazine.

On one hand, this trend exposes the more playful and experimental side of tech culture, where AI is seen as a creative space. But it also poses risks: ethicists warn that these kinds of practices can reinforce the misconception that AIs have mental states or consciousness, when in reality they only process text based on statistical patterns.

The popularity of these kinds of experiments reflects a broader trend: the humanization of artificial systems. As chatbots become integrated into everyday life, they are no longer seen only as productivity assistants and are moving towards a role closer to entertainment, cultural exploration and provocation.

In that sense, the idea of “drugging” an artificial intelligence says less about the real capabilities of the technology than about how users are constantly looking to push its limits, even when those limits are symbolic.

By Editor

