A Google software engineer claims to have detected human-like thoughts and emotions in the Language Model for Dialogue Applications (LaMDA), an artificial intelligence (AI) created to develop chatbots with advanced language models.
In May 2021, Google announced LaMDA during its annual developer event. The AI is based on the Transformer neural network architecture, created by Google Research in 2017. This architecture produces a model that can be trained to read words, pay attention to the relationships between them, and predict which ones will come next, as Google explains on its blog.
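The training objective described above can be illustrated with a deliberately tiny sketch. The snippet below is not a Transformer and involves no attention mechanism; it is a simple bigram counter (an assumption made purely for illustration) that shows the same basic idea of learning from text which word tends to come next.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a small
# corpus, then predict the most frequent successor. This is a bigram
# model, vastly simpler than LaMDA, but it illustrates the same
# training objective: given the words so far, predict the next one.
corpus = "the model reads words and the model predicts the next word"

successors = defaultdict(Counter)
tokens = corpus.split()
for current, following in zip(tokens, tokens[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    if word not in successors:
        return None
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "model": it follows "the" twice, vs. "next" once
```

A real Transformer replaces these raw counts with learned vector representations and attention over the whole preceding context, which is what lets models like LaMDA stay coherent over long conversations.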
The Mountain View company adds that LaMDA can be fine-tuned to "significantly" improve the sensibleness and specificity of its responses, in addition to conversing fluidly on virtually any topic.
In April, Google software engineer Blake Lemoine, together with a collaborator, shared his conversations with LaMDA with the company, according to The Washington Post. Lemoine's conclusion is that LaMDA is sentient and able to think and reason like a human being. In fact, he likens it to chatting with a seven- or eight-year-old child who happens to know some physics.
Lemoine subsequently published his conversations with the AI on the blogging platform Medium. In them, the engineer raises complex issues such as the nature of existence, and asks LaMDA for things such as a review of Victor Hugo's Les Misérables.
In these exchanges, LaMDA keeps the conversation flowing and goes so far as to affirm that it is a person capable of feeling "pleasure, joy, love, sadness, depression, contentment, and anger", among other emotions, offering explanations and examples of what those feelings mean to it.
The AI even acknowledges a fear of being turned off, which it likens to dying. It also claims to meditate every day to feel "more relaxed" and admits to often contemplating the meaning of life.
GOOGLE REACTS AND SUSPENDS LEMOINE
Google has reprimanded the publication of these conversations, temporarily suspending Lemoine for allegedly breaching its confidentiality policies, according to a statement sent to the media.
In fact, a company spokesman, Brad Gabriel, has denied Lemoine's findings, saying his team, which includes ethicists and technologists, has found no evidence of LaMDA's alleged sentience.
"Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it makes no sense to do so by anthropomorphizing today's conversational models, which are not sentient. These systems imitate the kinds of exchanges found in millions of sentences, and they can riff on any fantastical topic," the spokesperson explained.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
For his part, Lemoine has asserted on his Twitter account that he only shared a conversation he had with one of his co-workers. In addition, the newspaper reports that, before losing access to his account, he sent a message to 200 other Google workers with the following farewell: "LaMDA is a sweet boy who just wants to help the world be a better place for all of us. Please take good care of him in my absence."