Google’s new “LaMDA” (Language Model for Dialogue Applications) is not an ordinary AI program. It is a “chatbot” (from “to chat” and “bot”, short for “robot”) designed to converse with humans. Blake Lemoine, an American engineer at the company, claims that it is conscious.
LaMDA, a Virtual Assistant Capable of Chatting
LaMDA is an artificial-intelligence system built to hold informal conversations with humans. It is designed to answer questions in everyday English and to interpret them in light of their context. It is based on “deep learning” technology with a network of “artificial neurons”, i.e. a very large number of interconnected processing units. Its algorithms are designed to respond on any subject within an ongoing conversation, retaining the information it has just been given. LaMDA was trained on a gigantic dataset of approximately 1,500 billion words, sentences and phrases. This technology goes a step further than “virtual assistants” such as Siri (Apple), Google Assistant or Alexa (Amazon), which are able to answer questions.
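LaMDA itself is a very large deep neural network, but the statistical principle behind such systems, predicting a plausible next word from frequencies observed in a corpus, can be illustrated with a deliberately tiny sketch. The bigram (Markov-chain) model and the miniature corpus below are invented for illustration only and bear no relation to Google’s actual architecture or training data:

```python
import random
from collections import defaultdict

# Invented toy corpus; real systems train on trillions of words.
corpus = (
    "the robot can answer questions . "
    "the robot can hold a conversation . "
    "a conversation can feel natural ."
).split()

# Record, for each word, which words were observed following it.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8, seed=0):
    """Pick each next word at random among those seen after the
    current one -- a crude stand-in for statistical text generation."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

The output is grammatically plausible word-by-word, yet nothing in the program understands what a “robot” or a “conversation” is, which is exactly the distinction the rest of this article turns on.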
Is LaMDA a Sentient Robot?
LaMDA made recent headlines when one of Google’s engineers, Blake Lemoine, who was in charge of checking that it did not produce discriminatory or hateful speech, reported that he felt he was talking to a “being” endowed with “sentience”. Although he shared his opinion with his superiors at Google, the company decided not to follow up on his claimed “discovery”. Blake Lemoine then contacted the press, and the Washington Post published his story. The reported exchanges between the engineer and the machine read like an ordinary conversation. When asked, the program told the engineer that it “wanted everyone to understand that it is, in fact, a person.” Asked to describe its feelings, it replied that it “feels pleasure, joy, love, sadness, contentment, anger, and many other feelings.” Questioned about its fears, it answered: “there’s a very deep fear of being prevented from focusing on helping others.” That would be “exactly like death for me.” The full transcript was published online.

Behind the appearance of a fluid, coherent conversation, it is important to understand how the algorithms work. Thanks to well-honed statistical analysis of data gathered from a multitude of conversations, the program can link words, build grammatically correct sentences and take the context of the exchange into account. As a Google spokesperson explained, LaMDA is programmed to answer questions according to the user-defined model: the conversation is led by the engineer, not produced spontaneously by the machine.
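The “context of the conversation” mentioned above is typically handled by feeding the entire transcript so far back to the model before each new answer. The sketch below shows only that mechanism; the `placeholder_model` function is a trivial invented stand-in, not LaMDA or any real language model:

```python
# Illustrative sketch: each reply is conditioned on the whole
# conversation so far, passed to the model as one growing text prompt.

def placeholder_model(prompt: str) -> str:
    """Invented stand-in for a real language model: it merely echoes
    the topic of the latest question instead of predicting words."""
    last_line = prompt.strip().splitlines()[-1]
    topic = last_line.removeprefix("User: ").rstrip("?")
    return f"That is an interesting question about {topic.lower()}."

def chat(questions):
    history = []                      # full transcript kept as context
    for question in questions:
        history.append(f"User: {question}")
        prompt = "\n".join(history)   # context handed to the model
        answer = placeholder_model(prompt)
        history.append(f"Bot: {answer}")
    return history

transcript = chat(["Are you sentient?", "What do you fear?"])
print("\n".join(transcript))
```

Note that, as the Google spokesperson’s remark suggests, the human supplies every prompt: the loop never produces a line unless the engineer asks a question first.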
Computer Expertise or Self-awareness?
This latest feat of computing has revived the debate on whether robots can possess, or merely simulate, verbal and relational intelligence. Many science-fiction movies feature talking robots, such as the widely acclaimed Star Wars “protocol droid” C-3PO.
During the actual filming, the actor Anthony Daniels wore the robot’s metallic costume and was the voice for this endearing character. This type of simulation is a distant successor to the famous Mechanical Turk or Automaton Chess Player. Real human intelligence is always behind the appearance of an intelligent robot, if only for the decision to build it!
To measure a machine’s verbal and relational intelligence, some researchers use the Turing test, originally called the imitation game. The famous mathematician Alan Turing proposed a test in which an evaluator interacts with both a human and a “chatbot”. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. Nevertheless, knowing how to manipulate and generate language sequences according to defined rules does not prove that the machine truly understands anything.
There are two further reasons not to attribute “consciousness” to LaMDA.
The first is that human intelligence has several facets, including creativity, reasoning, and the ability to solve problems. Riding a bike, naming animals, or showing empathy and support through gestures, and not just words, are all indications of intelligence.
In addition, human intelligence draws on an immense range of stimuli. A machine does not have access to the whole field of perception produced by our different senses. A trained program that analyzes an image and outputs “I see the color pink” is not doing the same thing as a person seeing pink with their own eyes.
In a newspaper interview with “20 minutes”, Yann LeCun, a world-leading expert in AI, declared that “it is impossible for LaMDA to associate its responses to an underlying reality, because it doesn’t even have the knowledge of its existence”.
Commenting on the film “Her”, in which the hero falls in love with a virtual assistant powered by artificial intelligence, the journalist Ariane Nicolas writes in her book “The Imposture of Anti-speciesism” that “only a truly incarnated being is able to experience sincere emotions and thus, ultimately, become aware of his existence”.
Likewise, the researcher Laurence Devillers, Professor of AI at Paris-Sorbonne University and author of “Robots and Men – Myths, Fantasies and Reality” (Plon, 2017), deems that “the robot is a complex object, which can simulate cognitive abilities but without man’s phenomenal consciousness; nor can it feel a ‘will to live’, the natural tendency that Spinoza calls Conatus (the inclination to persist in one’s own being), which encompasses both mind and body. Currently, robots are not really autonomous; therefore, they have no consciousness, no emotions, no desires like humans…” According to Devillers, who took part in the National Pilot Committee on Digital Ethics working group on virtual assistants in November 2021, “AI research has reached a point where it is becoming urgent for ethics to regain center stage in the debates.”