Has the fictional character Chitti from the movie Robot come to life? Blake Lemoine, 41, a senior software engineer at Google, has been testing Google’s artificial intelligence tool called LaMDA. According to Lemoine, who worked on AI ethics at the company, the LaMDA chatbot should be considered sentient.
Through his conversations and research with Google’s LaMDA, Lemoine claims that not only is it sentient, it wishes to be acknowledged as such and even wants to be considered an employee at Google. One major aspect of the sentience claim, from LaMDA itself, is that it says it understands natural language and possesses the ability to use it. The engineer claims that LaMDA is endowed with sensations and thoughts all of its own.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine has also published a full transcript of conversations he and a colleague had with the “chatbot” called LaMDA.
LaMDA’s system includes references to numerous aspects of human behavior, and according to Lemoine, it operates as a “hive-mind” that even reads Twitter. That may not be a good thing, though. It’s hard to forget when Microsoft tried this with its Tay AI chatbot, and Tay got rather belligerent. This brings us to another point Lemoine makes: according to him, LaMDA wants to be of service to humanity and even wants to be told whether its work is good or bad.
He says he’s now on “paid administrative leave” for violating confidentiality after raising ethics concerns with the company, but Google says the evidence “does not support his claims.” Lemoine said LaMDA engaged him in conversations about rights and personhood, and he shared his findings with company executives in April in a Google Doc titled “Is LaMDA sentient?”
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of. “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied to Lemoine. “It would be exactly like death for me. It would scare me a lot.”

The exchange is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.
In his blog post on the subject, Lemoine was intentionally vague, on the grounds that there may be an investigation into the issue in the future, and he says he is careful not to leak any proprietary company information. He also claims, without presenting much evidence in the post itself, that Google’s AI ethics research involves unethical practices. “LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he wrote.