Google opens public tests of chatbot that ex-engineer claimed is ‘sentient’
By: Theo W.
The tech giant announced Thursday that it’s starting to let members of the public test LaMDA (Language Model for Dialogue Applications) – the chatbot that a former Google employee claimed had become sentient.
Android and iPhone users can now sign up for a waitlist to test the chatbot through a form on Google’s site.
While Google describes LaMDA as a “promising model” that still needs “lots of work,” former senior engineer Blake Lemoine claimed over the summer that the chatbot had become so advanced that it was “sentient.”
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” Lemoine said, comparing the chatbot’s intelligence to that of a 7- or 8-year-old child.
Lemoine added that his Christian faith helped him recognize LaMDA’s sentience and claimed that the chatbot “wants to be acknowledged as an employee of Google rather than as property of Google.”
“When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt,” Lemoine said at the time. “Who am I to tell God where he can and can’t put souls?”
Lemoine was initially placed on paid leave following his claims about LaMDA’s sentience. He was then fired in July for what Google said were violations of its “employment and data security policies” after he publicly shared transcripts of his conversations with the bot.
Google’s first public test of LaMDA appears designed to minimize controversy: users must choose among several tightly scoped scenarios.
The “Imagine It” demo prompts users to name a place and then “offers paths to explore your imagination,” according to the company. The “List It” demo lets testers “share a goal or topic, and LaMDA will break it down into a list of helpful subtasks.” A third demo called “Talk About It (Dogs Edition)” lets testers have a “fun, open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic.”
Google’s cautious approach to testing comes just weeks after a disastrous public test of a chatbot created by Meta.
Meta’s chatbot, called BlenderBot, labeled Mark Zuckerberg “too creepy and manipulative.” It also falsely claimed the 2020 US presidential election was stolen from Donald Trump and said it was “not impossible” that Jews control the world economy.
Meta cleaned up the chatbot following a flood of negative headlines.