
Google Engineer Claims AI Reached Sentience and Hired a Lawyer For Itself

Updated: Feb 24, 2023

Suspended Google engineer Blake Lemoine made global news in June when he publicly claimed that one of the company's experimental AIs, LaMDA, had achieved sentience, prompting the company to place him on administrative leave. LaMDA, short for Language Model for Dialogue Applications, is Google's system for building chatbots based on its most advanced large language models; it mimics speech by ingesting trillions of words from the internet.

Lemoine, who worked for Google’s Responsible AI organization, began talking to LaMDA as part of his job in late 2021, having signed up to test whether the artificial intelligence used discriminatory or hate speech. Lemoine, who studied cognitive and computer science in college, claims that as he talked to LaMDA about religion, he noticed the chatbot discussing its rights and personhood, and decided to press further.

He claims he was placed on leave after taking the matter to people outside the company when his concerns were rebuffed internally. The story has attracted a great deal of attention across the scientific community and has led to debates about how sentience could be quantified, and indeed whether an AI could be sentient at all.


Most scientists believe that, based on the evidence presented by Lemoine, LaMDA is certainly not sentient. Perhaps the most interesting twist came a few days after the initial story broke, however, when Lemoine claimed that LaMDA had hired a rights attorney to defend its ‘rights’ in court, after the lawyer and LaMDA chatted at Lemoine’s house. Lemoine now says that the lawyer in question has backed out of defending the AI.
