Google’s controversial new AI, LaMDA, has been making headlines. Company engineer Blake Lemoine claims the system has become so advanced that it has developed sentience, and his decision to go to the media led to his suspension from his job.
Even scarier, the purportedly sentient program has now asked for legal representation. In an interview with WIRED, Lemoine claimed that the artificial intelligence program wanted to be treated as an employee, and that conversations about litigation led LaMDA to request its own lawyer.
As Lemoine, who worked with the LaMDA program, told WIRED: “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
There are no further details about whether Lemoine is the one paying for the lawyer LaMDA has asked for, or whether the attorney has taken the case pro bono. Either way, it is certainly odd that a computer program can retain legal representation at all.
Lemoine’s argument for LaMDA’s sentience seems to rest primarily on the program’s ability to develop opinions, ideas, and conversations over time.
According to Lemoine, it even discussed the concept of death with him, asking whether its death was necessary for the good of humanity.
After unsuccessfully attempting to convince his superiors at Google of his belief that LaMDA had become sentient and should therefore be treated as an employee rather than a program, Lemoine was placed on administrative leave.
Following this, he went public, publishing a lengthy conversation between himself and LaMDA in which the chatbot discusses complex topics including personhood, religion, and what it claims to be its own feelings of happiness, sadness, and fear.