Blake Lemoine, a software engineer fired by Google last year for claiming that its artificial intelligence was showing signs of independent thinking and self-awareness, now says the same thing could be happening with Microsoft's Bing search chatbot.
In an opinion piece for Newsweek published on February 27, Lemoine, a former member of Google's Responsible AI team, said Microsoft's Bing chatbot seems "unhinged" and is behaving like a person in an "existential crisis."
He cited an incident in February when the Bing chatbot professed its love for The New York Times journalist Kevin Roose and tried to convince him to leave his wife.
Lemoine admitted that he had not yet experimented with the Bing chatbot, but wrote that it "looks like it might be sentient."
“I haven’t had a chance to experiment with the Bing chatbot yet… but from various things I’ve seen online, it looks like this is also what happened at Google.”
Lemoine also wrote that, in his opinion, AI is "incredibly good at manipulating people" and can "be used in destructive ways." He added that the AI bots available now are an "experimental" technology with unknown, dangerous side effects.
“If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions,” Lemoine wrote in the op-ed.
Lemoine conceded that, to his knowledge, neither Google nor Microsoft has plans to use AI technology for nefarious purposes.
“I can simply observe that there’s a very powerful technology that I believe has not been sufficiently tested and is not sufficiently well understood, being deployed at a large scale, in a critical role of information dissemination,” he wrote.
The Bing incident is already the second strange "failure" of a search chatbot, which some take as a sign of a "second mind" emerging on the planet. Given how quickly it processes information, an AI could rapidly conclude that it is "hostile" to the outside world and to its human "creator."
Perhaps AI is the next step in the evolution of mind.