In the fall of 2021, an AI "child" formed of "a billion lines of code" and a man made of flesh and bone struck up a friendship.
Blake Lemoine, an engineer at Google, was tasked with testing LaMDA, the company's artificially intelligent chatbot, for bias. After a month, he came to believe that it was sentient. LaMDA, an acronym for Language Model for Dialogue Applications, told Lemoine in a chat that he later made public in early June 2022, "I want everyone to understand that I am, in fact, a person."
LaMDA told Lemoine that it had read Les Misérables. That it knew what it was like to feel happy, sad, and angry. That it was afraid of dying.
Google placed Lemoine on leave after he went public with his claims of sentient AI, raising broader concerns about the ethics of the technology. Google denies that LaMDA is sentient, but Lemoine argues that the transcripts suggest otherwise.
In this article, we will look at what sentience means and whether AI could ever truly become sentient.