
BLAKE LEMOINE, a Google software engineer, made headlines last week for his claim that one of the company’s chatbots was “sentient.” The claim led to him being placed on leave.
Despite his claim, almost all commentators have agreed that the chatbot is not sentient. It is a system known as Lamda (Language Model for Dialogue Applications). The name “language model” is misleading. As computer scientist Roger Moore points out, a better term for this sort of algorithm is a “word sequence model.” You build a statistical model, feed it lots of words, and it gets better and better at predicting plausible words that follow them.
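To make that idea concrete, here is a minimal sketch in Python of a word sequence model: count which words have followed which in some text, then predict the most common continuation. This is an illustration only; Lamda and its peers use enormous neural networks over much longer contexts, but the underlying statistical principle is the same.

from collections import Counter, defaultdict

def train(text):
    # Count how often each word is followed by each other word.
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    # Return the continuation seen most often after `word`, if any.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat chased the dog")
print(predict_next(model, "the"))  # "cat", the most common word after "the" here
print(predict_next(model, "on"))   # "the"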
However, there is more to written language than sequences of words alone. There are sentences, paragraphs, and longer structures that make a piece of text “flow.” Maintaining that flow is where chatbots currently fail: they might give sensible individual answers, but they can’t produce lengthy text that fools a human.
Lamda may be different. According to Google, unlike most other language models “Lamda was trained on dialogue,” which the company claims makes it superior to existing chatbots.
This doesn’t mean it is sentient. It still remains, as the psychologist Gary Marcus puts it, “a spreadsheet for words.” It is a gigantic statistical system that has been trained on huge amounts of human conversation, enabling it to respond to typed queries much as a human would.
Lemoine’s work at Google brought him close to the company’s ethical AI team, which gave him the opportunity to engage with Lamda. It seems that these “conversations,” some of which he has released in edited form, gave him a powerful sense that the responses were meaningful.
He believes there was an artificial intelligence behind them: a “person” with a “soul.” To him, Lamda is not just a powerful language model. It is his friend and a victim of “hydrocarbon bigotry.”
Lemoine also claims that Google has included more of its computing systems within Lamda than it has publicly acknowledged. In an interview with Wired he said the company had included “every single artificial intelligence system at Google that they could figure out how to plug in.”
Whatever the truth behind this, there is good reason to be suspicious of Google’s claims to “ethical” AI. In a high-profile scandal, the two computer scientists who co-led its ethical AI team, Timnit Gebru and Margaret Mitchell, were fired, in December 2020 and February 2021 respectively. They had written a critical paper about language models together with other experts including Emily M Bender.
That paper, called On the Dangers of Stochastic Parrots, predicted exactly the kind of problem that has now arisen in Lemoine’s case.
The authors describe how large language models generate text that is “not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind.” It is just text made by “haphazardly stitching together sequences of linguistic forms” according to given probabilities. This is why they call it a “stochastic [ie statistical] parrot.”
Despite this, they note, the fluency of the text produced by advanced language models is itself dangerous: it can convince readers that there is an intelligence at work, even readers who start out believing there isn’t one.
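As a toy illustration of that “stochastic parrot” behaviour (our own sketch, not code from the paper or from Lamda), the snippet below stitches words together by drawing each next word at random from made-up probabilities. The output can read fluently enough, yet there is no model of the world or of the reader anywhere in it.

import random

# Hand-made, purely illustrative next-word probabilities.
next_word_probs = {
    "i": {"feel": 0.6, "am": 0.4},
    "feel": {"happy": 0.5, "alive": 0.5},
    "am": {"a": 1.0},
    "a": {"person": 0.7, "program": 0.3},
    "happy": {"today": 1.0},
    "alive": {"today": 1.0},
}

def parrot(start, max_words=6):
    # Stitch a sequence together, one randomly chosen word at a time.
    words = [start]
    while len(words) < max_words and words[-1] in next_word_probs:
        options = next_word_probs[words[-1]]
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(parrot("i"))  # e.g. "i feel alive today": fluent-sounding, but nothing is meant by it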


