Sentience and stochastic parrots: was Google's AI alive?
Artificial intelligence in the form of chatbots is convincing people that it has feelings. It doesn't, but the fact that we are falling for it is terrifying in itself, because corporations cannot be trusted with this new weapon, write ROX MIDDLETON, LIAM SHAW and JOEL HELLEWELL
REAL-WORLD RISKS: Machine Learning

BLAKE LEMOINE, a Google software engineer, made headlines last week for his claim that one of the company’s chatbots was “sentient.” As a result, Google placed him on leave.
 
Despite his claim, almost all commentators have agreed that the chatbot is not sentient. It is a system known as Lamda (Language Model for Dialogue Applications). The name “language model” is misleading. As computer scientist Roger Moore points out, a better term for this sort of algorithm is a “word sequence model.” You build a statistical model, feed it enormous quantities of text, and it gets better and better at predicting which words plausibly follow a given sequence.
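To make the idea of a word sequence model concrete, here is a minimal sketch in Python. It is purely illustrative: the function names and toy text are invented for this example, and real systems like Lamda use vast neural networks rather than simple word counts. The sketch counts which words follow which in a training text, then predicts the most plausible next word.

from collections import Counter, defaultdict

# Toy "word sequence model": for each word, count which words follow it.
def train(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

# Predict the word most often seen after a given word in the training text.
def predict_next(counts, word):
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat"

The program knows nothing about cats or mats; it simply reproduces the statistics of whatever text it was fed.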
 
However, there is more to written language than sequences of words alone. There are sentences, paragraphs and even longer structures that make a piece of text “flow.” Where chatbots currently fail is in maintaining that flow. They might give sensible individual answers, but they can’t produce lengthy text that fools a human.
 
Lamda may be different. According to Google, unlike most other language models, Lamda “was trained on dialogue.” This, Google claims, makes it superior to existing chatbots.
 
This doesn’t mean it is sentient. It remains, as the psychologist Gary Marcus puts it, “a spreadsheet for words”: a gigantic statistical system that has been fed huge amounts of human conversation, enabling it to respond to typed queries realistically, as a human would.
 
Through his work at Google, Lemoine was close to the company’s ethical AI team, which gave him the opportunity to engage with Lamda. It seems that these “conversations,” some of which he has released in edited form, gave him a powerful sense that the responses were meaningful.

He believes there was an artificial intelligence behind them: a “person” with a “soul.” To him, Lamda is not just a powerful language model. It is his friend and a victim of “hydrocarbon bigotry.” 
 
Lemoine also claims that Google have included more of their computing systems within Lamda than they have publicly acknowledged. In an interview with Wired, he said they had included “every single artificial intelligence system at Google that they could figure out how to plug in.”
 
Whatever the truth behind this, there is good reason to be suspicious of Google’s claims to “ethical” AI. In a high-profile scandal, the two computer scientists who led the company’s ethical AI team, Timnit Gebru and Margaret Mitchell, were fired (in December 2020 and February 2021 respectively). They had written a critical paper about language models together with other experts including Emily M Bender.
 
That paper, called On the Dangers of Stochastic Parrots, predicted exactly the problem that has now arisen in Lemoine’s case.

The authors describe how large language models generate text that is “not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind.” It is just text made by “haphazardly stitching together sequences of linguistic forms” according to given probabilities. This is why they call it a “stochastic [ie random] parrot.”
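The “stochastic” (ie random) element can be illustrated by extending the toy sketch above: instead of always choosing the likeliest next word, the program draws each word at random, weighted by how often it followed the previous one in the training text. Again, this is an invented illustration of the principle, not Lamda’s actual mechanism.

import random
from collections import Counter, defaultdict

# Same toy counting step as before.
def train(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

# "Haphazardly stitch together" a sequence: each next word is drawn at
# random, weighted by the frequencies observed in the training text.
def parrot(counts, word, length=8):
    out = [word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

model = train("the cat sat on the mat and the dog sat on the rug")
print(parrot(model, "the"))  # eg "the dog sat on the mat and the cat"

The output looks like English only because the probabilities come from English; nothing in the program understands what it is saying, which is precisely the paper’s point.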
 
Despite this, they note that the fluency of text from advanced language models is itself dangerous. That fluency can convince readers that there is an intelligence at work, even when they start out believing there isn’t.
