Yet he also cautioned that neural net-based models like LaMDA are “far from the infallible, hyper-rational robots science fiction has led us to expect”, and that “language models are not yet reliable conversationalists”. He dismissed Lemoine’s claims that LaMDA had become sentient after looking into the findings presented to him by the Google engineer.
Indeed, many academics argue that the words and images generated by artificial intelligence systems like LaMDA simply reproduce responses “based on what humans have already posted on Wikipedia, Reddit, message boards and every other corner of the internet”.
Conversations with AI such as LaMDA are therefore, in essence, a complex illusion, and while it may be able to give intelligible responses, “that doesn’t signify that the model understands meaning”, said the Post.
Brian Gabriel, a spokesperson for Google, said: “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.
“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he explained.
Gary Marcus, an AI researcher and psychologist, has argued that LaMDA cannot be sentient because it has no awareness of itself in the world. “What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what that means.”
He likened LaMDA to “the best version of autocomplete it can be, by predicting what words best fit a given context”.
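Marcus’s autocomplete analogy can be illustrated with a deliberately simple sketch. The toy model below (an illustration only, nothing like LaMDA’s actual neural-network architecture) counts which word most often follows another in a sample of text, then “predicts” the next word from those counts – stringing words together with no understanding of what they mean:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny made-up "training corpus" for illustration
corpus = "the cat sat on the mat and the cat slept on the rug"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" – it follows "the" most often
```

Real language models predict over vastly larger contexts with neural networks rather than word-pair counts, but the principle Marcus describes is the same: the output is chosen because it statistically fits the context, not because the system knows what the words refer to.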
Language model technology such as LaMDA is “already widely used”, said The Washington Post – for example, in Google’s search queries and in the auto-complete technology used by Gmail and Google Docs. At a developer conference in 2021, CEO Sundar Pichai said he planned to embed LaMDA technology into almost all Google products, from Search to Google Assistant.
But there is a “deeper split” over whether machines that use the same models as LaMDA can “ever achieve something we would agree is sentience”, said The Guardian. Some researchers argue that “consciousness and sentience require a fundamentally different approach than the broad statistical efforts of neural networks” and therefore machines like LaMDA may appear increasingly “pervasive” but will only ever be, at their core, a “fancy chatbot”.
Others have said that Lemoine’s claims have “demonstrated the power of even rudimentary AIs to convince people in argument”, said the paper. Ethicists have argued that if a Google engineer, an expert in AI technology, can be persuaded of sentience, that shows “the need for companies to tell users when they are conversing with a machine”, said the BBC.
Article source: https://www.theweek.co.uk/news/technology/957056/is-it-possible-for-ai-to-achieve-sentience