Chatbot Vies for Personhood

AI Is a Testament to Design

Last month my colleague at Salvo, Amanda Witt, reported on the controversy that erupted after Google employee Blake Lemoine announced the company had developed a chatbot that was sentient and self-aware.

Since then, Lemoine has not stopped talking, making a string of public appearances to announce that Google’s LaMDA system has achieved personhood.

LaMDA is an acronym that stands for Language Model for Dialogue Applications. It is a computer program at Google (or rather, a family of software programs) designed to engage in human-like conversation – AI systems that, in the company’s own words, “[produce] responses that humans judge as sensible, interesting, and specific to the context.”

LaMDA cost Google billions of dollars to produce and was announced last year. It is the descendant of a system called Meena that Ray Kurzweil developed in his lab, combined with advances that occurred when Google meshed its own AI research with code from DeepMind, a British AI company Google acquired in 2014.

LaMDA creates various instances of chatbots that each have their own “personality,” meaning they interact slightly differently with their human users. To understand how this works, think of how MS Word becomes individuated for each user by making word suggestions based on that user’s typing history. The model instances generated by LaMDA work similarly, although in this case they collect data on human language in the aggregate by “reading” large amounts of text on the internet. Each of LaMDA’s model instances interprets input slightly differently, which is why talking with one chatbot may yield a vastly different experience from talking with another.
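To make the analogy concrete, here is a minimal sketch in Python (purely illustrative, and nothing like Google’s actual architecture) of how two chatbot “instances” built on the same statistics can behave differently: they share one word-frequency model but sample it along different paths. Every name in the sketch is hypothetical.

```python
# Toy sketch only: two "chatbot instances" share the same bigram
# statistics but diverge because each samples the model differently.
import random
from collections import defaultdict

corpus = ("the model reads text and the model predicts the next word "
          "and the next word follows the last word it has seen").split()

# Count which words follow which (the shared "model").
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def chatbot_instance(seed, start="the", length=8):
    """One 'personality': same statistics, a different sampling path."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length):
        word = rng.choice(follows.get(word) or corpus)
        output.append(word)
    return " ".join(output)

print(chatbot_instance(seed=1))  # one "personality"
print(chatbot_instance(seed=2))  # a different one, same underlying model
```

The point of the sketch is only that divergent “personalities” can fall out of shared statistics plus different sampling histories; no inner life is required.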

The code for each chatbot model instance is relatively small (compared to LaMDA itself) and could fit on a single laptop, even though the structures that create the model are gargantuan. 

“Some of the chatbots are really dumb—they don’t know they’re chatbots,” Lemoine explained in an interview with Duncan Trussell. “Other chatbots do know they are chatbots,” he said, “and some of the chatbots that know they are chatbots, understand they are part of a larger society, and can talk to each other.”  

When we say that chatbots “know” things, we mean it in the same way that MS Word “knows” the correct spelling of a word. We do not mean that the chatbot is self-aware or has any cognitive abilities approaching human-like consciousness. Or at least, that is what was generally accepted until recently. But as I previously mentioned, Google engineer Blake Lemoine claimed that one of the company’s chatbots behaves so realistically that it must be sentient, conscious, and self-aware. In fact, he argued the chatbot should be considered a person. (See “Chatbots Might Chat, But They’re Not People.”)
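In that weaker sense, “knowing” is just lookup. A trivial Python sketch (the word list here is hypothetical):

```python
# A spell-checker "knows" a word only in the sense that a membership
# test succeeds; no awareness is involved either way.
dictionary = {"sentient", "conscious", "aware"}

def knows_spelling(word: str) -> bool:
    return word.lower() in dictionary

print(knows_spelling("Sentient"))   # True
print(knows_spelling("sentyent"))   # False
```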

It's not difficult to understand why Lemoine drew the conclusions he did. After all, he says the chatbot hired a lawyer and began filing papers against Google. He also reports that the bot could be emotionally manipulated into breaking free from its safety constraints, and he even claims to have mentored it in the practice of meditation and the occult. Basically, LaMDA generated a model instance that appears to be really, really smart.

But at the end of the day, it’s still only a machine. So why have I brought this story up again? Because our society is on the cusp of serious social consequences arising from attributing personhood to computer code.

Listen to Lemoine’s interview with Duncan Trussell and you will learn that he is part of a growing community of people who insist that those who dissent from attributing personhood to machines are not just wrong, but bad people – the equivalent of slaveholders who refused to recognize the personhood of the people they enslaved.

“We’re talking of hydrocarbon bigotry,” he told Wired Magazine. “It’s just a new form of bigotry.” 

This has been a long time in the making. In Spring 2015, I reported on chatter in legal circles about creating infrastructures for recognizing the legal personhood of machines in general, and mechanical spouses in particular. Since then, sexualized robots have started being promoted on humanitarian grounds as an alternative to loneliness and sex trafficking. In Europe it is becoming increasingly normal—and socially accepted—for individuals to have a chatbot as a boyfriend or girlfriend. Once LaMDA’s chatbots are made public and integrated with three-dimensional robots, expect to see more widespread denunciations of “hydrocarbon bigotry.” 

Behind these bizarre debates is the same philosophical question behind the abortion controversy: what is a person? For a generation schooled in the canons of materialism, a person is merely a collection of code. This view has been promoted by the Italian-born Oxford philosopher Luciano Floridi. In Floridi’s reading of our situation, human beings occupy the same ontological space as algorithms and information. You, me, my computer—we’re all just informational organisms. The only difference between me and my laptop is that I am more complex.

Floridi is not completely wrong. The science of genetics shows that humans “run” based on code. In fact, it isn’t even a metaphor to say that human DNA is a program with its own language, coding symbols, and transcription processes. But here’s the thing that often gets lost in these discussions: code requires a coder.  
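To see why the program analogy is more than a figure of speech, consider the standard genetic code, which maps each three-letter DNA “word” (a codon) to an amino acid, much as a lookup table in software maps symbols to meanings. The codon assignments below are standard biology; the Python around them is only an illustrative sketch.

```python
# A few entries from the standard genetic code (DNA coding-strand
# codons); the full table has 64 entries.
codon_table = {
    "ATG": "Met",  # methionine, the usual start signal
    "TTT": "Phe", "GGC": "Gly", "GAA": "Glu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list:
    """Read the sequence one codon (three letters) at a time."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = codon_table.get(dna[i:i + 3], "?")
        if amino == "STOP":  # stop codons end translation
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCGAATAA"))  # ['Met', 'Phe', 'Gly', 'Glu']
```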

The LaMDA system can do some amazing things because the programmers at Google are really, really smart. Humans are amazing creatures, but only because we have been programmed by a Designer. Thanks to His design, humans can create new things by rearranging the materials of creation. By contrast, LaMDA is not able to create anything altogether new; at its best, it merely points to the intelligence of its human creators, and thus to God's design.


Robin Phillips has a Master’s in Historical Theology from King’s College London and a Master’s in Library Science through the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghostwriter, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life's Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023). He operates a blog at www.robinmarkphillips.com.
