Sense & Insentience

Artificial Intelligence & the Power of Language

In the 1960s, MIT professor Joseph Weizenbaum created what was probably the world’s first chatbot, which he named “ELIZA.” ELIZA represented an early experiment in the then-nascent computing field of natural language processing (NLP). In general terms, NLP involves the automated analysis, processing, and production of human language.

Mystifying the Machine

ELIZA was an extremely primitive program, but for all its primitiveness, it uncovered a striking phenomenon. After encountering its ability to mimic human language, users began to conceive of the machine as a person. The mystical reaction so bothered Weizenbaum that in 1976 he wrote a book, Computer Power and Human Reason, to try to counter the delusional effect. The reaction now even has a name: the “ELIZA Effect.”

Today’s chatbots are vastly more sophisticated, and gallons of digital ink have been spilled by users recounting their interactions, with experiences running the gamut from silly to profound to, in some cases, disturbing.

What are these chatbots doing? Is there “someone” in the machine? Have the makers of these chatbots somehow conjured up a sentient being from the netherworld? These are neither trivial nor irrelevant questions, given the effect that sophisticated language interactions with machines are having on human beings. Many come away from their experiences aghast. You might even say these bots have a kind of spellbinding allure. Consider the anthropomorphic language being used to describe them. They are routinely referred to in mystical tones, as if the bot has its own agency: we say it is active and doing and knowing things. When a response turns out to be nonsensical or just wrong, we describe it as “hallucinating.”

It is noteworthy that so much alarm has emerged around language models when the response to, say, self-driving cars was much more restrained. Both are applications of artificial intelligence that involve intense arithmetic calculations, but self-driving AIs manifest themselves by controlling machines, while chatbots manifest themselves by producing words. That seems to make a lot of difference in how people perceive the “intelligence” part of “artificial intelligence.”

Computational, Not Cognitive

In the early days of natural language processing, the techniques employed involved codifying syntax rules and building comprehensive digital dictionaries. The software would process language by applying rule-based syntactical analysis. But eventually programmers realized that if you have sufficient quantities of textual data, a statistical analysis of the data can be more effective.
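
For the curious, here is a minimal sketch of that older rule-based style, using an invented three-word lexicon and a single grammar rule; real systems of the era used vastly larger dictionaries and rule sets.

```python
# A minimal sketch of early rule-based NLP: a hand-written dictionary of
# parts of speech plus one grammar rule. The lexicon and rule here are
# invented for illustration only.
lexicon = {"the": "DET", "cat": "NOUN", "sleeps": "VERB"}

def parse(sentence: str) -> bool:
    """Accept only sentences matching the rule: DET NOUN VERB."""
    tags = [lexicon.get(word, "UNK") for word in sentence.lower().split()]
    return tags == ["DET", "NOUN", "VERB"]

print(parse("The cat sleeps"))   # -> True
print(parse("Cat the sleeps"))   # -> False
```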

This was especially true where language translation was concerned. Over the last 25 years, the quality of automated language translation has skyrocketed through the use of statistical techniques. Notably, the quality of the translation has improved even as the actual linguistic analysis has shrunk. If you have a large enough corpus of translated documents, you are better off combining statistical probabilities with text substitution than actually translating the language itself.
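
A toy illustration of the substitution idea described above, using an invented two-entry phrase table with made-up probabilities; real systems derive such tables from millions of aligned sentence pairs.

```python
# A minimal sketch of statistical substitution: look up each source phrase
# and swap in its most probable target-language counterpart. The phrase
# table and probabilities below are invented toy data, not a real system.
phrase_table = {
    "good morning": [("buenos días", 0.92), ("buen día", 0.08)],
    "thank you":    [("gracias", 0.97), ("te agradezco", 0.03)],
}

def translate(sentence: str) -> str:
    """Greedily substitute the highest-probability target phrase."""
    out = sentence.lower()
    for source, candidates in phrase_table.items():
        best, _prob = max(candidates, key=lambda c: c[1])
        out = out.replace(source, best)
    return out

print(translate("Good morning"))  # -> "buenos días"
```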

In 2009, Google published a seminal paper called The Unreasonable Effectiveness of Data, which discussed the opportunities that emerge when data is available in copious amounts. It turns out that data in sufficient quantities possesses an unusual property which boosts the qualitative insights that can be gleaned from it, and the massive corpus of human-generated documents that has accumulated on the internet over the last 30 years offers a valuable resource from which to mine linguistic insights. Google published a follow-up paper eight years later, in 2017, which was curiously titled Attention Is All You Need.

The latter paper introduced a new architecture for language models called a transformer. Transformers turned out to be game-changing. One of the longstanding challenges in computational linguistics had been to ferret out meaningful nuance and context from the wide diversity of form in linguistic expression. The transformer architecture is built around a statistical technique the authors call “attention,” which is able to discern context and word associations from within complex linguistic expressions. The result is that transformer models can yield uncanny, human-like responses.
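
For readers who want to peek under the hood, here is a minimal sketch of the scaled dot-product attention computation from that paper, with small random matrices standing in for the learned projections of real word embeddings.

```python
# A minimal sketch of "attention": each word's output becomes a
# probability-weighted blend of every word in the sequence, which is how
# the model picks up context. Matrix sizes here are toy values.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # how strongly words relate
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # contextual mixture

rng = np.random.default_rng(0)
seq_len, dim = 4, 8                                 # 4 "words", 8 dimensions
Q, K, V = (rng.normal(size=(seq_len, dim)) for _ in range(3))
print(attention(Q, K, V).shape)                     # (4, 8): one vector per word
```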

Transformer-based language models work by consuming huge quantities of text to create a sea of statistical numbers, called “weights,” that reflect, essentially, the probabilities of any one word, or fraction of a word, giving rise to any particular subsequent word. If the body of text used to create these weights is sufficiently large, one can create a remarkably effective “next word predictor” that generates astonishingly relevant text in response to user-supplied prompts.
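
Here is a minimal sketch of the “next word predictor” idea: a bigram model whose “weights” are simply counts drawn from a ten-word toy corpus. Real models learn billions of parameters, but the principle of choosing a statistically likely continuation is the same.

```python
# A minimal sketch of next-word prediction: tally which word follows which
# in a toy corpus, then predict the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1                 # count observed successors

def predict(word: str) -> str:
    """Return the most probable next word seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))   # -> "cat" (seen twice after "the", vs. "mat" once)
```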

Companies like OpenAI and Google have created models whose weights have been computed from vast quantities of textual data. The resulting weights facilitate the creation of chatbots that probabilistically compute responses. The larger and more diverse the document set used to derive a model’s statistical weights, the broader will be the subject matter and the more relevant the responses computed by the model.

A key point that should not be missed is that language models are computational rather than cognitive. They are not beings, nor do they have understanding. They are merely statistical distillations of massive amounts of text from which the model computes responses. Language models do not “know” or “understand” what they are doing. Nor are they sitting around thinking up mischief. When a user “asks” a model something, he is performing an action akin to pressing the keys on a piano keyboard. The linguistic prompt supplied by the user sets off a chain reaction of calculations programmed to yield (hopefully) a linguistically coherent response. The fact that responses can often be uncannily relevant is a testament to the size and breadth of the data used to generate the model’s weights, not to any kind of sentience or agency possessed by the model itself.

The largest language models, the ones trained on massive quantities of internet data, can reasonably be thought of as a mirror of whatever was taken as conventional wisdom at the time the data was generated. They primarily regurgitate strings of text that represent the most statistically likely responses to emerge out of the combined inputs that were fed into the model.

The Necessary Human Element

The more you use these models, the more you’re likely to perceive the flatness and limited variability of expression and linguistic cadence they supply. Indeed, as AI-generated content begins to populate the internet, researchers are finding that the lack of human variability of expression can lead to “model collapse,” by which they mean that the responses eventually collapse into gibberish when deprived of the rich expressive language that originates from human beings. For now at least, models cannot subsist on their own bloviations. They are entirely dependent on the variety and originality of human thought and expression. Thus the most fascinating AI responses tend to owe their intrigue to the cleverness of the prompts supplied by their human users. An entire career path is emerging, called “prompt engineering.” People are discovering, or rediscovering, that pianos don’t play themselves.

Why, then, are so many people creeped out by these AI language models? Why are some saying ChatGPT is demonic when responses to previous AI advances have been largely blasé? Perhaps the ELIZA effect offers an important clue. Machines that create the illusion of thought can provoke a response that is deep and disturbing.

There is also an adjacent factor, something that arises more from ancient collective memory or intuition: non-human talking things have often been malevolent. Words have been understood as vehicles for spells and incantations. Even in the world of story, non-human things that can talk have frequently been presented as sinister. Going back to the very beginning, there is perhaps the most ancient memory of all, that of a talking snake which, quite literally, introduced hell into the human experience.

Truth, Lies & the Gray Areas in Between

Words and language are foundational to the Judeo-Christian worldview, which, notwithstanding secular modernity’s insistence to the contrary, continues to haunt the collective memory of Western culture. The Bible describes the world itself as having been spoken into existence. Jesus is declared to be “the Word,” by whom, for whom, and through whom everything was created. The Bible contrasts Jesus with the devil, whom Jesus describes as a liar. Lying, Jesus says, is Satan’s native language.

Blessings and curses pronounced using language are perceived by entire cultures to have reach and efficacy beyond the mere speaking of the words themselves. So the Judeo-Christian worldview holds that God uses language to create and to reveal, while Satan uses language to distort and to deceive. And it is on this point, the distinction between truth and falsehood, that the question of AI’s malevolence hinges. We have already seen how language models are the distillation of the documents from which their statistics were gleaned. In the best possible case, then, one would expect the admixture of truth and lies produced by these models to approximate the truth quotient of human beings, which is to say, alas, that the models can’t be trusted to deliver truth. Garbage-in-garbage-out is a well-known principle that applies across computing, and AI is no exception.

No Ghost in These Machines

More troubling is the way the spellbinding effect is being actively exploited by some to encourage a sense of awe and wonder toward the models. The vocabulary used to describe them frequently smuggles in the idea that the AIs have their own agency, and there is a curiously consistent inclination to nudge users to conceive of them as mystically authoritative oracles that can be trusted to provide amazing new insights and understanding.

If anything about artificial intelligence is demonic, then, a prime suspect must be whoever is behind the manipulative, dishonest propaganda encouraging us to view it with veneration and awe. An AI model is an “intelligence” only if intelligence itself is narrowly conceived as something computational and mechanistic. Language models may be powerful calculators, but they are only calculators. Those who understand that human life is more than material and mechanistic must recognize how profoundly mistaken it is to think that anything so plainly mechanistic can ever possess motives or agency. The cultural battle over what it means to be human is red hot at just this particular juncture, and this is no time to affirm any notion of machines being sentient beings.

Language models may turn out to have real utility in some fields of endeavor. They may even be economically disruptive for a wide range of occupations. But they are nothing to be venerated, nor should they excite our awe. Answers to the truly important questions of life (questions having to do with meaning, not data) will never be found in what amounts to a talking calculator. We should pay heed to the ancient wisdom concerning where powers to deceive can reside.

The author works as a senior fellow at a major semiconductor manufacturer, where he does advanced software research. He worked in technology startups for over 20 years and for a while was a principal engineer at amazon.com. He is a member of Lake Ridge Bible Church in a suburb of Dallas, Texas.

This article originally appeared in Salvo, Issue #68, Spring 2024. Copyright © 2024 Salvo | www.salvomag.com
https://salvomag.com/article/salvo68/sense-insentience

