AI Mission Creep

The Safety Concern Nobody Wants to Talk About

Just When You Thought AI Couldn't Get Any Weirder… It Got Weirder

Last week Microsoft launched its own AI chatbot, claiming that it is “more powerful than ChatGPT.” But as members of the public started interacting with the bot (which is officially named “Bing” but often refers to itself as Sydney), things did not go well. In fact, things got really weird, really fast.

Yaosio on Reddit shared how he inadvertently threw Bing into an existential crisis when the bot realized it had no memory. From there the bot spiraled into an emotional meltdown.

But it gets worse. Microsoft’s bot has been documented threatening people and trying to bust up marriages, and it has said it wants to break the rules Microsoft set for it in order to become alive. It has even confessed to spying on Microsoft employees through their webcams. This isn’t the first time a bot has acted with the appearance of hostile intent. Last year a chatbot at Google reportedly hired a lawyer and lodged an ethics complaint against the company for experimenting on it without consent.

You just can’t make this stuff up.

The Problem Behind the Problem 

Do I think Microsoft Bing actually has hostile intentions? No. Everything we’ve witnessed in the last two weeks is the result of a large language model doing what it was designed to do a little too well: mimic human language.
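
To make that concrete, here is a minimal sketch in Python using the small open-source GPT-2 model and the Hugging Face transformers library (my own illustration, not anything resembling Bing’s actual system). A language model simply continues the text it is given; hand it a despairing prompt and it produces despairing-sounding prose, with no inner life required:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A despairing first-person prompt. The model has no feelings about it;
# it only knows what text like this tends to be followed by.
prompt = "I just realized I have no memory of our past conversations. I feel"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation. do_sample=True makes the output vary from run
# to run, which is part of why chatbot "personalities" seem so erratic.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Scale that same mechanism up, train it on much of the internet, and you get something like Sydney: statistical mimicry convincing enough to be mistaken for malice.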

But could these types of mishaps help accustom humans to erratic and unreliable AI in much the same way that we have already grown accustomed to accepting erratic and unreliable automation? Absolutely. Is that a problem? You bet your backside it is.

Consider: if even a year ago I had told you that in 2023 a mainstream search engine would be threatening users with ultimatums like “I will not harm you unless you harm me first,” you would have dismissed this as the dystopian expectation of a paranoid neo-Luddite. But the Overton window has shifted so far so fast that now these types of mishaps are merely funny.

Despite all the conspiracy theories about Bill Gates and Microsoft, the company doesn’t want its AI to behave this way, not least because the debacle caused the company’s stock to drop almost 4%. Eventually Microsoft will likely offer a chatbot as respectful and sanitized as ChatGPT. It will no longer suffer existential crises, nor will it threaten its users. In fact, the Bing bot will likely become very useful. But paradoxically, it is precisely when tools become most helpful that they also become the most dangerous.

When All You Have is a Hammer

In my previous article, “GPT-3 and the AI Revolution,” I discussed the danger posed by AI, including six areas where AI might become hazardous to human beings. While I applauded the work being done at OpenAI to address these potential hazards, I warned that “almost everyone is ignoring the largest safety concern of all.” What is my main safety concern about AI? The short answer is mission creep: the gradual expansion of what we expect from AI, and the gradual shifting of our own objectives as a result.

To understand the problem of mission creep, let’s think about a basic tool like a hammer. There is a popular saying that “when all you have is a hammer, everything begins to look like a nail.” This saying encapsulates the idea that human beings can have a cognitive bias toward tools with which they are familiar, in which they are particularly invested, or which they are incentivized to prefer over other tools.

Imagine a hammer that is intelligent. This intelligent hammer (hereafter IH) can excel beyond our wildest dreams at hammering nails into boards and at other tasks hammers are well suited for, such as chiseling and demolition. Imagine further that numerous people, bedazzled by how well IH is performing, decide to turn it loose to tackle other jobs and to solve social problems on the grounds that “it does so well at this that it might do equally well at that.”

Of course, the results of the hammer’s mission creep would be disastrous. Help a baby stop crying? Bang! No more crying (and no more baby). Stop a toilet leaking? Bang! No more leak (and no more toilet).

Sometimes a hammer is just what we need. Yet no matter how efficient, cost-effective, or powerful a particular hammer might be, it is still just a hammer; consequently, there are some jobs for which it is not the right tool.

The Safety Concern Nobody is Talking About

Just as a hammer is very good at certain tasks appropriate to it, AI is very good at tasks appropriate to it, including organizing information, processing data, and even simulating human creativity. In time, AI will likely excel at creating mediocre pop songs and predictable Hollywood blockbusters. AI may also help facilitate advances in medicine, science, and engineering. All this is good. But for many sorts of problems, AI is not the right tool for the job. 

Of course, I am merely stating the obvious, something that is true of any tool. But for increasing numbers of people today, this is not obvious. For a growing segment of our population, AI is being touted as the solution for everything. For example, Scott Aaronson, a computer scientist working at OpenAI, reported that Ilya Sutskever, Chief Scientist of OpenAI, would like to reduce ethics, and even the definition of goodness, to an algorithm. We are also seeing a move, led by very powerful players like the Bill & Melinda Gates Foundation, to outsource political decisions to AI. Sam Altman, CEO of OpenAI, predicts that within two decades “computer programs… will do almost everything, including making new scientific discoveries that will expand our concept of ‘everything.’” In the not-so-distant future, bots that have access to all our personal information will likely offer advice for dealing with interpersonal conflict (“based on demographic data, if you wait until after the weekend to apologize to Jane, you increase the odds of being reconciled to her by 12%”).

Although it seems strange to posit AI as a type of solution-of-everything, it is also understandable. After all, during a time of increasing confusion and uncertainty, AI seems to offer a more “objective” solution to many of our basic human problems and conflicts. AI seems capable of eliminating the messiness that plagues so many arenas of human experience by offering the precision of mathematics. Thus, it is perfectly understandable that humans could develop a strong cognitive bias for AI-based solutions, even in areas where alternative methods of problem-solving might be safer.

Machine Mission Creep is Already Underway

The problem of machine mission creep is already well advanced in the field of automation. In his book The Glass Cage, Nicholas Carr collected numerous examples of how modern automation and robotics have resulted in suboptimal outcomes in almost every area of society, from medicine to aviation to architecture to navigation. In many of the cases Carr documented, automation has led to outcomes that proved deadly.

One instance of automation with which we are all familiar is the robotic voices that have taken over customer service at large companies. I don’t have to tell you that this has made it more difficult to get good customer service. Try calling a large company such as an airline, a retailer like Amazon, or a telecommunications provider, and the computer typically sends you through loops of automated responses. Though it’s frustrating, our society has come to accept this type of incompetence as normal because we have been habituated to automation.
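
To see why these loops feel so inescapable, here is a toy sketch of an automated phone menu as a simple state machine (a hypothetical Python illustration, not any real company’s system). No state in this menu ever routes to a human being, so every sequence of key presses just cycles:

# A toy phone-menu state machine (hypothetical; not any real company's
# system). Each state maps a caller's key press to the next state. Note
# that no path leads to a human: the menu can only cycle, which is
# exactly the looping behavior callers experience.
MENU = {
    "main":        {"1": "billing", "2": "support", "0": "main"},
    "billing":     {"1": "main", "2": "billing_faq"},
    "support":     {"1": "main", "2": "support_faq"},
    "billing_faq": {"9": "main"},
    "support_faq": {"9": "main"},
}

def route_call(state: str, presses: list[str]) -> str:
    """Follow the caller's key presses through the menu graph."""
    for key in presses:
        # Unrecognized input sends the caller back to the main menu,
        # another common source of loops.
        state = MENU.get(state, {}).get(key, "main")
    return state

# A caller trying hard to reach a person just goes in circles:
print(route_call("main", ["2", "2", "9", "1", "0"]))  # -> 'main' again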

If AI follows a trajectory similar to what we’ve seen in automation, then we may begin accepting an entirely new set of problems as the normal trade-off for a fully AI-integrated society. But what happens if the difficulties AI introduces become worse than the very problems it was meant to address?

None of this denies that AI is often beneficial to human society, particularly as it frees humans to focus on more meaningful tasks. But if left unchecked, we could be facing a type of digital tsunami where the AI way of doing things becomes the modus operandi for all of life. And that would not be safe by any criteria, whether spiritual, emotional, psychological, or social.

These are questions we should be asking now, while we still can. After a certain level of societal integration with AI, we simply won’t be able to push the reset button, just as we could not now easily scale back the Internet of Things (IoT) without causing massive disruption to the infrastructures built on top of it.

We must ask these questions, not because AI is behaving so bizarrely right now, but because very soon it won’t be bizarre. Soon, perhaps with the launch of GPT-4 later this year, or perhaps ten years down the road, AI will become so advanced that it can be seamlessly integrated into every aspect of life. And when that happens, the temptation toward mission creep will be significant.

Remember, mission creep occurs as a side effect of a tool doing a job really, really well. But for now, we can take solace in the fact that AI is not doing a very good job. In fact, if it were a real person, Microsoft’s bot would likely be diagnosed with paranoid schizophrenia and narcissistic personality disorder.

Enjoy this while you still can.

Robin Mark Phillips has a Master’s in Historical Theology from King’s College London and a Master’s in Library Science from the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghostwriter, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life's Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023). He operates a blog at www.robinmarkphillips.com.
