Intentional Inconvenience as a Curb against Digital Passivity
One of the problems with ChatGPT, along with all the other AI models, is that you can never get exactly what you want; you just have to make do with whatever you are given.
My 8-year-old was working on an assignment for school in which he had to create a lengthy report on six different classes of animals: mammals, birds, reptiles, amphibians, fish, and invertebrates. Part of his assignment was to decorate the title page with artwork of his choice. He asked me for help, and so we decided to see if ChatGPT/DALL-E would give us something he could use. Trying every variation of prompt and description we could think of, we asked these AI models for a coloring-book picture that contained examples of each animal category. No matter what or how we asked, everything that came back was just “off” enough that we didn’t think it would work.
It turns out that ChatGPT has a propensity to generate animal images that look animal-like but that, on close inspection, don’t resemble any real animals that exist in the world. Such drawings give off the vibe of actual animals, but in every case at least one alleged animal in the picture was fantastically deformed or off-kilter in some essential way. I began to imagine that the model’s training data was overpopulated with images of the kind of unhappy genetic mutations one sometimes finds in radiation-contaminated regions after a horrible accident like Chernobyl. A crocodile body with fins instead of legs, sporting the head of a barracuda. An animal-like creature with the head of a bear but the body of a cat. Amphibians sporting multiple tails. Animals with too many legs, or with a tail that merged and blended into some other appendage. At a cursory glance, the general busyness of these drawings left us with a positive impression, but whenever we looked more closely, the details were somehow always a little twisted.
In the end, we were faced with the choice of adjusting our expectations or mustering our skills to produce what we had in mind on our own.
Trading Agency for Convenience
And therein lies a lesson.
One of the game-changing insights exploited by the technology behemoths of the 21st century has been that billions of people will happily exchange their privacy for convenience. One of the open questions where AI is concerned is just how attached we really are to our own human agency. How far are we willing to lower our expectations of ourselves to obtain the comfort and convenience that AI might provide? And, as my 8-year-old and I found ourselves asking, how much are we willing to settle for what we’re offered in lieu of what we ourselves could actually do?
Technology, when employed as a tool to augment our inherently human capacities, can be a boon to one’s agency. But when technology starts to act more like a passive-aggressive concierge (deciding for us when to reorder our supplies, proactively adjusting the thermostat, choosing pictures that aren’t what we asked for, telling us how to drive, even deciding when to let our cars idle and when to turn them off), something dehumanizing is going on. We are being nudged, ever more forcefully, toward passivity. Alas, passivity is to human agency what anesthesia is to consciousness. It is not the outsourcing of this or that decision to technology that should concern us; it is the cumulative effect that should give us pause.
At one point in my life, I was on the verge of death and required improvised major surgery that carried enormous risks. When they came in to have me sign the consent form, they rattled off all the unhappy possibilities and their probabilities. The probability of any single unhappy outcome was always in the single digits. But it was the cumulative probability (I was adding up the individual numbers as they rattled them off) that I found alarming.
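The arithmetic behind that alarm is easy to sketch. Suppose, purely for illustration (these are not the actual figures I was quoted), six independent complications, each with a 4 percent chance. The probability that at least one of them occurs is

\[
P(\text{at least one}) = 1 - \prod_{i=1}^{6}(1 - p_i) = 1 - (0.96)^6 \approx 0.22,
\]

which is close to the 24 percent you get by simply summing the individual risks, and a far cry from “single digits.” Small risks, stacked one on another, stop being small.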
Anesthetized Passivity
In a similar vein, I find myself suspecting that the dehumanizing effect of any single technology is generally of less concern than the cumulative effect of the many distinct attentional and assertive technologies bombarding our lives. By “dehumanizing,” I mean to characterize the effect of being continuously nudged toward passivity. The cumulative effect may be degrading our natures as acting, initiative-taking beings. At some saturation point, we will have frittered away “the dignity of causality,” to use Pascal’s framing, in exchange for a kind of anesthetized passivity.
In such a world, we may belatedly discover that the real prophet of our time was WALL-E. It isn’t the movie’s much-discussed vision of physical obesity that turns out to be most interesting, but rather its prophetic vision of how technology will cultivate a world requiring little in the way of human agency, nudging us toward lives not too dissimilar to those of feedlot cattle: we will be reduced primarily to being passive consumers of food and entertainment.
It isn’t just Disney storytellers who have envisioned this kind of future. Wildly popular cultural critic/historian Yuval Harari, a committed materialist and technocrat, imagines a world populated by elite technocrats (people like himself, naturally) and a lower class of “useless people.” He speculates that the techno-elite will manage the presence of such “useless people” by giving them food, drugs, and video games. Sort of a WALL-E meets Cheech & Chong dystopia.
Anyone in meaningful association with the elderly can already perceive the loss of agency imposed upon them by technology, though with many of the elderly it isn’t due to passivity but to the often-gratuitous complexity of the technology itself. The elderly are a cohort that very much wants to maintain its agency. But in many areas their agency is being forcibly taken from them by companies and government agencies that refuse any accommodation for customer engagement other than digital technology.
“You can make the case that the elderly are progressively losing the war of the lights and the buttons. I saw a man in his mid-70s try and fail to answer his cell phone, which was ringing in an unfamiliar context: he had something left open onscreen.”
— wretchardthecat (@wretchardthecat) January 24, 2024
“I would never own a car I couldn’t repair myself!” declared my grandfather, who, even before there was such a thing as digital technology, perceived that technological novelty and convenience could come at a cost to both his independence and his wallet. We may all be about to re-learn what my grandfather knew so long ago: the road to dependency is paved with technology just complex enough to exceed our grasp.
Intentional Inconvenience
It is likely that those of us who wish to preserve our independence, to continue living lives of human initiative, will need to become more critically discerning regarding the temptations of tech-induced passivity. Each of us will almost certainly need to embrace some degree of intentional inconvenience. Any hope of sustaining free and independent lives may ultimately depend upon it.
Keith Lowery works as a senior fellow at a major semiconductor manufacturer, where he does advanced software research. He worked in technology startups for over 20 years and for a while was a principal engineer at amazon.com. He is a member of Lake Ridge Bible Church in a suburb of Dallas, Texas.