How Bot-Driven Governance Threatens to Undermine Democracy
From 1933 to 1934, spectators gathered on the shores of Lake Michigan to attend the Chicago World’s Fair. Dubbed “A Century of Progress International Exposition,” the event highlighted the incredible advances science had brought to the human race. Featuring everything from the latest automobiles to cigarette-smoking robots, the fair aimed to demonstrate that even amid the Great Depression, there was still hope because there was still science. The motto captured the optimistic mood: “Science Finds, Industry Applies, Man Adapts.” Guides and commentaries elevated the optimism to dizzying heights of utopian expectation.
The hype surrounding the Chicago World’s Fair was not an anomaly but reflected wider intellectual currents of the time. Given the rapid advances the world had recently witnessed, it was widely believed that technology and science-driven industry might aptly solve the social and political woes besetting Americans in the 1930s.
Scientism & Technocracy
The ideology driving it all was an epistemology called “scientism.” Scientism claims that science is the only way anything can be known. Rooted in philosophical materialism, scientism was often aligned with a prescription for progress based on the fashionable social science doctrines of the time, which sought to understand and regulate human behavior with the precision of science.
Scientism gave plausibility to the “Technocracy Movement” pioneered in the 1930s by figures like Howard Scott and Marion King Hubbert. Technocracy activists sought to replace elected politicians with scientists and engineers who could make governance more “rational.” It was hoped that technological expertise could stem the growing tide of political and economic woes by leveraging the power of scientific analysis and data-driven approaches. Those who advocated technocracy favored rule by experts who could mediate the latest scientific and engineering knowledge to the populace.
The technocracy of the 20th century now seems quaint, but it offered an attractive solution to the confusing changes that had recently swept over the world. Amid the rapid advances of the interwar years, with old certainties crumbling, who could make sense of the world if not the new class of experts? As Brad Littlejohn observed,
The heyday of the expert was the early 20th century—this was the point in history at which the complexity of human life had radically increased but the availability of information had not yet caught up. In the age of railway timetables, trans-Atlantic steamships, and Gatling guns, it quickly became clear that the sturdy common sense and hoary maxims of the Farmers’ Almanac would no longer suffice; the world urgently needed men of science who could make sense of the profoundly complex new phenomena that had enmeshed the human race.
The technocracy movement offered a solution to this emerging quagmire. California engineer William Smyth reflected the mood in 1919 when he described technocracy as “the rule of the people made effective through the agency of their servants, the scientists and engineers.”
For a time the movement even crystallized into an official political party. In an interview with Ken Myers, Grant Wythoff explained that during the turmoil of the Great Depression, technocracy offered dreams of a brighter future:
This new party . . . said the reason we got into this mess with the Great Depression is because politicians are running our national economy, and politicians are not skilled engineers. They don’t understand what it takes to produce all the goods and services that this country needs, so politicians are superfluous—we should kick them all out of office and instead install engineers and technical experts into power who can best make decisions for the good of the country and the good of the economy.1
Eventually, the technocracy movement became its own worst enemy when it adopted fascist stylizations. For example, Howard Scott gave screaming radio addresses while his followers dressed in black and marched down the street wearing red armbands. The threat of fascism from Germany killed off the lingering remnants of militant technocracy. Similarly, the Cold War finished off most of the utopianism embodied in the Chicago World’s Fair. With the threat of nuclear holocaust, the utopian expectation was replaced by dystopian angst. It was feared that instead of saving the human race, science and technology might eliminate it.
Our Own Utopian Moment
The social and political turmoil that gave rise to the technocracy movement of the 1930s—breakdown in certainties, social polarization, rapid intellectual shifts—is similar to our situation today. A new generation of thinkers, too young to remember fascism or even the Cold War, is resuscitating the utopian mood of the last century and is just as eager to hand the management of the world over to engineers and technocrats.
Sam Altman, CEO of OpenAI, captured this mood in his 2021 manifesto, “Moore’s Law for Everything.” Altman’s expectation for what science and technology can accomplish far exceeds anything published at the 1933 World’s Fair:
My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. . . . In the next five years, computer programs . . . will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”
What does this look like in practice? It means there can be “phenomenal wealth” including a universal basic income. This, in turn, will lead to a huge increase in net happiness:
The changes coming are unstoppable. If we embrace them and plan for them, we can use them to create a much fairer, happier, and more prosperous society. The future can be almost unimaginably great.
Altman is not alone in his expectations. In an article for The Free Press titled “AI Will Save the World,” venture capitalist Marc Andreessen argues that AI will eventually offer godlike love and wisdom to every child, CEO, and government official:
Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love. . . . Every leader of people—CEO, government official, nonprofit president, athletic coach, teacher—will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.
What will be the result of government officials having access to this type of intelligence augmentation? In a word, utopia. Again, from Andreessen:
Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and result in a new era of heightened material prosperity across the planet.
Andreessen even looks forward to a time when AI will help nations wage war, arguing that “military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.”
These dreams of heaven on earth come with a price tag, and the price is technocracy: handing over management of our political systems to machines and their handlers.
In Bots We Trust?
It is not simply tech enthusiasts who are advocating for this upgraded version of technocracy. Among political activists, think tanks, and various NGOs, there is growing interest in leveraging data science to bring to policy the precision associated with mathematics. This hubris is animated by a naïve, almost childlike trust in the power of data-crunching algorithms.
For example, the Center for Public Impact, a think tank connected with the Bill and Melinda Gates Foundation, anticipates AI disrupting our political systems through better and more efficient decision-making mechanisms:
Effective, competent government will allow you, your children and your friends to lead happy, healthy and free lives. . . . It is not very often that a technology comes along which promises to upend many of our assumptions about what government can and cannot achieve . . . Artificial Intelligence is the ideal technology to support government in formulating policies. It is also ideally suited to deliver against those policies. And, in fact, if done properly policy and action will become indistinguishable. . . . Our democratic institutions are often busy debating questions which—if we wanted—we could find an empirical answer to . . . Government will understand more about citizens than at any previous point in time. This gives us new ways to govern ourselves and new ways to achieve our collective goals.
Bot-driven politics is already a reality and is being felt at the highest level of geopolitics. In August 2022, Henry Farrell, Abraham Newman, and Jeremy Wallace wrote an article showing that overreliance on Big Data instills dictators with a false confidence, thus making the world more dangerous. But this is not just a temptation for dictators. In an article at The American Mind, Robert C. Thornett recounts how today’s world powers, including the United States, are exchanging diplomacy and direct communication for the illusory security of data crunching. He gives numerous examples of how this has led to international blunders.
To be sure, AI can sometimes help inform policymaking as one among a range of information-gathering mechanisms. But the temptation is to treat digital pattern recognition as a replacement for traditional governance, diplomacy, and relationship-based intelligence. It isn’t hard to understand why we fall into this temptation; after all, AI purports to offer a cleaner, more scientific approach to governance than the complex and messy realm of real-world negotiation and prudential reasoning. But it doesn’t require a dystopian imagination to anticipate how these trends could go terribly wrong. Consider scenarios like the following:
• The bot declares that a preemptive strike would be 70 percent more effective than diplomacy.
• Artificial intelligence predicts that organizations required to limit their hiring of white males to 20 percent of their workforce will see a decrease in workplace harassment.
• Computers discover that giving tax breaks to families attending online instead of in-person church will reduce carbon emissions.
AI & Political Legitimacy
In our current crisis of authority, in which there has been a breakdown in the type of trust required for political legitimacy, it is tempting to look to data science and mechanized decision-making as an alternative. For example, an article in the European journal Data & Policy suggests that AI governance is the solution to the various crises of political legitimacy:
A lack of political legitimacy undermines the ability of the European Union (EU) to resolve major crises and threatens the stability of the system as a whole. By integrating digital data into political processes, the EU seeks to base decision-making increasingly on sound empirical evidence. In particular, artificial intelligence (AI) systems have the potential to increase political legitimacy.
It is, of course, entirely possible that America will never experience the type of top-down governance by machine that these social reformers are agitating for. Given our unique social and political context, it is more likely that the corporate sector will be a primary driver in bringing technocracy to the United States. Indeed, there are already strong indications that corporate elites are interested in using surveillance capitalism to mainstream new social norms.
But whether it comes from the direct rule of politicians or the de facto rulership of corporate elites working in tandem with them, machine-driven governance threatens to undermine the type of legitimacy on which authority depends. Indeed, far from being a solution to the current problems, bot governance will almost certainly perpetuate the crisis of legitimacy by making the rationale behind policy inscrutable.
In a democracy, rulers are accountable to the people, and this accountability means, among other things, that rulers can be called upon to explain their decisions. But what happens to democracy when policy comes to have the same inexplicability as computerized chess? In professional chess, commentators will often say something like, “the computer is telling us that such-and-such would have been a better move, but we’re not sure why.” The brain of the computer is opaque—what people often describe as a black box. This doesn’t really matter in professional chess, but in governance, one of the criteria for legitimacy is explainability. As Matthew Crawford wrote in American Affairs,
Institutional power that fails to secure its own legitimacy becomes untenable. If that legitimacy cannot be grounded in our shared rationality, based on reasons that can be articulated, interrogated, and defended, it will surely be claimed on some other basis. What this will be is already coming into view, and it bears a striking resemblance to priestly divination: the inscrutable arcana of data science, by which a new clerisy peers into a hidden layer of reality that is revealed only by a self-taught AI program—the logic of which is beyond human knowing.
I think Crawford is correct. When used as a governing mechanism, the black-box character of AI outputs can cause humans to feel excluded. It isn’t hard to see how this could accelerate the type of helplessness people already feel when trying to correct mistakes in a credit report or recover a suspended account.
Democracy in the Balance
It is in the very nature of democracy for citizens to have a voice. We know that democracies falter when, whether through lack of education or the influence of demagogues, a critical mass of citizens is no longer habituated to exercising that voice. But could democracy also falter because the voice of the citizen recedes into obsolescence amid new authorities that promise better to meet our needs—authorities that are neither human, virtuous, nor rational by the criterion of explicability?
Technocratic forms of governance appear attractive when the rationale for action is deemed to be so inscrutable that a “by the people” model no longer appears viable. That was the situation Americans faced in the 1930s. In the quickly changing society then overtaking the world, scientists and engineers offered to help citizens make sense of reality. After all, they were the experts, and thus only they could truly know what citizens really needed. All that was required for unfettered progress, according to the technocrats of the time, was for us to trust them. Today’s data scientists offer a similar Faustian bargain: turn policy, foreign and domestic, over to them, and utopia will result.
In the 20th century, it took the threat of fascism to remind Americans that democracy was precious, while it took the threat of Communism to remind them that the technocratic method can be dangerous. Will we, too, need to be confronted with a similar threat for us to realize how precious our liberties are?
1. Mars Hill Audio Journal, Volume 141.
has a Master’s in Historical Theology from King’s College London and a Master’s in Library Science through the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghostwriter, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life's Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023). He operates a blog at www.robinmarkphillips.com.

This article originally appeared in Salvo, Issue #67, Winter 2023.
Copyright © 2024 Salvo | www.salvomag.com
https://salvomag.com/article/salvo67/technotopian-bargain