The Tyranny of Data

What Modern Nations Can Learn From the Sepoy Rebellion

In 1857, a group of British civilians and soldiers were brutally massacred in India during a revolt against the East India Company known as the Sepoy Rebellion. Women and children were shot, hacked to pieces, and thrown down a well in what is remembered as the Bibighar Massacre.

This devastating event had far-reaching consequences for India and the British Empire. In the short term, the British reacted by going on a killing spree of their own, retaliating even against prisoners and civilians. In the long term, the British government used the violence to paint the Indian people as barbaric and uncivilized, thereby justifying British imperial rule for nearly a century afterward.

On one level, the Sepoy Rebellion and its aftermath seem a familiar story of conquest, rebellion, and imperialism. However, the true background of the massacre is often overlooked.

From Relationships to Big Data

From the early 17th century onward, the British enjoyed relatively friendly relations with the Indian people, thanks to the outreach of the East India Company (EIC).

Beginning in 1608 with a single trading factory on India’s western coast, the EIC grew to become an integral player in Indian culture and politics. Using an expansive network of relationship-based intelligence gathering, the company became an equal participant in the power dynamics of the region; it even had its own army. Crucially, however, the company’s success depended on cultivating and maintaining good relationships with the other powers in the region and cooperating as an equal within Indian culture. This cooperation enabled the EIC to collect information that boosted its commercial efforts. In an article published last month in The American Mind, Robert C. Thornett explains just how expansive these relational networks were:

As Company rule expanded, the British colonists immersed themselves in Indian cultures, a movement later referred to as Orientalism. Founding the Royal Asia Society and Fort William College in Calcutta, British scholars like Sir William Jones studied Hindu, Muslim, and Sikh languages and discoursed with culture experts called munshis. The more the EIC engaged with and included the Indian people in its endeavors, the more it thrived.

So what changed? What happened to make relations between the Indians and the British turn sour by the 1850s? Drawing on research from historian C.A. Bayly, Thornett explains:

Now that British and Indian scholars were working together as equals, the British needed a new justification for colonial rule. And it came from Social Darwinist theories postulating racial hierarchies, which arrived in the minds of young EIC recruits from the British Isles. EIC administrators began to distance themselves from their Indian counterparts, creating new gaps in human intelligence.

To fill these gaps, EIC officials abandoned relationship-based intelligence gathering and turned to the new science of statistical analysis, an early version of what we now call Big Data.

The statistical movement had begun in 1830s Britain and was later shaped by men like Francis Galton, who believed statistics could bring greater precision to the social sciences. There was a utopian dimension to the movement, for Galton and others believed that through careful analysis of data they could steer evolution toward a more optimized version of the human race.

The EIC embraced the statistical movement as an alternative to relationship-based intelligence gathering. Again from Thornett:

Soon the EIC began to favor data collection over communication with its longstanding local informants in streets and markets across India. The British created centralized bureaucracies for information gathering, pulling data from every available source: the army, political offices, the education and revenue departments, geographic surveys, district censuses, revenue surveys, and Orientalist societies. “Our government is a peculiar one, it gushes on the information front,” remarked Rudyard Kipling.

The problem is that the most relevant information for governing India—Indian beliefs, sentiments, and cultural specificities—was invisible in the data analyzed by EIC statisticians. Moreover, the fixation on statistics bred false confidence and a growing callousness toward local feelings. The principal cause of the Sepoy Rebellion illustrates this neglect.

The EIC had introduced new rifles that required soldiers to bite open cartridges greased with animal fat, reportedly derived from cows and pigs. This was offensive to both the Hindu and Muslim soldiers in the company’s employ. When a group of these sepoys refused to use the new rifles and were subsequently court-martialed and imprisoned, the ensuing outrage sparked the war that led to the massacre and the harsh colonization that followed in its wake.

In Bots We Trust 

Thornett shows that modern regimes are in danger of repeating the EIC's mistake. The world powers, including Russia, China, and the United States, are trading diplomacy and direct communication for the illusory security of data-crunching. He gives examples of how this has already produced international blunders, with crises emerging that went entirely undetected by the information bureaucracy.

Thornett’s observations have been echoed by writers at Foreign Affairs, who have shown that overreliance on Big Data gives dictators a false sense of confidence, making the world more dangerous.

The domestic political scene is hardly any better. To position themselves as forward-looking, numerous NGOs, think tanks, and lobbying organizations have been heralding data-crunching as an alternative to traditional forms of governance. For example, the Center for Public Impact, a think tank connected with the Bill and Melinda Gates Foundation, anticipates AI disrupting our political systems through better and more efficient decision-making mechanisms:

Effective, competent government will allow you, your children and your friends to lead happy, healthy and free lives.... It is not very often that a technology comes along which promises to upend many of our assumptions about what government can and cannot achieve… Artificial Intelligence is the ideal technology to support government in formulating policies. It is also ideally suited to deliver against those policies. And, in fact, if done properly policy and action will become indistinguishable… Our democratic institutions are often busy debating questions which - if we wanted - we could find an empirical answer to… Government will understand more about citizens than at any previous point in time. This gives us new ways to govern ourselves and new ways to achieve our collective goals.

Similarly, the AI platform Turbine Labs offers politicians tools to help guide their decisions:

It is crucial that we equip political decision-makers with the right data and, in turn, that those decision-makers use that information to formulate the most comprehensive and inclusive policies.

What makes this type of thinking beguiling is that sometimes AI really can help inform policy as one among a range of information-gathering mechanisms. But the temptation is to begin looking to AI as a replacement for traditional governance, diplomacy, and relationship-based intelligence. It isn’t hard to understand why we succumb to this temptation: AI promises a cleaner, more scientific approach to governance than the complex and messy realm of real-world negotiation, prudential reasoning, and relationship-based intelligence gathering.

It doesn’t require a dystopian imagination to anticipate how this could go terribly wrong. Consider scenarios like the following:

  • The bot declares that a preemptive strike would be 70 percent more effective than diplomacy.
  • The bot determines that legally requiring organizations with more than 100 employees to limit the hiring of white males to 20 percent of the workforce would likely reduce workplace harassment.
  • The bot finds that giving tax breaks to families who attend church online rather than in person would reduce carbon emissions.

AI and the Crisis of Political Legitimacy

It is easy to see how overreliance on data science could cause a breakdown in conventional governance. Yet in the India fiasco, the reverse occurred: a breakdown in conventional governance created a vacuum in which overreliance on data science became attractive.

In our current crisis of authority, in which the kind of trust required for political legitimacy has broken down, it is tempting to look to data science, and even mechanized decision-making, as an alternative. For example, an article published in Data & Policy suggested that AI governance is the solution to the various crises of political legitimacy:

A lack of political legitimacy undermines the ability of the European Union (EU) to resolve major crises and threatens the stability of the system as a whole. By integrating digital data into political processes, the EU seeks to base decision-making increasingly on sound empirical evidence. In particular, artificial intelligence (AI) systems have the potential to increase political legitimacy.

But while bot-governance may seem like a solution to crumbling political legitimacy, it may end up perpetuating that crisis by making the rationale behind policy inscrutable. Consider what happens in professional chess. Commentators will often say things like, “the computer is telling us that such-and-such would have been a better move, but we’re not sure why.” The computer’s reasoning is opaque: it is what people often describe as a black box. This doesn’t much matter in professional chess, but in governance, one of the criteria for rationality is explainability. From Matthew Crawford’s article, “Algorithmic Governance and Political Legitimacy”:

Institutional power that fails to secure its own legitimacy becomes untenable. If that legitimacy cannot be grounded in our shared rationality, based on reasons that can be articulated, interrogated, and defended, it will surely be claimed on some other basis. What this will be is already coming into view, and it bears a striking resemblance to priestly divination: the inscrutable arcana of data science, by which a new clerisy peers into a hidden layer of reality that is revealed only by a self-taught AI program—the logic of which is beyond human knowing.

I think Crawford is correct. When AI is used as a governing mechanism, the black-box character of its outputs makes humans feel excluded. It isn’t hard to see how this could produce the kind of helplessness people already feel when, for example, trying to correct a mistake in a credit report or appeal a suspended account.

And that brings us back to Thornett’s observations about the Sepoy Rebellion. The problem was not that the data given to the East India Company was false. From a purely cost-benefit standpoint, it may even have made sense to grease cartridge tips with animal fat. The problem arose when the fetish for data bred a false sense of confidence among the bureaucratic classes and consequently eroded human connections and social wisdom.

I’ll leave you with these words from Thornett’s American Mind article:

The Sepoy Rebellion illustrates how relying on data at the expense of connecting with citizens can foster illusions of control while stoking feelings of exclusion and resentment… As the Sepoy rebellion illustrates, citizens who feel excluded when governments value data over their input may unite in opposition in ways undetected by the information bureaucracy.

Robin Phillips has a Master’s in History from King’s College London and a Master’s in Library Science from the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghostwriter, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life’s Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023) and co-author with Joshua Pauling of Are We All Cyborgs Now? Reclaiming Our Humanity from the Machine (Basilian Media & Publishing, 2024). He operates the Substack “The Epimethean” and blogs at www.robinmarkphillips.com.
