We’re All Living in Experiment House

Matthew Crawford on the New Anti-Humanism

In C.S. Lewis’s Narnia series, some of the books feature characters who attend a school known as “Experiment House.” This co-educational boarding school was inspired by modern experiments in education that Lewis found objectionable. He particularly despised attempts to remake education according to the fashionable canons of scientific rationalism, behaviorist psychology, and modern social theories.

C.S. Lewis was not alone in raising these concerns. In his classic 1954 book The Technological Society, the French philosopher Jacques Ellul offered a critique that went well beyond the domains we normally think of as technological. For Ellul, technique refers not simply to machines that help us attain our ends, but to an entire social order dominated by the fetish for efficiency and proceduralism. Accordingly, he considered bureaucracy a type of technique, and a hallmark of the technological society.

Ellul was alarmed to see schools used as instruments for propagandizing children into the order of technique, and thus as part of the process of inducting them into a social order reconstituted along modernist lines. As he remarked, “The chief purpose of instruction and education today, is to bring along a younger generation that is adjusted to this society.”

Life has moved on since the mid-20th century, and the 20th-century symbols of being modern now seem quaint. For example, in the Narnia stories, one inmate of Experiment House is Eustace Clarence Scrubb, whose parents were vegetarians, teetotalers, and wore a special sort of underwear. If they lived today, his parents would likely eat ethically-sourced gluten-free products, sip kombucha, and not wear any underwear at all (a trendy movement known as “going commando”).

Yet one thing has not changed: we continue to be enthralled with the idea of using education to achieve utopia through the right technique.

Confusing Students with Computers

In contemporary education, our fetish with technique has fixated on training students to think like computers. Our schools are becoming a new sort of Experiment House—laboratories designed on the premise that we should strive to align human cognition, as much as possible, with the methods, capabilities, and priorities of the computer.

Curiously, while most ordinary people do not concur with these new methods, the methods are being promulgated by some of the wealthiest and most powerful people in our world. For example, the billionaire computer scientist Stephen Wolfram argues that computational thinking should become integral to how we teach students to view the entire world. Speaking on the Harvard EdCast podcast, Wolfram summarized his educational theories, saying:

Computational thinking should be something that you routinely use as part of any subject you study…. I think it’s something that should be part of every subject that’s taught.

Wolfram defines computational thinking as the ability to “formulate thoughts so that you can communicate them to an arbitrarily smart computer.” This type of thinking, he argues, should be the model for how we teach students to understand the world:

How do you take the things you want to do, the questions you might have about the world, the things you want to achieve, and how do you formulate them in a way so that a computer can help you do them?
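To make this concrete, here is a minimal sketch (in Python, a choice Wolfram does not specify) of what such a “formulation” looks like in practice: a humanistic question about a famous speech from Macbeth, restated as a counting procedure a computer can carry out. The sample passage and the question are illustrative, not drawn from Wolfram’s own examples.

```python
# An illustrative sketch of "formulating a question so a computer can help
# answer it": the question "which words recur most often in this speech?"
# restated as a tokenize-and-count procedure.

from collections import Counter
import re

passage = (
    "Tomorrow, and tomorrow, and tomorrow, "
    "creeps in this petty pace from day to day"
)

# The "computational" restatement: lowercase the text, split it into word
# tokens, and count how often each token appears.
words = re.findall(r"[a-z']+", passage.lower())
counts = Counter(words)

for word, n in counts.most_common(3):
    print(word, n)
```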

Wolfram is not simply talking about basic digital literacy; the goal is to train students to view the entire world as a computer does. We get a glimpse of what this looks like in practice from the International Society for Technology in Education (ISTE). Backed by funding from the Chan Zuckerberg Initiative, GM, Walmart, and other powerful interests, the ISTE is pushing to draw the liberal arts into the all-consuming orbit of the computational mindset. On its website, the organization gives an example of how Shakespeare’s Macbeth could be made amenable to computational thinking:

Students could create a chatbot to quiz their classmates by creating questions, crafting answers, and designing rules for the chatbot to follow. You need a deep understanding of Macbeth to do all of that!

The ISTE’s proposal would reduce Shakespeare’s Macbeth to a series of data points that become fodder for the real learning activity, namely coding. The subtext is that for any subject to have value, it must become an auxiliary to computer education.
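As a concrete illustration, here is a minimal sketch, in Python, of the kind of rule-based quiz chatbot the ISTE passage envisions; the questions, accepted answers, and matching rule are invented placeholders, not part of the ISTE example. Note how the play survives in the exercise only as a handful of strings to be matched.

```python
# A minimal sketch of a rule-based quiz "chatbot" of the sort the ISTE
# passage describes: fixed questions, fixed accepted answers, and a simple
# rule for deciding whether a classmate's reply counts as correct.

QUIZ = [
    {
        "question": "Who tells Macbeth he will be 'king hereafter'?",
        "accepted": {"the witches", "witches", "the three witches", "the weird sisters"},
    },
    {
        "question": "Which character is crowned king at the end of the play?",
        "accepted": {"malcolm"},
    },
]

def grade(answer: str, accepted: set) -> bool:
    # The "rule" the chatbot follows: normalize the reply and check it
    # against a fixed list of acceptable strings.
    return answer.strip().lower() in accepted

def run_quiz() -> None:
    score = 0
    for item in QUIZ:
        reply = input(item["question"] + " ")
        if grade(reply, item["accepted"]):
            print("Correct!")
            score += 1
        else:
            print("Not quite.")
    print(f"You scored {score}/{len(QUIZ)}.")

if __name__ == "__main__":
    run_quiz()
```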

The Bad Anthropology Behind Bad Pedagogy

Behind every bad pedagogy is bad anthropology. It should come as no surprise that these educational theories are arising at a time when our understanding of human nature is under threat from computer-centric metaphysics and epistemology. Regarding the latter, consider how it is becoming increasingly common to hear people describing human intelligence with metaphors drawn from the realm of computing. 

The conflation of the human and the machine is now reflected in the models used by neuroscientists today, such as the neural network model or the computational theory of mind. Proponents of the latter hold, in the words of the Wikipedia article about the theory, that “the human mind is an information processing system and that cognition and consciousness together are a form of computation.” This is not an entirely false way to describe the brain, but it becomes problematic when adopted as an all-encompassing explanation for what happens between the ears.

This reductionism is not limited to neuroscience: as AI sediments itself into more areas of life, there may be great temptation to assume that the machine way of doing things is always preferable to organic, God-given intelligence. We already see a trend—represented by theorists like Ray Kurzweil, Nick Bostrom, and David Chalmers—toward treating the differences between human and artificial intelligence as merely quantitative rather than qualitative. With this comes the temptation to assume an anthropology in which humans are merely informational organisms. As Yuval Harari put it, “organisms are algorithms.”

Under these types of false equivalences between the human and the machine, we end up understanding neither: we imagine machines are capable of things they are not, such as fixing our world’s problems and even moral reasoning, and we also begin believing that the only human type of cognition that really matters is calculation. On such a scheme, wisdom is narrowed to knowledge, knowledge is reduced to mere information, while information—along with everything else that matters—collapses into pure data.  And given that data is something a computer can always crunch better, it follows that human thought, imagination, and insight are mere surplus input.

In My Fair Lady, Professor Higgins famously declared, “Why can't a woman be more like a man?” For the modern educational theorists, the question is, “Why can’t a student be more like a computer?”

Consequences of the New Anti-Humanism

It is not clear that we can sustain a society based on antipathy towards the human qua human. At least, that was the warning issued by Matthew Crawford earlier this year in a First Things lecture. In his talk, titled “Antihumanism and the Post-Political Condition,” Crawford suggested that antihumanism manifests itself in the idea that humans are merely inferior versions of computers. In an environment increasingly customized for computers, humans become the weak link in the system, since they are not adapted to supply the clean inputs that automated systems require. Accordingly, our current class of social engineers advocates for a New Man based on an anti-humanism that rivals Christian anthropology.

As an example, Crawford points to what happened when one of Google’s self-driving cars found itself at a four-way stop. Naturally, being a robot, the vehicle was unable to communicate with the other drivers. All the factors that go into negotiating a four-way stop (eye contact, social intelligence, movements of the car that indicate willingness to yield, etc.) were invisible to the self-driving vehicle. Not knowing what to do, the car froze and broke down. But the most telling part of this incident is what happened afterwards. When the engineer in charge of the car was asked if he had learned anything from the incident, he said humans need to stop being so idiotic.

Crawford invites us to think about the implications in such a statement. When the engineer said humans need to become less idiotic, he meant they need to behave more like computers or, as Crawford put it, “more legible to systems of control and better adaptive to the need of the system for clean inputs.” This is the same worldview that is transforming our schools into laboratories designed to condition students to view all subjects computationally. In townscapes and social ecosystems designed around the needs of computers, uniquely human types of social intelligence—ways of thinking not transferable to a computer—become a liability. For society to run smoothly, humans must become like algorithms.

From Crawford’s lecture:

Certain developments in the realm of ideas provide the tacit picture of the human being that guides our institutions. One feature that the currently ascendant schools share in common is a low regard for human beings, whether on the premise of their fragility, their cognitive limitations, their latent tendency to “hate,” or their imminent obsolescence with the arrival of imagined technological possibilities. Each of these premises carries an important but partial truth, and each provides the master supposition for some project of social control. Each tends inexorably toward a further concentration of wealth and power, and the further erosion of the concept of the citizen: the wide-awake, imperfect but responsible human being on whom the ideal of self-government rests.

That older ideal has its roots in the long arc of Western civilization. In the Christian centuries, man was conceived to be fallen, yet created in the image of God. This doubleness—this consciousness of imperfection and orientation toward perfection—provided a picture of the human being with political effects that were likewise double. It moderated utopian hopes for a radically New Man, and the political savagery that often accompanies such hopes. It also energized a standard of judgment in political matters—the common good—that put limits on manipulation of the population for private gain. The Christian anthropology had this doubly moderating effect because it grounded our aspiration to perfection, not in open-ended self-creation, but in imitation of the most perfect man, the Son of God. You don't need to be a Christian to recognize the utility of the Christian anthropology for clarifying the effects of our current anti-humanisms, criticizing their presuppositions, and looking for an exit from the uncanny new forms of tyranny that are quickly developing. 


Robin Phillips has a Master’s in Historical Theology from King’s College London and a Master’s in Library Science through the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghost-writer, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life's Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023). He operates a blog at www.robinmarkphillips.com.
