Better Than Human

The Transhumanist Transition to a Technological Future

It started with what Richard Dawkins refers to as “stable things.” Before organic life arose on earth, this marvelous universe was nothing more than an interstellar light show. Stars, planets, and moons spun through the galaxy totally devoid of conscious intent, intelligence, or motive. You know the drill: no U2 concerts, no moody ex-girlfriends, no surprises.

Then a fascinating twist of fate altered the course of pre-history. Somewhere on earth, the right stable things swirled together to ignite something totally new. Boom! Splash! Emergence. Life arose and introduced the faintest twinkle of action to an elegant and predictable abyss. Time, for the first time, mattered. Our story gained some steam.

From emergence it wasn’t far to replication and, subsequently, evolution. Over billions of years, DNA transformed into single-celled organisms that strove on in the muck until the survivors were selected for their biological eccentricities to move up the food chain. Trillions of deaths and thousands of centuries led to the gradual selection of ever more complex critters: puffer fish, newts, ostriches, and iguanas; and, suddenly, only a few hundred thousand years ago, our hero took the stage.

The first man (or woman) skipped away from the world of the apes, and chance held its breath. Up to that point, everything had been trial and error, a painstaking game of genetic roulette.

But with man, conscious innovation outpaced unconscious accident. With tools and intelligence, man built spears and then towers and then automobiles. He corralled fire and co-opted the power of the atom. He created fascinating canvas art and laughably bad cinema, and in the time it took amoebas to evolve into, well, somewhat more advanced amoebas, the planet was transformed. Mankind won the battle for supremacy over stable (and unstable) things, and only one conundrum remained. For all his power, man had failed to change the universe’s most complex creation—himself.

But fear not! Now we sit at the cusp of something more monumental than either emergence or evolution: intelligent design. Just as man unwrapped the complexities of space travel and microwave ovens, he has finally unraveled the secrets of silicon and DNA. And with these and other innovations, to quote acclaimed naturalist Edward O. Wilson, “Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us.”

So, that is the process, ladies and gentlemen. Emergence. Evolution. Intelligent design. According to the Singularity Institute’s Eliezer Yudkowsky in a speech delivered at the Immortality Institute’s Life Extension Conference on November 5, 2005, far from competing theories, these are succeeding pieces in the puzzle of history. And within a century, we will radically redefine what it means to be human.

Shocked? Astounded? Don’t be. If you believe the “transhumanists,” this is a ride you have to take.

Our Posthuman Future?

Unless you are a prominent gerontologist, cryonicist, or president of the Francis Fukuyama fan club, it is quite possible that you have never heard of “transhumanism.” Sporting a prefix generally reserved for transvestism, transcendentalism, and the trans-Siberian railroad, transhumanism is a movement that hopes for the radical evolution of our species, and it is quickly attracting the attention of followers and critics alike.

Tracing its etymological roots to 1972 and New School University professor FM 2030 (who proved, once and for all, that alpha-numeric names are not solely the domain of rappers), “transhuman,” short for transitory human, was first used to refer to a human evolutionary stage transitional to “posthumanity” and has grown to include a broad basket of varied philosophic and scientific beliefs.

Throughout the 1970s and 1980s, radical philosophers, futurists, scientists, body builders, and assorted science-fiction fanatics from all walks of life began to meet in hopelessly hip places like LA and New York to discuss the possibilities of rapid technological progress. In 1988, an Oxford-educated California man named Max More launched the journal Extropy, which grew into one of transhumanism’s pioneering organizations, the Extropy Institute; and in 1998 transhumanism found a second home in Nick Bostrom and David Pearce’s World Transhumanist Association.

Now, motivated and ambitious, this eclectic band stands for nothing less than the rapid metamorphosis of the human species into something “posthuman”—that is, better than human. According to Bostrom, “Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways.” They “yearn to reach intellectual heights as far above any current human genius as humans are above other primates; to be resistant to disease and impervious to aging; to have unlimited youth and vigor; to exercise control over their own desires, moods, and mental states.” And no method of advancement—chemical, digital, molecular, or robotic—is outside the realm of reasonable discussion.

Their first target is immortality, and leading the charge is a passionate blue-eyed Brit with a Rip-Van-Winkle beard named Aubrey de Grey. Of late, de Grey has become the crown prince of the “immortalist” movement with his razor-sharp wit, his confrontational (yet charming) personality, and his SENS (Strategies for Engineered Negligible Senescence) approach to fighting aging; and when I met de Grey at the 2005 Immortality Institute conference, I wasn’t disappointed. Half scientist and half civil rights pulpit pounder, de Grey has a way of framing the issue of aging that tends to undermine his critics. He views it more as a disease, a silent killer, than the acceptable progression of nature, and he is fond of asking would-be opponents, “If you could stop 100,000 people from dying needlessly every day, why not do it?” That’s a hard question to parry, and when you see him in a room surrounded by hundreds of people who have already signed up to have their heads cryogenically frozen in hopes of a future revival (including at least one gentleman in a “Bury funerals, not people!” t-shirt), you understand the kind of zeal the promise of immortality can inspire—whether the messenger is a Cambridge researcher like de Grey or an evangelist like Billy Graham.

Moving beyond de Grey’s modest ambitions, however, the essence of the movement is not just a push for the mortal persistence of the individual human being but for his evolution—even post-biological evolution.

This is the message of inventor, author, and technological optimist Ray Kurzweil. The primary proponent of what Joel Garreau has referred to as transhumanism’s “Heaven Scenario,” Kurzweil doesn’t stop at immortality but instead drives forward to the radical evolution of the human machine. In his book Radical Evolution, Garreau quotes Kurzweil as predicting that by 2029, “a $1,000 unit of computation . . . [will have] the hardware capacity of 1,000 human brains,” and it will be “as hard to tell if a person is handicapped as it is to guess his original hair color.” He is joined by academics like Trinity College’s James J. Hughes, who envisions a futuristic family in which “one member is a cyborg, another is outfitted with gills for living underwater. Yet another has been modified to live in a vacuum.” And if you think gills, cyborgs, and supercomputers are out there, wait until you see where they’re leading: a phenomenon called “the singularity.”

According to a website devoted to the topic (singularity.org), the singularity is “the rise of super-intelligent life, created through the improvement of human tools by the acceleration of technological progress reaching the point of infinity.” In the abstract, it sounds like a mathematical equation; in practice, it looks eerily like the extinction of the human species.

The theory is that once we make smarter-than-human intelligence capable of replicating and improving itself, it will do so at such a rapid pace that within a short period of time the human species will become antiquated and unnecessary. However, “just because humans become obsolete doesn’t mean you become obsolete,” notes the Singularity Institute’s Yudkowsky. “You are not a human. You are an intelligence which, at present, happens to have a mind unfortunately limited to human hardware.” And in the future we will trade in these biological bodies, frail and prone to death, for new technological bodies into which we will “upload” our minds—a process kind of like burning a CD with you on it.

It’s a lot for the uninitiated to take in, isn’t it? So—let’s review the future. Within decades we will start to enhance ourselves by means of chemical, genetic, and technological change. Drugs will alter our minds, leaving us happy and motivated around the clock. Gene therapy and biotechnology will allow us to defeat infirmity and then blur the line between diseases (like cancer) and less-than-optimal mental and physical states (like shortness). People will begin to live for hundreds and then thousands of years, adding mental features like telepathy and physical features like super-strong bone structure; and, finally, we will begin to merge with our technological creations: the machines. Like the Borg of Star Trek fame, we will share mental networks and create outer shells that never age. With the help of our mechanical progeny—the AI—we will become pure mental energy, until all that is currently human has faded away and we are Aristotelian gods—all-knowing, all-seeing, and left with nothing but the consideration of ourselves (or self).

“The twenty-first century could end in world peace, universal prosperity, and evolution to a higher level of compassion and accomplishment,” write the National Science Foundation’s Mihail C. Roco and William Sims Bainbridge in the 415-page policy document Converging Technologies for Improving Human Performance. “It is hard to find the right metaphor to see a century into the future, but it may be that humanity would become like a single, distributed and interconnected ‘brain.’”

World peace. Immortality. Unlimited intelligence. Perfection.

Convinced? If you say “no,” you’re not alone.

The Skeptics of Utopia

In April of 2000, Sun Microsystems cofounder and chief scientist Bill Joy rocked the pages of Wired magazine with a stunning 11,000-word article titled “Why the Future Doesn’t Need Us.” In short (to borrow a metaphor from Garreau), where Kurzweil envisions heaven, Bill Joy senses hell.

“Biological species almost never survive encounters with superior competitors,” writes Joy. “In a completely free marketplace, [with] superior robots . . . biological humans would be squeezed out of existence.” And Joy doesn’t mean this in the happy-go-lucky transformative way that Yudkowsky does. Throughout his Wired piece, Joy notes the dangers of advanced robotics, AI, and nanotechnology—greater than those of conventional Weapons of Mass Destruction (WMDs) because they may soon be cheaply and readily available, virtually undetectable, and “self-replicating” (meaning that nasty little machines might make more of their nasty little selves without us wanting them to).

The ever-present danger of self-replicating nanotechnology even led author Eric Drexler to give it a name—the “grey goo problem”—the prophecy that incredibly simple self-replicating nanotechnological robots could go out of control searching for and assimilating resources to self-replicate (their only purpose) and end up consuming all life (and non-life) on earth. Drexler notes that these machines “might be superior in an evolutionary sense, but this need not make them valuable.” Read a little Drexler and Joy, and you start to think a little harder about the nightmare scenarios pictured in The Matrix and The Terminator. While I was once naïve enough to wonder what terrorists, rogue dictators, and disgruntled soccer moms might do with such technology, the real experts have a greater concern: what the technology might do with itself; and the consequences of the dangers they predict often take on blockbuster sci-fi proportions.

Pushing aside “grey goo,” however, Johns Hopkins professor Francis Fukuyama has taken another line of attack—questioning the cultural and philosophical implications of transhumanism. In the September/October 2004 edition of Foreign Policy, Fukuyama brought infamy to the transhumanist movement when he labeled it “the world’s most dangerous idea”; and, while his full considerations of the topic are detailed in the acclaimed Our Posthuman Future, Fukuyama’s primary objections to transhumanism in Foreign Policy are twofold: It is a threat to the equality of rights, and it is an affront to the outstanding complexity of the human being.

First, Fukuyama sees the consciously directed evolution of individual human beings into “posthumans” as more than slightly problematic for that whole liberal conception of equal rights. As Fukuyama notes, “Underlying the idea of the equality of rights is the belief that we all possess a human essence that dwarfs manifest differences in skin color, beauty, and even intelligence”; however, if the transhumanists have their way, there will be whole new strata of considerations to deal with—massive differences in physical and mental coordination, the possibility of conscious machines, radically altered animal species—and these new differences could shake our current understanding of rights and dignity to its core.

Second, Fukuyama sees human nature as irreducibly complex and the transhumanist desire to change it as rudderless and hasty—lacking an appropriate value system to replace the one it intends to undermine. “Transhumanism’s advocates think they understand what constitutes a good human being,” writes Fukuyama. “But do they really comprehend the ultimate human goods?” For Fukuyama, there are certain necessities to human life—limitation, jealousy, love—that make life livable and that bind us as a species, but the complexity of our current culture would collapse should this foundation be shaken. In short, the transhumanists might be willing to strip bare the current moral and cultural structures of the human race, but do they have anything solid with which to fill the vacuum once these things are gone?

Finally, many people have a more commonsensical objection, a practical combination of the other warnings brilliantly communicated by the Center for Bioethics & Human Dignity’s Matthew Eppinette: All this utopianism and idealism are just a little scary, and the movement seems to ignore some of the basic historical lessons of past attempts at human perfection. “Transhumanism is both appealing and frightening because of the way they cast things,” remarked Eppinette in a telephone interview last fall. He thinks the transhumanists underemphasize evil as a cause of suffering in the world and gloss over objections like Joy’s and Fukuyama’s in order to barrel ahead with new technologies and an unbounded faith in their ability to perfect the human condition. Meanwhile, they accept a total separation of the mind and body (Cartesian in its belief that humans are defined solely by their capacity for intelligence), and never question whether a life of enhancement would really be a “life” or whether we, in the end, would simply become the machines—the materialistic means by which we achieved those enhancements.

The movement, in total, is a process where the process (improvement, immortality, ascension) is the purpose. There is rarely a “why?”; there is only “why not?”

Sorry, but Your Soul Just Died

When all is said and done, the whole transhumanism debate may come to nothing more than a few wacky National Science Foundation reports and a few thousand cryogenically frozen futurists. While the ideas of transhumanism are discussed as if they are verging on reality, most of modern science is still light years behind even the modest predictions of radically prolonged lifespans and nanotechnological medicine. I was constantly reminded of this as I fought a cold—one of humanity’s most persistently untreatable illnesses—throughout the Immortality Institute’s conference. But moving beyond the feasibility of their proposals, it may not be the science that is most disconcerting about the transhumanist view of the future; these problems are obvious enough. It may be—as Eppinette, Fukuyama, and others hint—the psychology.

During his Immortality Institute conference speech on artificial intelligence, AI expert Ben Goertzel addressed two startling areas of inquiry: “human essence” and free will. Goertzel was quick to write off the idea of free will, noting that no person or thing can ever escape the determinism of physics or the determinism of the unconscious mind; and he was skeptical of any “human essence” that would make us a truly unique species. In his mind, the migration of the human mind to software won’t be a difficult concept because our only defining characteristic is our intelligence. We represent the latest tool in the progression of nature to gradually uncover the mystery of itself; and our posthuman progeny, finally escaping the myths of essence and free will, could be just the ones to vanquish all illusion (we call it the “heart” or “self-consciousness” or the “soul”) and know the universe perfectly. To quote Goertzel, getting rid of these illusions will allow us “to transmit what is truly valuable.”

And in the back of my mind I wondered: What are these “truly valuable” things? What is the purpose of living forever if there is no purpose? What should I care about if I have no control? Why should I bother to love or be loved if it is all an illusion? How can I find beauty in a moment if I know that moment will never require sacrifice, never wither away? I know that all religions must have their “heaven scenarios,” but doesn’t this one seem a little bleak? To view all of human history—all poetry, courage, discovery, risk, faith, and love—as a mechanical misstep on the way to posthuman perfection is more than wickedly unromantic; it is devastating to even the simplest moral precept, the simplest sense of purpose, the simplest reason for life—particularly immortal life.

You see, the entire enterprise of transhumanism does, to some extent, rest on the “determinism of physics” and on the fundamental belief that we humans are mere cogs in a blind naturalistic process that will culminate in nothing but extinction and pure knowledge. Rather than giving us something valuable, it sounds eerily like the road to what Nietzsche predicted for the twentieth century: “the total eclipse of all values.”

In his incomparable essay “Sorry, but Your Soul Just Died,” Tom Wolfe writes of this century’s Nietzsche (a neuroscientist, he predicts), who, having vanquished the idols “self” and “soul,” finally rises to the podium to bring humanity word of an event. “He will say that he is merely bringing the news,” writes Wolfe, “the news of the greatest event of the millennium: ‘The soul, the last refuge of values, is dead, because educated people no longer believe it exists.’” And, suddenly, everything that most people have ever lived for will simply fall and wither away.

I guess my biggest question is this: Is all that is human—history, will, limitation, enigma, hope, individuality, vulnerability, that culminating cosmological weirdness of stability shacked up with instability—such an easy (or desirable) thing to dismiss? •

The Cartesian Mind-Body Problem 

It was the philosopher René Descartes who proposed the idea that the mind and body are separate entities (hence the word “Cartesian”). Where the mind is a “thinking thing,” said Descartes, the part that represents one’s true self and in which one doubts and hopes and believes, the body is an entirely material substance, a sort of container for our minds (a movie like The Matrix makes the same point). More significantly, he argued that the immaterial mind and the material body can interact somehow, mental events producing physical events and vice versa.

It is with this latter claim that the mind-body problem arises. How can something without physical substance affect something that exists entirely within the material world? Also called the “problem of interactionism,” it was a dilemma with which Descartes struggled his entire life. It is also the chief reason why most contemporary philosophers and scientists have abandoned Descartes’ ideas for the belief that the mind is really just matter too, a physical object—no different than the body—whose “thoughts” can be explained by processes in the brain (synapses firing and that sort of thing).

The Borg: Resistance Is Futile

In the Star Trek universe, the Borg are creatures made from organic humanoid parts and cybernetic implants that together give them extraordinary mental and physical abilities. Indeed, the Borg’s one aim in life is to achieve intellectual and bodily perfection, and so they travel the galaxy in order to capture and “assimilate” other species into their Unicomplex, a collective mind to which they are all connected via implants. Once captured, individuals are injected with nanoprobes and surgical prostheses in order to “update” them according to the Borg standard. Should a person possess a valuable trait, something that would “improve the quality of life for all species,” that characteristic is taken and distributed to all. Despite this “altruistic” intent, most in the universe view the Borg as evil and their assimilation process as torture.


One Singular Sensation

According to inventor and futurist Ray Kurzweil, the Singularity is “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” The nature of this transformation, in case you were wondering, is the complete and permanent fusion of humans with computers, an event that Kurzweil deems the next stage in evolutionary development and one that will result in immortality for all.

As the subtitle of one of his books (“Live Long Enough to Live Forever”) makes abundantly clear, it’s not like we’re looking at a whole lot of wait time either. Indeed, Kurzweil sets the Singularity’s arrival at 2045, which means that most of us living today will experience it. And as one might expect, the author, already age 58, is doing everything short of freeze-drying his head to guarantee he’s around when that first brain wave is downloaded.

Why is Kurzweil so confident that we sit at the threshold of such a monumental occurrence, especially since we haven’t even managed to reverse-engineer a dust mite, let alone create the seamless interface between technological mechanisms and biological components required to merge man with his machines? It all has to do with what he calls “the law of accelerating returns.”

To put the matter as simply as possible, Kurzweil believes that we have reached the moment in technological progress when not only the number of innovations grows exponentially but the rate at which such innovations occur likewise increases with each passing advancement. Thus, the 20 years of progress made at the pace of the year 2000 will take only 14 years to repeat, and then just seven; extrapolate this phenomenon to the end of the 21st century, says Kurzweil, and we will have made 20,000 years of progress in just 100 years.
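
How does such a number fall out of the arithmetic? Here is a back-of-the-envelope sketch, assuming for illustration that the pace of progress doubles every decade (one common reading of Kurzweil’s claim). The progress accumulated over a century, measured in “years” of progress at the year-2000 rate, is then

\[
\int_{0}^{100} 2^{t/10}\,dt \;=\; \frac{10}{\ln 2}\bigl(2^{10}-1\bigr) \;\approx\; 14{,}800,
\]

the same order of magnitude as Kurzweil’s figure; his higher total of 20,000 follows if the doubling interval itself is allowed to shrink as the century wears on.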

Think of it this way: Each time we improve the processing power of a computer, we can then use its amplified capabilities to create another computer that’s even more powerful and do so in less and less time. Eventually, argues Kurzweil, this process will no longer require human input; rather, our hyper-intelligent machines will do it for (to?) us, constructing ever smarter machines at an ever faster rate until that point—which, apparently, is soon—when the distinction between man and machine has pretty much disappeared.
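
The arithmetic behind that endpoint is a simple geometric series. As a sketch, assume for illustration that each machine generation arrives in half the time of the one before it. If the first generation takes time \(T\), the total time for all generations is

\[
T\Bigl(1+\tfrac{1}{2}+\tfrac{1}{4}+\cdots\Bigr) \;=\; 2T,
\]

so an unbounded number of improvements fits inside a finite span of time. Hence a “singularity”: a definite date rather than an ever receding horizon.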

That is, of course, as long as everybody plays by the rules. The main flaw in Kurzweil’s optimistic projection is that he completely discounts man’s capacity for mischief. Smarter and more robust machines also mean a greater aptitude for evil and destruction. Should one of us—or one of our machines, for that matter—ever decide to do away with humanity rather than proceed with the augmentation, that job is only going to get easier with time. Indeed, it may be that instead of verging on the glorious moment “when humans transcend biology,” as Kurzweil’s most recent book (The Singularity Is Near) puts it, we are actually on the cusp of total and abject biological extinction. •



This article originally appeared in Salvo, Issue #1 (Fall 2006). Copyright © Salvo, www.salvomag.com. https://salvomag.com/article/salvo1/better-than-human
