It Comes Naturally

Real Intelligence Can Never Be Matched by the Artificial

In Salvo 46, we looked at the artificial intelligence doomsdays prophesied by high-profile figures like physicist Stephen Hawking (1942–2018) and entrepreneur Elon Musk ("more dangerous than nukes").1 Media do not, as a rule, examine such claims carefully. There is a market for them, after all; why damage the brand? Debunking thus falls to comparatively obscure sources.

But what a land of opportunity awaits! The AI industry faces major, open, unsolved problems with the dream of replicating human intelligence. Some insights from ID thinkers and sympathizers can help us unpack the breathless claims.

First, what is intelligence? Intelligence enables us to know things. But what does it mean to "know" something? We know things in the sense that our "selves" are aware of them. When we talk about knowledge, we assume a "knower," a self to which the information is apparent. Absent a self as the subject of the experience of knowing, knowledge—in the sense in which we usually use the word—does not exist. As neurosurgeon Michael Egnor says, "Your computer doesn't know a binary string [of code] from a ham sandwich. . . . Your cell phone doesn't know what you said to your girlfriend this morning."2

Could a very sophisticated machine develop such a self? That's a tough question, though not for the reason some would think. Our sense of a self is bound up with our consciousness. But there is no generally accepted theory of consciousness.

A Messed-Up Field

There are theories, yes. For example, Darwinian philosopher Daniel Dennett argues that consciousness is an illusion that perpetuates our species.3 Many naturalists (materialists) don't find his answer viable. But what are their options? Cognitive computing professor Mark Bishop outlines a dilemma in New Scientist: If we can develop a sophisticated machine that experiences consciousness, "then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on."4 Bishop presents this as an impossible view, but serious thinkers, not cranks, do attempt to resolve the issue in precisely this way, by claiming that everything, even inanimate objects, is conscious to some extent (panpsychism).5

Apart from untenable theories, the field as a whole is a mess. Tom Bartlett attended an academic conference on consciousness on behalf of The Chronicle of Higher Education and, in a June 2018 report, summarized it by asking, "Is this the world's most bizarre scholarly meeting?"6 If that is a key question, we can infer that consciousness studies will shed little light on artificial consciousness. We are left with no way to define what we are even talking about, let alone aim for it.

Outside the Range

Software programmer Brendan Dixon of the Biologic Institute illustrates some of the limitations created by the absence of an actual self: Computers outstrip humans in games like chess and Go, he notes, because these games are "perfect information" environments with inviolable rules. Programmers can take into account everything that could happen. But what about language? "Language is anything but a 'perfect information' environment. Words are vague. Meaning is unclear. (And arguments thus ensue.)"7 That is a strength of human language, not a weakness, because it helps us capture reality on the fly. But it is simply outside the range of what computer algorithms can do.
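To make Dixon's contrast concrete, here is a minimal Python sketch of the textbook minimax idea applied to tic-tac-toe, a "perfect information" game: because the rules fix every legal move and every outcome, a short program really can enumerate everything that could happen. The board encoding and function names are illustrative choices for this sketch, not code from any of the chess or Go systems mentioned above.

```python
# Minimal minimax ("negamax") for tic-tac-toe, a perfect-information game:
# every legal move and every outcome can be enumerated in advance.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best_score(board, player):
    """Best achievable result for `player`, who is about to move:
    +1 = forced win, 0 = draw, -1 = forced loss."""
    win = winner(board)
    if win is not None:
        return 1 if win == player else -1
    if ' ' not in board:
        return 0  # full board, no winner: a draw
    opponent = 'O' if player == 'X' else 'X'
    # Try every empty square; the opponent then plays optimally in turn.
    return max(-best_score(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ' ')

if __name__ == '__main__':
    # With perfect play on both sides, tic-tac-toe from an empty board is a draw.
    print(best_score(' ' * 9, 'X'))   # prints 0
```

Nothing in that sketch "knows" it is playing a game; it only exhausts a rulebook small enough to exhaust. Language offers no such rulebook.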

One recent project is MIT's Moral Machine, which crowdsources human judgments on crash scenarios with the aim of helping self-driving cars make ethical decisions. About it, Dixon (who has played some rounds) says,

To solve problems in a computer requires that we encode the problem into terms the computer can manage. That means reducing the shades of grey into a selection of discrete choices. MIT's researchers replaced those shades of grey with a binary choice between gruesome outcomes. . . . But, in moral choices, it is the immediate circumstances, the thousand little details we handle holistically with our minds, that do matter.8
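To illustrate what Dixon means by "encoding the problem," here is a small, purely hypothetical Python sketch of the kind of reduction involved: a morally fraught street scene is flattened into a few discrete fields and a binary choice before any algorithm can touch it. The field names and the scoring rule are invented for illustration and are not the Moral Machine's actual design.

```python
# A sketch of what "encoding the problem" forces on us: a morally fraught
# street scene must be flattened into a few discrete fields and a binary
# choice before any algorithm can score it. The fields and the weighting
# below are invented for illustration; they are not MIT's actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    pedestrians_harmed: int    # a count, standing in for actual people
    passengers_harmed: int
    crossing_legally: bool     # one bit standing in for all the circumstances

def choose(swerve: Outcome, stay: Outcome) -> str:
    """Pick between exactly two pre-digested outcomes by a crude numeric score.
    Anything not captured in the fields above is simply invisible here."""
    def penalty(o: Outcome) -> int:
        p = o.pedestrians_harmed + o.passengers_harmed
        if not o.crossing_legally:
            p -= 1   # an arbitrary weighting the programmer must fix in advance
        return p
    return "swerve" if penalty(swerve) < penalty(stay) else "stay"

# Whatever nuance the real situation had, the program only ever sees this:
print(choose(Outcome(1, 0, True), Outcome(0, 2, True)))   # prints "swerve"
```

The "thousand little details" Dixon mentions never make it into the data structure at all; whatever is not encoded cannot be weighed.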

We may not know what consciousness is, but we can be pretty sure that it is not a series of 1-or-0 gates, and that no series of 1-or-0 gates adds up to consciousness.

Searching for the Homunculus

Or can we be so sure? One end run around the problem of consciousness is to insist that the human brain is itself a highly sophisticated computer, differing from the machines it builds only in complexity. Thus, once we understand how our brains work, we can build computers that do likewise. To that notion, mathematician David Berlinski responds,

It is hardly beyond dispute that the human brain is a computer, except on the level of generality under which the human brain is like a weather system. . . . It is possible to embed the rules of recursive arithmetic in a computer, but how might embedding take place in the brain? If this question has no settled answer, then neither does the question of whether the brain is a computer.9

The standby response, "We just need more research!", is not much help if we don't know what we are trying to research.

Bill Dembski likens the search for a machine self that would enable the machine to think creatively to the early modern search for the "homunculus," the tiny human that was once thought to animate the early embryo: "We have no precedent or idea what such a homunculus would look like. And obviously, it would do no good to see the homunculus itself as a library of AI algorithms controlled by a still more deeply embedded homunculus."10

Positive Views

Probably because ID theorists do not see AI as an emerging alien intelligence, they tend to view it positively, at least in principle. Brendan Dixon suggests,

Let's use Deep Learning machines to assist radiologists in analyzing MRIs and mammograms. Let's use self-learning machines to monitor for fraudulent transactions (to be reviewed, in turn, by humans). Let's use pattern-matching machines to ferret out possible plagiarism (to be addressed, in turn, by a human editor). Let's capture, as best we can, what others have learned and make it available for others to use. But let's stop fooling ourselves: AI is not the emergence of another intelligence.11

Baylor electrical and computer engineering professor Robert Marks offers,

A.I. will continue replacing workers and changing our world. Algorithms are replacing travel agents, tollbooth workers, bank tellers, checkout clerks and brick & mortar stores. But the growth of A.I. has also created jobs like specialists in information technology, bloggers, data analysts, programmers and webmasters.12

The Real Peril

In reality, artificial intelligence will continue to be an extension of the intelligence of programmers. And that, precisely, is the peril. Those who have the technology can rule over those who do not via constant management and surveillance at a depth never before possible. Something of the sort is emerging in China.13 Whether it emerges here depends on which social and political philosophies prevail.

As it happens, many of our political, media, and academic elites are drifting toward philosophies that accommodate authoritarian rule. In Salvo 48, we will look at some risky trends and some strategies for preventing abuses.

Notes
1. See, for example: George Dvorski, "Experts Sign Open Letter Slamming Europe's Proposal to Recognize Robots as Legal Persons," Gizmodo (Apr. 4, 2018): https://bit.ly/2EUGehK; Catherine Clifford, "Elon Musk: 'Mark my words—A.I. is far more dangerous than nukes,'" CNBC (Mar. 13, 2018): https://cnb.cx/2FOoAkh; Phil Baker, "AI Will Soon Outperform Humans on High School Essays, Bestselling Books, Surgery, Driving," PJ Media (May 1, 2018): https://bit.ly/2ws8Sin.
2. Michael Egnor, "Your Computer Doesn't Know Anything," Evolution News & Science Today (Jan. 23, 2015): https://evolutionnews.org/2015/01/your_computer_d_1.
3. Denyse O'Leary, "Post-Modern Science: The Illusion of Consciousness Sees Through Itself," Evolution News & Science Today (Sept. 28, 2017): https://bit.ly/2LPI3kf.
4. Mark Bishop, "Fear artificial stupidity, not artificial intelligence," New Scientist (Dec. 18, 2014): https://bit.ly/2JShNbk.
5. Olivia Goldhill, "The idea that everything from spoons to stones is conscious is gaining academic credibility," Quartz (Jan. 27, 2018): https://bit.ly/2Gp13nD.
6. Tom Bartlett, "Has Consciousness Lost Its Mind?", The Chronicle of Higher Education (June 6, 2018): https://bit.ly/2l0L2e8.
7. Brendan Dixon, "Artificial Intelligence and the Language Barrier," Evolution News & Science Today (Aug. 16, 2016): https://bit.ly/2l7F1MP.
8. Brendan Dixon, "Artificial Intelligence Has a Morality Problem," Evolution News & Science Today (Oct. 13, 2016): https://bit.ly/2y693dd.
9. David Berlinski, "Godzooks," Inference: International Review of Science (Feb. 14, 2018): https://bit.ly/2C8BvvU.
10. Bill Dembski, "Artificial Intelligence's Homunculus Problem: Why AI Is Unlikely Ever to Match Human Intelligence," Freedom, Technology, Education (May 30, 2018): https://bit.ly/2MqWEmW.
11. Brendan Dixon, "Artificial Intelligence and Its Limits," Evolution News & Science Today (Aug. 2, 2016): https://bit.ly/2ataw2j.
12. Robert Marks, "Why You Shouldn't Worry About A.I. Taking Over the World," The Stream (Oct. 3, 2017): https://bit.ly/2HLpuLt.
13. Joyce Liu, "In Your Face: China's all-seeing state," BBC (Dec. 10, 2017): https://bbc.in/2JBhceT.

Denyse O'Leary is a Canadian journalist, author, and blogger. She blogs at Blazing Cat Fur, Evolution News & Views, MercatorNet, Salvo, and Uncommon Descent.

This article originally appeared in Salvo, Issue #47, Winter 2018.
