Is Artificial Intelligence Taking Over? Or Is a Fashionable Panic Afoot?
Earlier this year, over 150 experts in AI, robotics, ethics, and supporting disciplines signed an open letter denouncing the European Parliament's proposal to make intelligent machines persons.
According to Canadian futurist George Dvorsky, the Parliament's purpose is to hold the machines liable for damages, as a corporation might be: "The EU is understandably worried that the actions of these machines will be increasingly incomprehensible to the puny humans who manufacture and use them."
AI experts acknowledge that no such robots currently exist. But many argue, as does Seth Baum of the Global Catastrophic Risk Institute, "Now is the time to debate these issues, not to make final decisions." AI philosopher Michael LaBossiere likewise wants to "try to avoid our usual approach of blundering into a mess and then staggering through it." Maybe, but the wish is often father to the thought, and a grand protocol may tempt many with an interest in the matter to see in AI what isn't there because they need it to be.
For some, it's a moral issue: sociologist and futurist James Hughes considers existing rights language to be "often human-racist" and "unethical." Dvorsky, who describes himself as "Canada's leading agenda-driven futurist/activist," is a big fan of personhood in principle. As founder and chair of the Institute for Ethics and Emerging Technologies (IEET), he wants personhood for whales, dolphins, elephants, and other highly sapient creatures. He doesn't want a noble agenda upset by questionably victimized robots.1
Many internationally known sci-tech figures assume, along with the European Parliament, that AI equivalent to humans is a sure thing and coming soon. In March, space entrepreneur Elon Musk called AI "more dangerous than nuclear warheads."2 Musk recently took Harvard's top pop-science influencer Steven Pinker to task for dismissing the current panic: if even Pinker doesn't have a grasp of AI, Musk warned, "humanity is in deep trouble."3 Stephen Hawking (1942-2018) was likewise a believer. In 2017, he told a Web Summit technology conference that "computers can, in theory, emulate human intelligence, and exceed it," and that AI could be the "worst event in the history of our civilization."4
Facebook CEO Mark Zuckerberg, on the other hand, is "really optimistic" and thinks Musk's predictions are "pretty irresponsible." Nor does Bill Gates see much danger at present ("we can look forward to longer vacations"). Warren Buffett agrees. That said, in 2015, Gates seemed more worried ("I don't understand why some people are not concerned").5
Predictions from lesser lights abound. Product development expert Phil Baker, citing the results of an industry survey, tells us,
Using AI to write high school essays without human intervention will be available . . . in 2026. And the ability of artificial intelligence to drive a truck on its own is expected to be available by 2027. Using AI to replace retail clerks will be in place by 2031. Of course, we're already seeing kiosks replace counter personnel at fast-food restaurants. And writing a bestselling book completely by a machine that exceeds the skill of a human author will occur by 2049, according to the survey.6
The soft touch world chimes in, too. Mariana Lin, a California poet who speaks regularly at Stanford University on creative writing for artificially intelligent beings, proposes an elaborate integration scheme, including "I-That" relationships to go along with Martin Buber's iconic I-Thou and I-It relationships.7
Panic over Alchemy?
AI philosopher LaBossiere argues that the mind need not be strictly limited to organic beings. "After all, if a lump of organic goo can somehow think, then it is no more odd to think that a mass of circuitry or artificial goo could think," he said. "For those who think a soul is required to think, it is also no more bizarre for a ghost to be in a metal shell than in a meat shell."8 But the question is not whether it is "odd" or "bizarre." The question is, are there good reasons for thinking it will happen?
Engineering professor Michael I. Jordan notes that the controversy is partly a question of semantics. Historically, the term "AI" meant an entity with human-level intelligence, but today it is also used, conveniently, for machine learning. Great progress has been made in machine learning but not toward human-level intelligence.
Taking the field as a whole, Jordan notes that there are "major open problems in classical AI," including the need to "infer and represent causality" and "the need to develop systems that formulate and pursue long-term goals."9 More bluntly, Matthew Hutson asks in Science whether AI has become the new alchemy: AI researcher Ali Rahimi tells him that researchers "do not know why some algorithms work and others don't, nor do they have rigorous criteria for choosing one AI architecture over another."10
In other words, panic is mounting over the supposed dangers of an enterprise that still faces major unsolved problems and lacks rigorous criteria for its own methods.
Some Things Do Not Compute
Are there good reasons for thinking that human-like AI cannot really happen? Yes indeed. Mathematician David Berlinski observes shrewdly,
The great goal of artificial intelligence has always been to develop a general learning algorithm, one that, like a three-year-old child, could apply its intelligence across various domains. This has not been achieved. It is not even in sight. And no wonder. We have no theory that explains human or animal behavior.
So we don't even know what we are trying to replicate.11 Baylor computer engineering professor Robert J. Marks notes that Gödel's Theorem, a landmark in mathematics, shows that some truths lie beyond the reach of formal proof; in any event, they are not computable. And there is no evidence that algorithms, by themselves, become creative and generate large amounts of new information. The source of such information remains elusive.
AI will make a big difference, Marks reckons, but not by producing brains with metal shells. Rather, he laments, Google knows where he is every second of the day,12 and where we are, too. No authoritarian regime in history, not even 1984 in fiction, had such potential power. In my next two Deprogram columns, we will explore both the reasons why ID theorists like Marks think that AI programs will not morph into minds and how AI represents a serious threat anyway, by dramatically lowering the cost of 24/7/365 surveillance of whole societies.
1. George Dvorsky, "Experts Sign Open Letter Slamming Europe's Proposal to Recognize Robots as Legal Persons," Gizmodo (Apr. 4, 2018): https://bit.ly/2EUGehK.
2. Catherine Clifford, "Elon Musk: 'Mark my words—A.I. is far more dangerous than nukes,'" CNBC (Mar. 13, 2018): https://cnb.cx/2FOoAkh.
3. Catherine Clifford, "Elon Musk responds to Harvard professor Steven Pinker's comments on A.I.: 'Humanity is in deep trouble,'" CNBC (Mar. 1, 2018): https://cnb.cx/2GUQd8c.
4. Arjun Kharpal, "Stephen Hawking says A.I. could be 'worst event in the history of our civilization,'" CNBC (Nov. 6, 2017): https://cnb.cx/2zoPUlt.
5. Catherine Clifford, "Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible,'" CNBC (July 24, 2017): https://cnb.cx/2woTU5I; Catherine Clifford, "Billionaire Bill Gates on the impact of A.I.: 'Certainly' we can look forward to longer vacations," CNBC (Jan. 29, 2018): https://cnb.cx/2BCR2j5; Catherine Clifford, "Warren Buffett and Bill Gates think it's 'crazy' to view job-stealing robots as bad," CNBC (Feb. 3, 2017): https://cnb.cx/2KRIRp2; Peter Holley, "Bill Gates on dangers of artificial intelligence: 'I don't understand why some people are not concerned,'" Washington Post (Jan. 29, 2015): https://wapo.st/2jP00TU.
6. Phil Baker, "AI Will Soon Outperform Humans on High School Essays, Bestselling Books, Surgery, Driving," PJ Media (May 1, 2018): https://bit.ly/2ws8Sin.
7. Mariana Lin, "How to Write Personalities for the AI Around Us," The Paris Review (May 2, 2018): https://bit.ly/2jqre2Q.
8. Op. cit., note 1.
9. Michael I. Jordan, "Artificial Intelligence—The Revolution Hasn't Happened Yet," Medium (Apr. 18, 2018): https://bit.ly/2HzhHEC.
10. Matthew Hutson, "Has artificial intelligence become alchemy?", Science (May 4, 2018): https://bit.ly/2jXtcbg.
11. David Berlinski, "Godzooks," Inference (Feb. 14, 2018): https://bit.ly/2C8BvvU. He is reviewing Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari.
12. Robert J. Marks, "Robert Marks—Why AI's Will Never Achieve Consciousness or Human Understanding," Houseform Apologetics video (Jan. 11, 2018): https://bit.ly/2wFM2gn; Robert J. Marks, "Artificial Intelligence and Human Exceptionalism: A Conversation with Dr. Robert Marks II," Apologetics Academy video (Feb. 23, 2018): https://bit.ly/2KZdx7G.
is a Canadian journalist, author, and blogger. She blogs at Blazing Cat Fur, Evolution News & Views, MercatorNet, Salvo, and Uncommon Descent.

This article originally appeared in Salvo, Issue #46, Fall 2018. Copyright © 2021 Salvo | www.salvomag.com | https://salvomag.com/article/salvo46/ai-apprehension