There’s been much ado about artificial intelligence lately. This has largely been prompted by a computer convincing some people that it is a 13-year-old boy, and by an article from a veritable who’s who of emerging tech thinkers warning of the risks of superintelligent machines.
A chatbot named Eugene Goostman was able to convince 33% of the people who interacted with him for five minutes via chat that he was a human. This was touted as a clear instance of a computer passing the Turing Test, but it was met with some criticism, including this piece by Gary Marcus in The New Yorker.
Ironically, rather than showcasing advances in human ingenuity, the Eugene Goostman experiment reveals some of our less noble attributes. For one, in order to make computers sufficiently human-like, programmers needed to make the machines dumber. As Joshua Batson points out in his Wired commentary, prior winners of an annual Turing Test competition incorporate mistakes and silliness to convince the judges that the computer is a person. This calls into question the value of a test for artificial intelligence that requires a machine to be “dumbed down” in order to pass.
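To make the "dumbing down" trick concrete, here is a minimal sketch of the idea: take a bot's perfectly composed reply and degrade it with occasional typos and filler so it reads more like a distracted human. The function name and parameters are hypothetical, invented for illustration; this is not any contestant's actual code.

```python
import random

def humanize(reply, typo_rate=0.05, filler=None, seed=None):
    """Degrade a bot's reply so it reads less like a machine --
    the 'dumbing down' trick Turing Test entrants have used.
    Illustrative sketch only; names and API are invented."""
    rng = random.Random(seed)
    chars = list(reply)
    i = 0
    while i < len(chars) - 1:
        # Occasionally swap adjacent letters to fake a typo.
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < typo_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    text = "".join(chars)
    if filler:
        text = filler + text  # prepend a human-sounding hedge
    return text
```

For example, `humanize("certainly", typo_rate=0.2, filler="hmm, ")` might yield `"hmm, cetrainly"`. The irony the article points out is baked into the design: every line of this code makes the machine worse at its task in order to score better on the test.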
Secondly, the Turing Test, as it was presented in the media, could easily be one of those tests that the psychology department at your local university conducts on willing participants. The results of the Eugene Goostman test say more about the judges than they do about the machine. Put another way, 33% of the people tested were simply more gullible than the rest of the participants.
You Have to Want to Want It
Contrast this with Stephen Hawking’s warning, in an Independent article co-authored with Stuart Russell, Max Tegmark, and Frank Wilczek, that superintelligence may provide many benefits but could also lead to dire consequences if it cannot be controlled. Hawking and company write that “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” Yes, one can imagine technology doing this, but the question is: can the technology imagine itself doing this?
Hawking assumes that we will have the capability to engineer creativity, or that creativity will somehow emerge from technology. However, we see from the examples of Eugene Goostman, IBM’s Watson, Google’s self-driving cars, and Siri that complex programming does not produce ingenuity. One could argue that the only way a machine would even muster the “motivation” to do any of these things is if a human programmed that motivation into it.
An example from science fiction is Asimov’s Three Laws of Robotics. These are the inviolable principles programmed into the hardwiring of every robot in Isaac Asimov’s fictional world. These laws provide the motivations behind the robots’ behavior, and while they lead to some ridiculous and troubling antics, they are not the same as a robot coming up with its own fundamental motivations. In Asimov’s I, Robot stories, the impetus behind each robot’s behavior harks back to these pre-programmed (ostensibly by humans) laws.
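The distinction the article draws can be sketched in a few lines of code. Below, the "laws" are hard-coded predicates a human wrote in advance, and the robot's every decision reduces to checking them; nothing in the program generates a new motivation. The field names and helper are hypothetical, and for simplicity this sketch ignores the precedence ordering among Asimov's laws.

```python
# Hypothetical sketch: Asimov-style laws as fixed, human-authored
# constraints. The dict keys and function names are invented for
# illustration; precedence between laws is deliberately omitted.
LAWS = [
    lambda action: not action.get("harms_human", False),      # First Law
    lambda action: action.get("obeys_order", True),           # Second Law
    lambda action: not action.get("self_destructive", False), # Third Law
]

def permitted(action):
    """An action is allowed only if it violates none of the laws.
    Every 'motivation' here traces back to rules a human wrote."""
    return all(law(action) for law in LAWS)
```

A call like `permitted({"harms_human": True})` returns `False`, while an unremarkable action passes. The point of the sketch is the article's point: the machine never chose these rules, and nothing in its program lets it choose different ones.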
This is not to dismiss the field of artificial intelligence. It is to call into question some of the assumptions behind the recent hype regarding the progress, and the potential peril, of AI. Technology is becoming increasingly powerful, with the potential for both helping and hurting. Hawking’s warning that “the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all” is worth heeding. However, the problem lies not in the machine but in the humans who make and wield this technology.