Artificial Intelligence: Friend or Foe?


Have you been waiting, counting the days and the wrinkles on your face, for the moment when computer technology will spare you the suffering of illness, old age and perhaps even death?

In tech and sci-fi circles, the theoretical future moment when intelligent machines exceed the abilities of humans and, essentially, take over is called “the singularity.” The thought of, or wish for, a technology that “saves” us might not be such a crazy idea, considering the power of modern microprocessors and the complexity achieved in robotics.

However, what if continued advances in technology brought about not a helpful artificial intelligence, but a hostile one? Such a possibility is explored by philosopher Nick Bostrom in his book, “Superintelligence: Paths, Dangers, Strategies.” Bostrom theorizes about a “paper clip maximizing” machine: programmed with a single goal, it might become more and more intelligent, to the point where it not only invents a better paper clip but builds newer, better machines, develops the ability to improve itself, and eventually rules a world in which everything has been turned into paper clips. While the scenario might not be realistic, it raises the possibility Bostrom foresees: carefully designed computers eventually breaking past their own limitations and achieving superintelligence. Such a superintelligence could then make unpredictable moves that alter the world in any number of ways, including the destruction of any given thing: animals, plants, humans, or the world itself.

Many scientists have, until now, aligned with another camp, one that believes those who predict a wild, unchained artificial intelligence (AI) simply do not understand what computer “intelligence” really is.

Questions about whether computers can think have been around for as long as the computer itself. The term “artificial intelligence” was coined by the computer scientist John McCarthy in 1955. The general popularity of sci-fi in the 1960s and ’70s, fueled by advances in space travel, brought ideas about computers eventually learning to talk and understand into the mainstream. All one has to do is look at the films of the time, like “2001: A Space Odyssey,” to know that AI was already on people’s minds.

Once that discussion became more complex, two schools of thought about the future of AI soon emerged. Scientists like Ray Kurzweil foresaw a computer that could problem-solve around obstacles and eventually critique itself, modifying and improving its own programming, learning and becoming more efficient. This series of changes, or evolution, would bring about a generation of computers that could theoretically improve anything and everything, including life in general. Others, like Stephen Hawking, the subject of a recent Oscar-nominated biopic, warned that people would never be able to contain such a powerful computer and should therefore be wary of it.

Superintelligence does not currently exist, as far as we know. The closest we have gotten are systems built on “artificial neural networks,” like Apple’s Siri. AI optimists point out that, no matter how powerful a computer’s recognition abilities become, the essence of “thought” is something it does not and cannot possess. Siri might know what the word pizza sounds like, and even what a pizza looks like, but not what a pizza is, in its essence. Rodney Brooks, co-founder of iRobot and a professor at the Massachusetts Institute of Technology, has argued that evil AI, if possible at all, is at least a few hundred years in our future.

Bostrom, however, argues that researchers and developers should proceed with caution: the biggest companies of our day are all attempting to build an intelligent computer, but what they should really be developing is the means to control such a creation. Bostrom wonders whether a future superintelligence would be good-natured or ill-willed, and how much of that fate lies in the hands of developers, questions heavily influenced by philosophers like Eliezer Yudkowsky. How can we design a computer that wants to obey us and benefit us, much in the same way that a dog is wired to be “man’s best friend”? How can we instill ethics or values in a computer? These are the questions that Bostrom struggles with and urges all researchers to consider. Although a human vs. computer Armageddon might not be looming, Bostrom argues, the scientists and corporate sponsors currently pushing the envelope of AI technology should keep the dangers in mind.
