How will humanity end itself: Super-robots or climate catastrophe?
The New York Times reports that “Scientists Worry Machines May Outsmart Man.” This long-standing fear of countless science fiction stories will likely come true in my lifetime:
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone…
[T]hey agreed that robots that can kill autonomously are either already here or will be soon.
The short-term threat is not super-intelligent robots but idiot-savants. Militaries — especially the U.S. military’s DARPA — are pushing very hard into robotic warfare. Because successful killing machines require the ability to make split-second kill decisions, militaries will inevitably hand lethal power to their metallic soldiers long before those soldiers possess the wisdom and empathy to make human-like decisions. (Of course, humans make plenty of deadly mistakes, and much of the killing we glorify in the purported name of national defense is quite unnecessary and quite arguably immoral.) Further, pretty soon, police departments will want their own robots, and we could wake up to life in an “I, Robot” world.
[The scientists] focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?
The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home…
The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity…
How would it be, for example, to relate to a machine that is as intelligent as your spouse?
While no robot could ever be THAT smart (in case my wife’s reading!), I’ve long feared the seemingly inevitable arrival of super-intelligent computers and doubted humanity’s ability to keep from sliding down a slippery slope of ever-greater dependence on intelligent robots.
But perhaps my fear is overblown. We may well do ourselves in via climate catastrophe, biological weapons or nuclear weapons before our robots get us.
Posted by James on Monday, July 27, 2009