Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity

Business Insider – by Dylan Love

We’ve previously reported on the realistic potential for malicious artificial intelligence to wreak havoc on humanity’s way of life. Physicist Stephen Hawking agrees it’s worth worrying about.

Current artificial intelligence is nowhere near advanced enough to pose the kind of threat seen in science-fiction movies, but its continued development has given rise to a number of theories about how it may ultimately be mankind’s undoing.

Writing in The Independent, Hawking readily acknowledges the good that comes from such technological advancements:

Recent landmarks such as self-driving cars, a computer winning at “Jeopardy!,” and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

But he keeps the negatives close to mind, writing that “such achievements will probably pale against what the coming decades will bring”:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

A scientist named Steve Omohundro recently wrote a paper that identifies six different types of “evil” artificially intelligent systems and lays out three ways to stop them. Those three ways are:

  • To prevent harmful AI systems from being created in the first place. We’re not yet at the point where malicious AI is being created. Careful programming with a Hippocratic emphasis (“First, do no harm.”) will become increasingly important as AI technologies improve.
  • To detect malicious AI early in its life before it acquires too many resources. This is a matter of simply paying close attention to an autonomous system and shutting it down when it becomes clear that it’s up to no good.
  • To identify malicious AI after it’s already acquired lots of resources. This quickly approaches sci-fi nightmare territory, and it might be too late at this point.

Read more: http://www.businessinsider.com/stephen-hawking-on-artificial-intelligence-2014-5#ixzz30xCPIEoS


10 thoughts on “Stephen Hawking Is Worried About Artificial Intelligence Wiping Out Humanity”

  1. Well, I have to admit, he’s right about that one. With the way the elites and the government psychopaths are trying to create terminator machines and drones with a central NSA information hub (similar to a Skynet system) to probably connect it all together in the future, who would be surprised?

    1. Yes NC, this is what is so worrisome about AI. These automaton drones that make their own decisions regarding intel and response without human input are scary to think about. Of course, when humans are in the decision-making process, you could be dead for years but the cops will break down your door a dozen times in 10 years only to find that you’re still dead! 🙂

    1. You know, now that I think about it, is Hawking even his real name? Never really heard of that kind of last name before other than with him.

  2. I want to tell you of a very real possibility for starting WW3. Have you noticed how incredible the computer models are now for the Weather Channel? They have the American model and the European model. They nailed Superstorm Sandy, which hit New Jersey. They are very accurate; all you have to do is plug in the data and it spits out the forecast. Now substitute that with a computer that’s 100 times smarter at predicting future outcomes. Now put that computer in the hands of the NSA, the FBI, the CIA, the DOD, and the Pentagon. Now feed in war scenarios. You can plug in the types of weapons a country has, the leader and personality that country has, the geographical location of that country, the natural resources, the religion, the history of past wars that country participated in. Suddenly the super war computer tells you EXACTLY how to win a war. Most importantly, it tells you when and how to start a war if you really want to win it. Suppose the war computer tells the folks over at the Pentagon to start war with China now, today, immediately …because if you wait….even just one year, then the chances of you winning WW3 will be only 40%. But if you start war tomorrow, your chances of winning WW3 are 98%. Now…what would you do? I see this as a HUGE problem with Artificial Intelligence. It forces the humans to act.

    1. Anyone who puts their faith in a computer is just asking for trouble. Period.

      Try putting your faith in GPS and Google Maps. You’ll get lost before you know it. It’ll take you some place miles away from the location you want to go to.

      I like to use this thing I have inside my head called a BRAIN!

      But hey, that’s just me.
