
Open Questions

asked by anonymous

When machines become self-aware, is it likely they will be malevolent toward humans?


The singularity is upon us. Can we really expect a dystopian future in which self-aware artificial intelligence looks out for itself at the expense of the human race? Or is that a bunch of sci-fi baloney?

Asked 4607 days ago

Answers (2)

thespartan1024
What bjones said seems like a logical answer. If AI interests you, check out www.cleverbot.com, the most up-to-date chatbot that learns from the people who talk to it. The creators at Cambridge University announced that, as of August 2011, Cleverbot had been judged 59% human in a comparison test.

Posted 4560 days ago



bjones
Although only some scientists believe we will one day see an artificial intelligence gain self-awareness, all scientists understand that the future is uncertain. It is quite possible that the machines would consider humans a threat. Perhaps it is simply the next step in evolution: dinosaurs once ruled the planet, and after them, rodents were king. Perhaps, in the future, the world will be ruled by machines.

The rule of the machines need not be malevolent, and some say that talk of a singularity in life or intelligence on Earth presupposes that machine intelligence and biological intelligence will survive in harmony via cybernetic organisms.

Interestingly, it was one of the first creators of robots in science fiction who understood that, should self-aware robots ever be invented, they would have to be governed by three laws in their programming that could never be overridden. Isaac Asimov was a science fiction writer, scientist and professor of biochemistry. Among his many creations was the concept of a positronic brain that could, at the very least, match the effectiveness and capacity of the human brain.

Asimov understood that mechanical beings physically and mentally superior to humans must be kept in check, or chaos, murder and revolution might ensue. The three laws of robotics that would keep humans lords over the machines are as follows:

1. Robots may not injure a human being or allow a human being to be injured through inaction.

2. Robots must obey all orders from humans unless the orders would cause them to break the first law.

3. Robots must protect their own existence, but only if doing so does not cause them to break the first or second laws.

Later on, Asimov thought it prudent to add a fourth law that overrode the other three. He called it the zeroth law:

0. Robots may not harm humanity or allow humanity to be harmed through inaction.

Asimov believed this law would sufficiently protect human civilization from destruction by intelligent machines.
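As a side note, the laws form a strict priority ordering: a lower-numbered law always overrides the ones below it. Here is a minimal sketch of how that precedence might be encoded; this is my own illustration in Python, and the Order fields and must_obey helper are hypothetical names, not anything from Asimov's stories.

    from dataclasses import dataclass

    # Toy encoding of the law hierarchy described above.
    @dataclass
    class Order:
        injures_human: bool = False   # would obeying violate the first law?
        harms_humanity: bool = False  # would obeying violate the zeroth law?

    def must_obey(order: Order) -> bool:
        """Second-law obedience, overridden by any higher-priority law.

        Checks run from the highest-priority law (the zeroth) downward,
        so a lower-numbered law always wins when the laws conflict.
        """
        if order.harms_humanity:   # zeroth law overrides everything
            return False
        if order.injures_human:    # first law overrides mere obedience
            return False
        return True                # otherwise the second law applies: obey

    # An order to injure someone is refused even though the second law says obey.
    assert must_obey(Order(injures_human=True)) is False
    assert must_obey(Order()) is True

The sketch only captures the ordering of the checks; Asimov's stories get their drama from how hard the laws are to interpret in practice.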

So, now the question becomes which sci-fi scenario is the baloney: the dystopian society at war with the machines, or a stable human society with the machines under its control. For the answer, we will have to wait and see.


Posted 4579 days ago

Comments (1)

thespartan1024
4560 days ago
Sounds like the three laws from I, Robot, which was written by Isaac Asimov if I remember correctly; correct me if I'm wrong.
