I think by the time it comes to that, intelligent algorithms, tools, and machines will be so ingrained/intertwined in everyday life ... that we could not tell who ended it all. And once we get out into Space, it even seems silly to think that intelligent machines would want to destroy humanity; there are so many more resources in Space.
Didn't we invent the fail-safe already back in the '40s ... The 3 laws of robotics :)))
-------|
http://ifni.co

On Sat, May 3, 2014 at 11:40 PM, Chris Jernigan <[email protected]> wrote:

> Well, we can add a few more names to that list of scientists who think AI
> will eventually kill us all. Every time someone comes out with a statement
> like this, I can’t help but wonder if I’m missing something. I have thought
> about these things over and over and over again. I just can’t see how we
> might get to this horrible point of no return without first implementing a
> fail-safe. Would it really be that unpredictable if we created a machine
> with true intelligence? What am I missing here?
>
> Here is the original article, written by Hawking, Stuart Russell, Max
> Tegmark, and Frank Wilczek:
> http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
