I haven't read the paper (yet), but here's my take on the fail-safe: for now, man poses the greatest risk to mankind. I think we're more likely to destroy ourselves than to be wiped out by a natural (non-human-caused) disaster. If we could build a fail-safe that actually worked for AI, we'd be able to apply the same principles to ourselves. Personally, I think AI has the potential to vastly improve life on earth (and beyond). I'm optimistic that the benefits far outweigh the risks.
On Sat, May 3, 2014 at 8:40 PM, Chris Jernigan <[email protected]> wrote:

> Well, we can add a few more names to that list of scientists who think AI will eventually kill us all. Every time someone comes out with a statement like this, I can't help but wonder if I'm missing something. I have thought about these things over and over and over again. I just can't see how we might get to this horrible point of no return without first implementing a fail-safe. Would it really be that unpredictable if we created a machine with true intelligence? What am I missing here?
>
> Here is the original article, written by Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek:
>
> http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
