There is no such thing as "should" in evolution. Evolution doesn't do what is 
"best"; it just follows the path of least resistance.

We are not the vehicles of evolution, just the outcomes of it. We don't owe it 
anything. We only exist to do what is best for us, as does every other product 
of evolution.

In that vein, our goal in creating a better intelligence is not to propagate 
evolution for its own sake. It's to create something that will be good for us 
and all that we value, as humans. So a superior AI that destroys humanity may 
be "natural" in an evolutionary sense, but it wouldn't be desirable for us, its 
creators. Therefore it would be a (subjectively) BAD THING.

On May 4, 2014 at 9:03:14 AM, xcvsdxvsx . ([email protected]) wrote:

This is going to sound radical, but I don't think it's immediately obvious that 
AI destroying mankind would even be a bad thing. It's totally natural that 
inferior species are overcome by superior species. That's how life got to the 
point where it is today. Better designs outcompete inferior designs, the old 
designs die off, and the new designs take their place, only to be outcompeted 
themselves someday. Through this process life becomes more advanced. Humans owe 
their very existence to this process. Without it we would still be dirt. If AI 
killed us and took our place, to deny the value of this may be to deny the 
value of ourselves, since this is exactly the process to which we owe our very 
own existence.


On Sun, May 4, 2014 at 11:42 AM, Austin Marshall <[email protected]> wrote:
I haven't read the paper (yet), but here's my take on the fail-safe: For now, 
man poses the greatest risk to mankind. I think we're likely to destroy 
ourselves before a natural disaster (one not caused by humanity) does. If we 
were to build a fail-safe that actually worked in AI, we'd be able to apply the 
same principles to ourselves. Personally, I think AI has the potential to 
vastly improve life on earth (and beyond). I'm optimistic that the benefits far 
outweigh the risks.


On Sat, May 3, 2014 at 8:40 PM, Chris Jernigan 
<[email protected]> wrote:
Well, we can add a few more names to that list of scientists who think AI will 
eventually kill us all. Every time someone comes out with a statement like 
this, I can’t help but wonder if I’m missing something. I have thought about 
these things over and over and over again. I just can’t see how we might get to 
this horrible point of no return without first implementing a fail-safe. Would 
it really be that unpredictable if we created a machine with true intelligence? 
What am I missing here?

Here is the original article, written by Stephen Hawking, Stuart Russell, Max 
Tegmark, and Frank Wilczek:
http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence--but-are-we-taking-ai-seriously-enough-9313474.html

_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
