Eliezer S. Yudkowsky wrote:
> 1)  AI morality is an extremely deep and nonobvious challenge which has 
> no significant probability of going right by accident.

> 2)  If you get the deep theory wrong, there is a strong possibility of 
> a silent catastrophic failure: the AI appears to be learning everything 
> just fine, and both you and the AI are apparently making all kinds of
> fascinating discoveries about AI morality, and everything seems to be
> going pretty much like your intuitions predict above, but when the AI
> crosses the cognitive threshold of superintelligence it takes actions
> which wipe out the human species as a side effect.

> AIXI, which is a completely defined formal system, definitely undergoes 
> a failure of exactly this type.

You have not shown this at all. From everything you've said, it seems
that you are trying to trick Ben into having so many misgivings about
his own work that he holds it back while you create your AI first. I
hope Ben will see through this deception and press ahead with
Novamente, a project that I give even odds of success.


-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]
