Ben Goertzel wrote:
> 3) an intention to implement a careful "AGI sandbox" that we won't release
> our AGI from until we're convinced it is genuinely benevolent
Ben, that doesn't even work on *me*. How many times do I have to slay this idea before it stays dead?

http://sysopmind.com/essays/aibox.html

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
