Eliezer,

I certainly remember all those discussions on the SL4 list.

I did not mean to imply that the "AGI sandbox" would be a perfect mechanism.

Like everything else I mentioned, it is an imperfect one.

Of course, there is a nonzero chance that the AGI will turn evil and escape
from the sandbox, turn subtly evil and get erroneously released from it,
etc.

All the arguments against the AGI sandbox strike me as arguments against
its *infallibility*, not against its *potential utility* as part of an
overall programme intended to encourage, but not guarantee, the creation of
an AGI that is benevolent toward humans and other lifeforms.

-- Ben



> Ben Goertzel wrote:
> >
> > 3) an intention to implement a careful "AGI sandbox" that we
> won't release
> > our AGI from until we're convinced it is genuinely benevolent
>
> Ben, that doesn't even work on *me*.  How many times do I have to slay
> this idea before it stays dead?
>
> http://sysopmind.com/essays/aibox.html
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
