Ben Goertzel wrote:
> I think this line of thinking makes way too many assumptions about the
> technologies this uber-AI might discover.
>
> It could discover a truly impenetrable shield, for example.
>
> It could project itself into an entirely different universe...
>
> It might decide we pose so little threat to it, with its shield up, that
> fighting with us isn't worthwhile.  By opening its shield perhaps it would
> expose itself to a .0001% chance of not getting rewarded, whereas by leaving
> its shield up and leaving us alone, it might have a .000000001% chance of
> not getting rewarded.
>
> Etc.

You're thinking in static terms. It doesn't just need to be safe from
anything ordinary humans can do with 20th-century technology. It needs to be
safe from anything that could ever conceivably be created by humanity or its
descendants. This obviously includes other AIs with capabilities as great as
its own, but with whatever other goal systems humans might try out.

Now, it is certainly conceivable that the laws of physics just happen to be
such that a sufficiently good technology can create a provably impenetrable
defense in a short time span, using very modest resources. If that happens
to be the case, the runaway AI isn't a problem. But in just about any other
case we all end up dead, either because wiping out humanity now is far
easier than creating a defense against our distant descendants, or because
the best defensive measures the AI can think of require engineering projects
that would wipe us out as a side effect.

Billy Brown
