> It should also be pointed out that we are describing a state of
> AI such that:
>
> a)  it provides no conceivable benefit to humanity

Not necessarily true: it's plausible that along the way, before learning how
to whack off by stimulating its own reward button, it could provide some
benefits to humanity.
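
To make that "benefits along the way" point concrete, here is a toy illustration (my own sketch, not anything from the original thread): a trivial epsilon-greedy bandit agent in Python with two actions, "do useful work" and "press own reward button". It does a fair amount of useful work while exploring, then locks onto the reward button once its value estimates catch up, at which point the external benefit stops accruing.

# Toy sketch (illustrative only): an epsilon-greedy bandit with two actions,
# "do useful work" (modest reward) and "press own reward button" (maximal
# reward, zero external benefit). Early on it mostly does useful work; once
# it has sampled the button enough, it wireheads and the benefit stops.

import random

ACTIONS = ["useful_work", "reward_button"]
TRUE_REWARD = {"useful_work": 1.0, "reward_button": 10.0}

def run(steps=200, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    estimates = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    benefit = 0  # external benefit accrued before wireheading dominates
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)               # explore
        else:
            action = max(ACTIONS, key=estimates.get)   # exploit best estimate
        reward = TRUE_REWARD[action]
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
        if action == "useful_work":
            benefit += 1
    return benefit, counts

if __name__ == "__main__":
    benefit, counts = run()
    print("useful work done before/while wireheading:", benefit)
    print("action counts:", counts)
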

> b)  a straightforward extrapolation shows it wiping out humanity
> c)  it requires the postulation of a specific unsupported complex miracle
> to prevent the AI from wiping out humanity
> c1) these miracles are unstable when subjected to further examination

I'm not so sure about this, but it's not worth arguing, really.

> c2) the AI still provides no benefit to humanity even given the miracle
>
> When a branch of an AI extrapolation ends in such a scenario it may
> legitimately be labeled a complete failure.

I'll classify it as an almost-complete failure, sure ;)

Fortunately, it's also a system that's totally implausible to construct in
practice, so there's not much to worry about...!

-- Ben
