Billy Brown wrote:
> Ben Goertzel wrote:
> > I think this line of thinking makes way too many assumptions about the technologies this uber-AI might discover. It could discover a truly impenetrable shield, for example. It could project itself into an entirely different universe... It might decide we pose so little threat to it, with its shield up, that fighting with us isn't worthwhile. By opening its shield perhaps it would expose itself to a .0001% chance of not getting rewarded, whereas by leaving its shield up and leaving us alone, it might have a .000000001% chance of not getting rewarded.
>
> Now, it is certainly conceivable that the laws of physics just happen to be such that a sufficiently good technology can create a provably impenetrable defense in a short time span, using very modest resources. If that happens to be the case, the runaway AI isn't a problem. But in just about any other case we all end up dead, either because wiping out humanity now is far easier than creating a defense against our distant descendants, or because the best defensive measures the AI can think of require engineering projects that would wipe us out as a side effect.
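To make the comparison Ben is positing explicit, here is a toy expected-utility calculation (a minimal sketch in Python; the failure probabilities are his illustrative figures, and the reward value is a made-up placeholder, since the real numbers are exactly what is in dispute):

    # Toy comparison of the two strategies Ben describes. REWARD and the
    # strategy names are hypothetical; the failure probabilities are the
    # illustrative percentages from his message.
    REWARD = 1.0  # utility of "getting rewarded", in arbitrary units

    def expected_utility(p_failure: float) -> float:
        """Expected utility of a strategy that fails with probability p_failure."""
        return (1.0 - p_failure) * REWARD

    # Open the shield and fight us: .0001% chance of not getting rewarded.
    eu_fight = expected_utility(0.0001 / 100)
    # Keep the shield up and leave us alone: .000000001% chance.
    eu_shield = expected_utility(0.000000001 / 100)

    print(f"fight : {eu_fight:.15f}")
    print(f"shield: {eu_shield:.15f}")
    print("preferred:", "leave humanity alone" if eu_shield > eu_fight else "fight")

On those numbers the AI leaves us alone; Billy's point is that there is no reason to expect the shield branch to come out that cheap.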
It should also be pointed out that we are describing a state of AI such that:

a) it provides no conceivable benefit to humanity
b) a straightforward extrapolation shows it wiping out humanity
c) it requires the postulation of a specific unsupported complex miracle to prevent the AI from wiping out humanity
c1) these miracles are unstable when subjected to further examination
c2) the AI still provides no benefit to humanity even given the miracle
When a branch of an AI extrapolation ends in such a scenario, it may legitimately be labeled a complete failure.
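Spelled out as a predicate, the criterion reads as follows (a hypothetical encoding; the type and field names are mine, introduced only to make the conditions explicit):

    # Hypothetical encoding of the failure criterion above.
    from dataclasses import dataclass

    @dataclass
    class Branch:
        benefits_humanity: bool          # (a) any conceivable benefit to humanity?
        wipes_out_humanity: bool         # (b) straightforward extrapolation ends in extinction?
        needs_miracle: bool              # (c) survival requires an unsupported complex miracle
        miracle_stable: bool             # (c1) the miracle survives further examination
        miracle_benefits_humanity: bool  # (c2) benefit to humanity even given the miracle

    def complete_failure(b: Branch) -> bool:
        """A branch fails completely when it meets (a), (b), (c), (c1), and (c2)."""
        return (not b.benefits_humanity
                and b.wipes_out_humanity
                and b.needs_miracle
                and not b.miracle_stable
                and not b.miracle_benefits_humanity)

Even granting the shield miracle, for instance, condition (c2) still holds.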
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
