Yan King Yin wrote:

I agree, but there'll still be limitations (NP-hardness, computational and physical complexity, etc.).

So what, if the limitations are far, far above human level? The only reason I've been going on about RSI is to make the point that, from our perspective, you can have what looks like a harmless little infrahuman AI, and the next (day, hour, year) it's a god, just like what happened with the multicellular organisms that invented agriculture.


I agree with you that there will be no limits to the above two processes. What I'm skeptical about is how we exploit this possibility. I cannot imagine how an AI could "impose" (can't think of a better word) a morality on all human beings on Earth, even given intergalactic computing resources. If this cannot be done, then we *must* default to the self-organization of the free market economy. That means you have to specify what your AI will do, instead of relying on idealistic descriptions that have no bearing on reality.

This seems to me like a sequence of complete non sequiturs.


Of course you have to specify exactly what an engineered set of dynamics does, including the dynamics that make up what is, from our perspective, a mind. Who ever said otherwise? Well, me. But I now fully acknowledge my ancient position to have been incredibly, suicidally stupid.

As for the rest of it... I have no idea what you're visualizing here. To give a simple and silly counterexample, someone could roll Friendly AI Critical Failure #4 and transport the human species into a world based on Super Mario Bros - a well-specified task for an SI by comparison to most of the philosophical gibberish I've seen - in which case we would not be defaulting to self-organization of the free market economy.

--
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
