From: "Ben Goertzel" <[EMAIL PROTECTED]>

>> I agree with you that there will be no limits to the
>> above 2 processes. What I'm skeptical about is how we
>> can exploit this possibility. I cannot imagine how an
>> AI can "impose" (can't think of a better word) a morality
>> on all human beings on earth, even given intergalactic
>> computing resources. If this cannot be done, then we
>> *must* default to self-organization of the free-market
>> economy. That means you have to specify what your AI
>> will do, instead of relying on idealistic descriptions
>> that have no bearing on reality.
>
> Hmmm....
>
> I'm not sure why the notion of "free market" necessarily applies here. The
> market economy is a manifestation of a particular phase of human history,
> not necessarily relevant to future AI systems.
>
> Also, once AIs are truly intelligent and autonomous, they won't be relying
> on human specifications anymore, at least not directly.... I agree that our
> idealistic descriptions will not be too relevant to their behaviors, however.
Unless you have some kind of secret agenda, I don't think this really answers the question. Our society currently functions under an economic system; whatever your vision is, you have to deal with the transition from here to there. This is an important issue that should be addressed.

YKY
