From: "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]>

>> I agree, but there'll still be limitations (NP-hardness,
>> computational and physical complexity, etc.).
>
> So what, if the limitations are far, far above human level? The only
> reason I've been going on about RSI is to make the point that, from our
> perspective, you can have what looks like a harmless little infrahuman AI,
> and the next (day, hour, year) it's a god, just like what happened with
> the multicellular organisms that invented agriculture.
I think you've exaggerated the power of RSI quite a bit, but that's fine. We have consensus that it is potentially extremely powerful. I think the problems we need to address are:

1) How to distribute the benefits of AI + RSI.
2) How to engineer the transition from here to there.

>> I agree with you that there will be no limits to the
>> above 2 processes. What I'm skeptical about is how do
>> we exploit this possibility. I cannot imagine how an
>> AI can "impose" (can't think of a better word) a morality
>> on all human beings on earth, even given intergalactic
>> computing resources. If this cannot be done, then we
>> *must* default to self-organization of the free market
>> economy. That means you have to specify what your AI
>> will do, instead of relying on idealistic descriptions
>> that have no bearing on reality.
>
> As for the rest of it... I have no idea what you're visualizing here. To
> give a simple and silly counterexample, someone could roll Friendly AI
> Critical Failure #4 and transport the human species into a world based on
> Super Mario Bros - a well-specified task for an SI by comparison to most
> of the philosophical gibberish I've seen - in which case we would not be
> defaulting to self-organization of the free market economy.

If you cannot specify a utopian goal for the AI, then you'll have to create AIs that cater to the specific interests of specific groups, i.e. those with relatively rigid goal structures. My prediction is that this goal-specification problem would become so complicated that people would be better off simply building simple utility AIs that integrate with the economy. Notice the similarity between this goal-specification problem and the problem of command economies. Fact: you do not know how to specify utopia, whereas individuals know what makes *themselves* happy.

The fact that the first few uploads will be insanely advantageous does not bother me at all. Bill Gates is insanely rich, isn't he?
Back in the days of Karl Marx, it must have boggled his mind that individuals might one day be able to leverage industrialization to amass unprecedented levels of wealth. And it did happen. What we need is a way to distribute the benefits of AI + RSI. The solution to this problem will likely be eclectic, as is usual in this organic, chaotic world.

Hope you don't mind my criticism,

YKY
