From: "Ben Goertzel" <[EMAIL PROTECTED]>

>YKY wrote:
>> I agree that uploading is not easy. Notice that your idea
>> of recursive self-improvement being able to work wonders
>> may also be very much hyped =) Intuitively I guess the
>> rate of RSI might be roughly inversely proportional to
>> the complexity of the task...
>
>About recursive self-improvement, I'd be curious to know the line of
>thinking underlying your intuition.
It was just a wild guess without any quantitative basis... although computational learning theory may have something to say about this.

>My intuition is quite different.
>
>Suppose one has a mind with intelligence X_N and architecture A_N using
>computational resources Y_N, which figures out how to expand its
>computational resources to Y_(N+1). Its intelligence will then increase to
>X_(N+1), just by virtue of its having figured out how to expand its
>computational resources. This assumes of course that the mind's
>architecture A_N is able to make use of greater computational resources; in
>order to achieve this it may have to change to a new architecture A_(N+1).
>Repeat as N tends to infinity...
>
>The point is, for iterated self-improvement to work, you don't even need
>amazing breakthroughs in cognitive science; all you need is for AI
>architecture to keep up with Moore's-Law-type improvements in computing
>infrastructure.
>
>For this reason, it seems to me that iterated AI self-improvement really
>WILL be able to work wonders, one day.

I agree, but there will still be limitations (NP-hardness, computational and physical complexity, etc.).

>Arguably, once one achieves a certain level of intelligence on a certain
>computing infrastructure, progressive improvements in intelligence will get
>harder and harder... But the use of intelligence to expand the computing
>infrastructure seems to counter this potential problem.

Also agreed. The "original" Moore's Law, concerning chip density, has to be an S-curve (i.e., sigmoidal): it will flatten off when it reaches the molecular level. Maybe it has already passed the inflection point. After that we'll have to rely on other forms of improvement, an obvious one being parallelism. But the performance curve of parallelism may or may not be another S-curve. [I'm not an expert in computer architecture.]
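The S-curve point can be made concrete with the standard logistic function: growth looks exponential before the inflection point and then flattens toward a ceiling. A minimal Python sketch (all parameters here are hypothetical, chosen purely for illustration; nothing in them models real chip densities):

```python
import math

def logistic(t, L=1.0, k=1.0, t0=0.0):
    """Logistic (sigmoid) curve with ceiling L, growth rate k,
    and inflection point at t = t0."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Well before the inflection point, one time step multiplies the
# value by roughly e^k, i.e. growth looks exponential:
early_ratio = logistic(-4) / logistic(-5)

# Well after the inflection point, the per-step ratio collapses
# toward 1, i.e. the curve flattens against the ceiling:
late_ratio = logistic(5) / logistic(4)

print(early_ratio)  # near e, exponential-looking regime
print(late_ratio)   # near 1, saturating regime
```

The inflection point is exactly where the growth rate peaks, which is why "exponential so far" is consistent with both a true exponential and the first half of a sigmoid, and why it is hard to tell from past data alone whether the inflection has been passed.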
>Do you believe that there's some amount of computing power, beyond which
>increases in computing power no longer lead to commensurate increases in
>intelligence? If so, why?
>
>Do you believe that, at some amount of computing power, the amount of
>intelligence achievable using this amount of computing power will not be
>adequate to figure out how to gather more computing power? If so, why?

I agree with you that there will be no limits to the above two processes. What I'm skeptical about is how we can exploit this possibility. I cannot imagine how an AI could "impose" (I can't think of a better word) a morality on all human beings on earth, even given intergalactic computing resources. If that cannot be done, then we *must* default to the self-organization of the free-market economy. That means you have to specify what your AI will do, instead of relying on idealistic descriptions that have no bearing on reality.

YKY
