>>Ben Goertzel wrote:
>> 
>> But the different trials need not be independent --- we can save the
>> trajectory of each AI's development continuously, and then restart a new
>> branch of "AI x at time y" for any recorded AI x at any recorded time point
>> y.
>> 
>> Also, we can intentionally form composite AIs by taking portions of AI x's
>> mind and portions of AI y's mind and fusing them together into a new AI z...
>> 
>> So we don't need to follow a strict process of evolutionary trial and error,
>> which may accelerate things considerably ---- particularly if, as
>> experimentation progresses, we are able to learn abstract theories about
>> what makes some AIs smarter or stabler or friendlier than others.
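The checkpoint-and-branch scheme described above amounts to keeping a tree of saved states rather than a single linear run. A minimal sketch (all names hypothetical; the AI state is treated as an opaque blob):

```python
# Sketch of saving each AI's trajectory and branching from any saved point.
# Checkpoint/TrajectoryStore are illustrative names, not from any real system.
import copy

class Checkpoint:
    def __init__(self, state, parent=None):
        self.state = copy.deepcopy(state)  # snapshot of "AI x at time y"
        self.parent = parent               # link back along the trajectory
        self.children = []

class TrajectoryStore:
    def __init__(self, initial_state):
        self.root = Checkpoint(initial_state)

    def record(self, node, new_state):
        # Append a new snapshot along an existing trajectory.
        child = Checkpoint(new_state, parent=node)
        node.children.append(child)
        return child

    def branch(self, node):
        # Restart a fresh run from any recorded point; the new trial
        # shares all history up to that point, so trials are not independent.
        return self.record(node, node.state)

store = TrajectoryStore({"weights": [0.0]})
t1 = store.record(store.root, {"weights": [0.1]})
t2 = store.branch(t1)  # new branch of "AI at time t1"
```

Note that this is exactly why the trials are not independent: every branch reuses the computation that produced its ancestor checkpoints.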

It seems to me this will not reduce the complexity of
the problem of AGI as a whole, if we're using a really
meaningful measure of that complexity. It also seems
that this trick will NOT accelerate the evolution of the
AGI unless it is given *additional* space-time resources.

Secondly, fusing two trained AIs may be a very costly
procedure. I'm beginning to study this topic, and it seems
that the complexity is generally polynomial with small
exponents, such as O(n^3), but with EXTREMELY large n's,
on the order of 1 billion. I'm mainly studying ANNs; maybe
other architectures are superior in this respect...
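To see why O(n^3) with n around 10^9 is prohibitive, a back-of-envelope estimate (the throughput figure of 10^15 ops/s, roughly a petaflop machine, is an assumption added for illustration, not from the post):

```python
# Rough cost of an O(n^3) fusion procedure with n ~ 1e9, as estimated above.
n = 10**9
ops = n**3                    # 10^27 elementary operations
throughput = 10**15           # assumed ops/second (~petaflop-class machine)
seconds = ops / throughput    # 10^12 seconds
years = seconds / (3600 * 24 * 365)
print(f"about {years:,.0f} years")  # tens of thousands of years
```

Even generous hardware assumptions leave the naive cubic procedure hopeless at this n, which is the point: either n must be cut drastically (e.g. fusing at a coarser granularity than individual neurons) or the exponent must come down.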

YKY

