Hi,

 

I have a general question for those (such as Novamente) working on AGI
systems that use genetic algorithms as part of their search strategy.

 

A GA researcher recently described to me some of his experiments with
embedding prior knowledge into evolutionary systems. For example, when
attempting to automate the discovery of models of a mechanical system, his
group tried adding some "textbook models" to the set of genetic operators.
The results weren't good: the prior knowledge worked too well, causing the
GA to converge onto it so fast that the population never built up
sufficient diversity and quality in other solutions - the kind that might
have helped it escape local maxima. The message seemed to be that prior
knowledge is too powerful - it can 'blind' a search - and that if you must
use it, you have to very aggressively (and artificially) deflate the
fitness of individuals that use the prior knowledge, which is tricky to get
right.
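
To make that concrete, here's a minimal sketch of the failure mode and the
workaround as I understood it, in a toy bit-string GA. Everything in it -
the TEXTBOOK_MODELS, the inject_prior operator, the penalty constant - is
my own hypothetical illustration, not his actual setup:

import random

GENOME_LEN = 32

# Hypothetical "textbook models" standing in for embedded prior knowledge.
TEXTBOOK_MODELS = [
    [1, 0] * (GENOME_LEN // 2),
    [0, 1] * (GENOME_LEN // 2),
]

def raw_fitness(genome):
    # Stand-in objective; any toy landscape would do.
    return sum(genome)

def mutate(genome):
    # Ordinary variation: flip one random bit.
    child = genome[:]
    i = random.randrange(GENOME_LEN)
    child[i] ^= 1
    return child, False          # False: doesn't use prior knowledge

def inject_prior(genome):
    # The prior-knowledge operator: splice a textbook model into the
    # genome. This is the operator that "works too well".
    model = random.choice(TEXTBOOK_MODELS)
    cut = random.randrange(GENOME_LEN)
    return genome[:cut] + model[cut:], True   # True: uses prior knowledge

def deflated_fitness(genome, uses_prior, penalty=0.5):
    # The suggested workaround: aggressively deflate the fitness of
    # individuals built from prior knowledge, so they don't crowd out
    # the rest of the population before diversity can build up.
    f = raw_fitness(genome)
    return f * penalty if uses_prior else f

The whole difficulty seems to live in that penalty constant: too weak and
the textbook models still take over; too strong and the prior knowledge
never gets used at all.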

 

This struck me as relevant to GA-based AGIs that continually build on and
improve a knowledge base. Once an AGI learns very simple initial models of
the world, and then tries to evolve deeper knowledge about harder problems
in the context of that prior learning, its early models may prove too good:
pulling the GA toward poor local maxima that are only minor variations on
the models it learnt in its earliest days.
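
In the continual-learning case there's no clean "uses prior knowledge"
flag, so I imagine the deflation would have to key off similarity to the
models already in the knowledge base. Again a hypothetical sketch of mine
(reusing raw_fitness from above, with a made-up strength parameter):

def hamming_similarity(a, b):
    # Fraction of positions on which two genomes agree.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def kb_deflated_fitness(genome, knowledge_base, strength=2.0):
    # Deflate fitness in proportion to closeness to the nearest
    # previously-learnt model, so minor variations on the early models
    # stop dominating selection.
    f = raw_fitness(genome)
    if not knowledge_base:
        return f
    nearest = max(hamming_similarity(genome, m) for m in knowledge_base)
    return f / (1.0 + strength * nearest)

Same tuning problem as before, of course: strength does all the work, and
there's no obvious principled way to set it.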

 

Does this issue actually crop up in GA-based AGI work? If so, how did you
get around it? If not, do you have any thoughts on what makes AGI systems
special, such that this doesn't happen?

 

-Ben

 



