Hi, I am curious about the result you mention. You say that the genetic algorithm stopped its search very quickly. Why? It sounds like they want the search to go longer, but couldn't they just tell it to run longer if they want it to? And to reduce premature convergence, couldn't they simply increase the mutation rate? Do you know if they tried this, and if so, why it wasn't sufficient?
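(To be concrete about what I mean by "increase the level of mutation" -- this is just a toy sketch, not their actual setup; the bit-string representation and the rates are invented for illustration. The idea is that a higher per-locus mutation rate keeps injecting variation, which slows the takeover of any one solution:)

```python
import random

# Toy GA fragment: the only knob of interest is MUTATION_RATE.
# Raising it (e.g. 0.05 -> 0.2) injects diversity each generation,
# which delays convergence onto any single dominant solution.

MUTATION_RATE = 0.05  # hypothetical default; turn this up to slow convergence

def mutate(genome, rate=MUTATION_RATE):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def diversity(population):
    """Fraction of loci that are not yet fixed across the population.

    When this hits 0.0 the population has fully converged and no
    amount of crossover alone can produce anything new.
    """
    n = len(population[0])
    varying = sum(1 for i in range(n)
                  if len({g[i] for g in population}) > 1)
    return varying / n
```

Watching `diversity()` over generations would show directly whether mutation is high enough to keep the search from collapsing early.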
Other than that, I think there are several things to try. First, it seems more natural to me to put the textbook solutions in the initial population rather than coding them as genetic operators. Second, if they are used as operators, I'd try splitting them up further (just to reduce the bias).

Disclaimer: I do not consider myself an expert, as I am still an undergraduate.

--Abram

On Sun, Sep 7, 2008 at 8:55 PM, Benjamin Johnston <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I have a general question for those (such as Novamente) working on AGI
> systems that use genetic algorithms as part of their search strategy.
>
> A GA researcher recently explained to me some of his experiments in
> embedding prior knowledge into systems. For example, when attempting to
> automate the discovery of models of a mechanical system, they tried adding
> some "textbook models" to the set of genetic operators. The results weren't
> good – the prior knowledge worked too well, causing the GA to converge too
> fast onto the prior knowledge… so fast that there wasn't time for the GA to
> build up sufficient diversity and quality in other solutions that might have
> helped it escape the local maxima. The message seemed to be that prior
> knowledge is too powerful – it can 'blind' a search – and that if you must
> use it, you have to very aggressively (and artificially) deflate the
> fitness of instances that use prior knowledge (and this is tricky to get
> right).
>
> This struck me as relevant to GA-based AGIs that continually build on and
> improve a knowledge base. Once an AGI learns very simple initial models of
> the world, if it then tries to evolve deeper knowledge about more difficult
> problems (in the context of its prior learning), then its initial
> models may prove to be too good: forcing the GA to converge on poor local
> maxima that represent only minor variations on the initial models it learnt
> in its earliest days.
> Does this issue actually crop up in GA-based AGI work? If so, how did you
> get around it? If not, would you have any comments about what makes AGI
> special so that this doesn't happen?
>
> -Ben
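To make my initial-population suggestion above concrete, here is a hypothetical sketch (the genome representation, population size, and seeding fraction are all invented for illustration). The idea is to seed only a small fraction of generation 0 with the textbook solutions and fill the rest randomly, so the prior knowledge is available to selection and crossover without dominating the population from the start:

```python
import random

# Hypothetical sketch of "prior knowledge in the initial population":
# seed a small fraction with textbook solutions, fill the rest randomly.
# Keeping SEED_FRACTION low is the point -- the textbook models are
# present, but cannot take over generation 0 on sheer numbers.

GENOME_LEN = 16
SEED_FRACTION = 0.1  # e.g. 10% textbook, 90% random (made-up value)

def random_genome(n=GENOME_LEN):
    return [random.randint(0, 1) for _ in range(n)]

def initial_population(size, textbook_solutions, seed_fraction=SEED_FRACTION):
    """Mix a few copies of the textbook solutions into a random population."""
    n_seeded = max(1, int(size * seed_fraction)) if textbook_solutions else 0
    # Copy each seed (list(...)) so mutation later can't alias the originals.
    seeded = [list(textbook_solutions[i % len(textbook_solutions)])
              for i in range(n_seeded)]
    filler = [random_genome() for _ in range(size - n_seeded)]
    return seeded + filler
```

Compared with encoding the textbook models as operators, this keeps them subject to ordinary selection pressure: if they really are near a local maximum, crossover with the random individuals can still carry the search elsewhere.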
