You can implement a new workaround to bootstrap your organisms past
each local maximum, like catalyzing the transition from water to land
over and over. I find, though, that this leads to cheats that narrow
the search in unpredictable ways. This problem comes up again and again.

Maybe some kind of drift in the parameters or fitness function would
destabilize deeply-converged positions. I've thought before how useful
it would be to have an AI tuning my GA. >_o
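
To make the drift idea concrete, here is a toy sketch in Python. Everything in it (the list-of-floats genome, the `drift_rate` parameter, the truncation-selection loop) is an illustrative assumption, not anyone's actual system: the target the population is scored against random-walks each generation, so a deeply-converged population keeps getting nudged off its peak.

```python
import random

def fitness(genome, target):
    # Higher is better: negative squared distance to the (moving) target.
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=30, genome_len=5, generations=50, drift_rate=0.05, seed=0):
    # Toy GA with fitness drift; all parameters are illustrative.
    rng = random.Random(seed)
    target = [0.0] * genome_len
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Drift: random-walk the target, so "converged" is a moving notion.
        target = [t + rng.gauss(0, drift_rate) for t in target]
        # Truncation selection: keep the better half.
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill with mutated copies of random survivors.
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return pop, target

pop, target = evolve()
```

The point of the sketch is only that the fitness landscape itself moves; whether a real system's converged solutions would actually be dislodged by drift of this kind is exactly the open question.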


On 9/8/08, Benjamin Johnston <[EMAIL PROTECTED]> wrote:
>
>> I am curious about the result you mention. You say that the
>> genetic algorithm stopped searching very quickly. Why? It sounds
>> like they want the search to go longer, but can't they just
>> tell it to run longer if they want it to?
>
> They found that the system converged too quickly. The initial knowledge
> quickly dominated the population, and then successive generations showed
> little improvement.
>
>> And to reduce convergence, can't they just increase the
>> level of mutation? Do you know if they tried this, and if
>> so, why it wasn't sufficient?
>
> The quality of the solutions found using prior knowledge was such that any
> random mutation was almost always inferior. As I understood it, to get out
> of the local maxima that prior knowledge gets a GA stuck in, you really need
> some reasonable quality solutions so that larger structures of a good
> solution can be introduced via cross-over. Any given random mutation was
> usually detrimental - real progress depended on a child being able to
> combine complex substructures from two different parents.
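
(A toy illustration of that contrast, with made-up list genomes: one-point crossover hands the child a whole contiguous substructure from each parent in one step, while a point mutation perturbs only a single gene of an already-tuned genome:)

```python
import random

def crossover(parent_a, parent_b, rng):
    # One-point crossover: the child inherits a contiguous block (prefix)
    # from one parent and the remainder from the other, so whole
    # substructures transfer intact.
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(genome, rng, sigma=0.5):
    # A single random point mutation: perturb one gene.
    child = list(genome)
    i = rng.randrange(len(child))
    child[i] += rng.gauss(0, sigma)
    return child
```

Nothing here comes from the system being discussed; it is just the minimal version of "crossover moves blocks, mutation moves one gene" that the argument rests on.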
>
>> Other than that, I think there are several things to try. First,
>> it seems more natural to me to put the textbook solutions in the
>> initial population, rather than coding them as genetic
>> operations. Second, if they are used as operations, I'd try
>> splitting them up further (just to reduce the bias).
>
> Yes, those are good points - I have been wondering about that, but I didn't
> have the chance to ask those questions. Presumably one problem is that if
> you just put prior knowledge in the initial population, unmatched to the
> system parameters, then the textbook models would be unreasonably bad; they
> would quickly be eliminated and there would be little chance for them to be
> reintroduced later into the population. One solution to this might then be
> to have a fixed 'immortal' population of textbook models that can be crossed
> with the rest of the population at any time.
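
(As a sketch of that immortal-pool idea; the function name, the `p_immortal` probability, and the parent-picking scheme are all invented for illustration:)

```python
import random

def pick_parents(population, immortals, rng, p_immortal=0.2):
    """Pick two crossover parents. The immortal textbook pool never
    competes for survival, but with probability p_immortal it supplies
    one parent, so textbook substructures can re-enter at any time."""
    a = rng.choice(population)
    if rng.random() < p_immortal:
        b = rng.choice(immortals)
    else:
        b = rng.choice(population)
    return a, b
```

The design point is that the textbook models are never scored or culled; they only ever act as crossover donors, so being "unreasonably bad" under the current parameters can't eliminate them.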
>
> Another possibility could be to use island-GA, with prior knowledge 'banned'
> from some of the islands.
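
(A toy sketch of the island variant; the even/odd seeding rule, the ring migration, and the genome format are all assumptions for illustration, not a description of any actual setup:)

```python
import random

def make_islands(n_islands, island_size, genome_len, textbook, rng):
    # Several sub-populations; prior knowledge is seeded only into
    # even-numbered islands, so the odd ones are 'banned' from it and
    # must explore from scratch.
    islands = []
    for i in range(n_islands):
        island = [[rng.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(island_size)]
        if i % 2 == 0:
            island[0] = list(textbook)
        islands.append(island)
    return islands

def migrate(islands, rng):
    # Ring migration: each island receives a copy of one random
    # individual from the previous island, so good material spreads
    # slowly without any island being overrun at once.
    emigrants = [rng.choice(island) for island in islands]
    for i, island in enumerate(islands):
        island.append(emigrants[i - 1])
    return islands
```
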
>
> Anyway, I'm sure there must be lots of different ways that sound like they
> might solve the problem. But, which (or whether any) ones actually work in
> practice is another matter. And that's why I'm curious to know whether AGI
> researchers have encountered this problem, and what they have done about
> it...
>
> -Ben
>
>
>
>
> -------------------------------------------
> agi
> Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription:
> https://www.listbox.com/member/?&;
> Powered by Listbox: http://www.listbox.com
>

