On 9/9/05, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > > Leitl wrote:
> > > > In the language of Gregory Bateson (see his book "Mind and Nature"),
> > > > you're suggesting to do away with "learning how to learn" --- which is
> > > > not at all a workable idea for AGI.
> >
> > Learning to evolve by evolution is sure a workable idea. It's
> > also sufficient for an AGI: look into the mirror.
>
> Of course I agree with that...
>
> What YKY suggested was to make an AGI based on a fixed set of reasoning
> rules and heuristics that are not pliable and adaptable based on
> experience. I don't think this is viable in practice; I think one's
> system needs to be able to learn how to learn. Evolution is one example
> of a dynamic that is able to learn how to learn, but it need not be the
> only example.
Does evolution have the lowest level of inference that you talked about? Or would it be better characterised as self-modifying (e.g. crossover that can alter the mechanics of crossover)?

Will Pearson
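The "crossover that can alter the mechanics of crossover" idea corresponds to what the evolutionary-computation literature calls self-adaptation: the parameters of the variation operator are carried in the genome and evolve along with it. A minimal sketch (all names and the OneMax-style toy task are illustrative, not from the thread) might look like this, with each individual's mutation rate itself subject to mutation:

```python
import random

random.seed(0)

GENOME_LEN = 20  # toy task: fitness = number of 1-bits (OneMax)

def make_individual():
    # Each genome carries its own mutation rate: part of the
    # "mechanics" of variation is itself evolved (self-adaptation).
    return {
        "bits": [random.randint(0, 1) for _ in range(GENOME_LEN)],
        "rate": random.uniform(0.01, 0.5),
    }

def fitness(ind):
    return sum(ind["bits"])

def mutate(ind):
    # First perturb the mutation rate itself (log-normal step, in the
    # style of evolution strategies), then apply the new rate to the
    # problem genome.
    rate = ind["rate"] * (2 ** random.gauss(0, 0.5))
    rate = min(max(rate, 0.001), 0.5)
    bits = [int(b ^ (random.random() < rate)) for b in ind["bits"]]
    return {"bits": bits, "rate": rate}

def evolve(pop_size=30, generations=60):
    pop = [make_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # simple truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), round(best["rate"], 3))
```

Whether this counts as "learning how to learn" in Bateson's sense, or merely as a fixed meta-level one step up, is exactly the question at issue: the rate evolves, but the rule for changing the rate is itself hard-coded here.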
