Leitl wrote:
> > > In the language of Gregory Bateson (see his book "Mind and Nature"),
> > > you're suggesting to do away with "learning how to learn" --- which is
> > > not at all a workable idea for AGI.
>
> Learning to evolve by evolution is sure a workable idea. It's also
> sufficient for an AGI: look into the mirror.
Of course I agree with that... What YKY suggested was to make an AGI based on a fixed set of reasoning rules and heuristics that are not pliable and adaptable based on experience. I don't think this is viable in practice; the system needs to be able to learn how to learn. Evolution is one example of a dynamic that can learn how to learn, but it need not be the only one.

Bateson proposed that we humans can learn, learn how to learn, and learn how to learn how to learn (the last only over a long period, a decade or so), but not generally any further than that.

So far there are some AI systems that might be classified as "learning how to learn," but only on a simple level --- e.g., a system that uses a GA to search the GA's own parameter space, seeking the parameter settings that give the GA optimal learning performance. Such a program is learning how to learn, but only in a very restricted domain rather than with the generality that humans have.

Higher orders of learning, in this sense, come for free with Novamente and any other sufficiently powerful/flexible AI architecture.

-- Ben G
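For concreteness, the "GA tuning a GA" idea mentioned above can be sketched in a few lines. This is a toy illustration, not Novamente code: the inner GA solves the classic OneMax problem (maximize the number of 1-bits), and the outer "meta" GA evolves the inner GA's mutation rate. All function names and parameter choices here are my own illustrative assumptions.

```python
import random

def inner_ga(mutation_rate, generations=30, pop_size=20, rng=None):
    """Inner GA: evolve 20-bit strings toward all-ones (OneMax).
    Returns the best fitness found (max possible is 20)."""
    rng = rng or random.Random(0)
    n = 20  # genome length
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    fitness = lambda g: sum(g)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = [
            [1 - b if rng.random() < mutation_rate else b for b in p]
            for p in parents
        ]
        pop = parents + children
    return max(fitness(g) for g in pop)

def meta_ga(meta_generations=10, meta_pop=6, rng=None):
    """Outer GA: evolve the inner GA's mutation rate.
    The 'fitness' of a mutation rate is the inner GA's best result
    when run with that rate -- i.e., learning how to learn."""
    rng = rng or random.Random(1)
    score = lambda r: inner_ga(r, rng=random.Random(42))  # fixed seed: fair comparison
    rates = [rng.uniform(0.0, 0.5) for _ in range(meta_pop)]
    for _ in range(meta_generations):
        survivors = sorted(rates, key=score, reverse=True)[: meta_pop // 2]
        # refill the meta-population by perturbing the surviving rates
        rates = survivors + [
            min(0.5, max(0.0, r + rng.gauss(0, 0.05))) for r in survivors
        ]
    return max(rates, key=score)

best_rate = meta_ga()
```

The restriction Ben notes is visible here: the outer loop only ever adapts one knob of one fixed learning algorithm, so the "learning how to learn" happens in a very narrow, pre-specified space.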
