What YKY suggested was to build an AGI on a fixed set of reasoning rules and heuristics, ones that are not pliable or adaptable in light of experience.
I don't think this is viable in practice; I think the system needs to be able to learn how to learn.  Evolution is one example of a dynamic that is able to learn how to learn, but it need not be the only one.

Bateson proposed that we humans can learn, learn how to learn, and learn how to learn how to learn (the latter only over a long period like a decade or so), but not generally any more than that...

So far there are some AI systems that may be classified as "learning how to learn," but only on a simple level --- e.g. a system that uses a GA to search the GA's own parameter space, finding the parameters under which the GA learns best.  Such a program is learning how to learn, but only in a very restricted domain rather than with the generality that humans have.
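A minimal sketch of that restricted sense of "learning how to learn": an outer GA evolves the mutation rate used by an inner GA on a toy one-max problem.  The problem, parameters, and operators here are illustrative choices of mine, not taken from any existing system.

```python
import random

def inner_ga(mutation_rate, generations=30, pop_size=20, seed=0):
    """Inner GA: maximize the number of 1-bits in a 20-bit string."""
    rng = random.Random(seed)
    length = 20
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        # each survivor produces one mutated child
        children = [[1 - b if rng.random() < mutation_rate else b for b in p]
                    for p in survivors]
        pop = survivors + children
    return max(sum(ind) for ind in pop)          # best fitness found

def outer_ga(generations=10, pop_size=8, seed=1):
    """Outer GA: evolve the inner GA's mutation rate ("learning how to learn")."""
    rng = random.Random(seed)
    rates = [rng.uniform(0.0, 0.5) for _ in range(pop_size)]
    for _ in range(generations):
        rates.sort(key=inner_ga, reverse=True)   # fitness of a rate = inner GA result
        survivors = rates[:pop_size // 2]
        # perturb the surviving rates, clipped to [0, 0.5]
        rates = survivors + [min(0.5, max(0.0, r + rng.gauss(0, 0.05)))
                             for r in survivors]
    return rates[0]

best_rate = outer_ga()
print(best_rate, inner_ga(best_rate))
```

The outer loop never touches the bit-strings directly; it only tunes how the inner learner learns, which is exactly why this counts as second-order learning in Bateson's sense, and also why it stays confined to one narrow domain.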

Higher orders of learning, in this sense, come for free with Novamente and any other sufficiently powerful/flexible AI architecture.
 
We have a subtle issue here, and we need to distinguish between two ideas:
 
First, I admit that the AGI's knowledge base can be subject to higher abstraction.  That means the AGI can generalize from experience, and the resulting knowledge can be generalized further, giving rise to highly abstract knowledge (e.g. the rules of logic).  However, this process obviously does not go on ad infinitum.
 
Having (finitely) many levels of abstraction is already sufficient to generate all the knowledge an AGI needs, in my opinion, at least for human-level intelligence.
 
Notice that my idea of multiple abstraction is different from your idea of "learning to learn," which I interpret as applying the system's current learning rules to the knowledge base itself.  Your idea is to build an AGI that can modify its own ways of learning.  This is a very fanciful idea, but it is not the most direct way to build an AGI: instead of building an AGI, you're trying to build something that can learn to become an AGI.  Unfortunately, this approach is computationally inefficient.
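To make my interpretation concrete, here is a toy sketch in which the learning rules are themselves data, so the same kind of update that modifies knowledge can also modify the rules.  All names and the update scheme are hypothetical, invented purely for illustration.

```python
# Knowledge and learning rules both live in ordinary data structures.
knowledge = {"threshold": 5.0}
rules = {"step": 1.0}   # how aggressively first-order learning updates knowledge

def learn(observation):
    """First-order learning: move the stored threshold toward the observation."""
    knowledge["threshold"] += rules["step"] * (observation - knowledge["threshold"])

def meta_learn(error_before, error_after):
    """Second-order learning: the system modifies its own learning rule,
    shrinking the step if an update made things worse, growing it otherwise."""
    rules["step"] *= 0.5 if error_after > error_before else 1.1
```

The point of the toy is structural: `meta_learn` operates on `rules`, not on `knowledge`, which is what "modifying its own ways of learning" means here.  Whether running such a loop is an efficient path to an AGI is exactly the question in dispute.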
 
You seem to think that letting an AGI learn from its environment is superior to programming it by hand.  In reality, learning is not magic: 1. it takes time; 2. it takes supervision (in the form of labeled examples).  Because of these two costs, programming an AGI by hand is not necessarily dumber than building an AGI that can learn.
 
But of course we cannot have a system that is totally rigid. To be practical, we need to have a flexible system that can learn and that can also be programmed.
 
In summary, I think your problem is that you're not focusing on building an AGI efficiently; instead you're fantasizing about how the AGI can improve itself once it is built.  The ability of an AGI to modify itself is not essential to building an AGI efficiently, nor can it help the AGI learn its basic knowledge faster.  Self-modification of an AGI will only happen after it has acquired at least human-level knowledge.  It is just a fantasy that self-modification can speed up the acquisition of basic knowledge.  The difference would be like driving an ordinary car versus a Formula 1 in the city =)   Not to mention that we don't yet possess the tools to build the Formula 1.
 
yky

