YKY,
 
I do think that letting an AGI learn from its environment is superior to hard-wiring knowledge into it.  The latter path has been pursued pretty far in the AI community, and it's become quite clear that it's infeasible due to:
 
a) the huge mass of knowledge that would be required
b) the difficulty in explicitly articulating all the implicit knowledge that we use every day
 
Regarding "labeled examples" being required for learning, you are referring to CS-style supervised learning, which is not the only kind of learning out there.  What I advocate is learning in an environment occupied with other embodied agents, including one or more teachers.  This is a mix of supervised and unsupervised learning.
 
Regarding building an already-intelligent system versus building a system that can learn to be intelligent, I believe that the right approach is a mixture.  What one can usefully build into a proto-AGI system is a somewhat subtle issue that doesn't have one right answer; I have addressed it to an extent in:
 
http://www.goertzel.org/papers/PostEmbodiedAI_June7.htm
 
You say:

"It is just a fantasy that self-modification can speed up the acquisition of basic knowledge."

I disagree if by self-modification you mean adaptive learning of inference-control heuristics.  I agree if you mean rewriting of the basic inference rules or the underlying codebase....
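To sketch what I mean by the former (a toy illustration only, not Novamente's actual design; all names here are invented): keep a utility weight on each fixed inference rule and reinforce the rules whose conclusions turn out to be useful, so rule *selection* adapts with experience even though the rules themselves never change.

    import random

    class InferenceController:
        """Hypothetical sketch: fixed rules, learned control weights."""

        def __init__(self, rules):
            self.rules = rules                  # name -> callable(kb)
            self.weights = {name: 1.0 for name in rules}

        def step(self, kb):
            # Pick a rule stochastically, in proportion to its learned utility.
            names = list(self.rules)
            chosen = random.choices(
                names, weights=[self.weights[n] for n in names])[0]
            return chosen, self.rules[chosen](kb)

        def reward(self, name, useful, lr=0.1):
            # Nudge the chosen rule's weight toward 1 (useful) or 0 (not).
            target = 1.0 if useful else 0.0
            self.weights[name] += lr * (target - self.weights[name])

Nothing here rewrites the codebase or the rules themselves; only the control policy is modified by experience.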
 
I have articulated a complete and consistent AGI design according to my own perspective.  I don't believe it is the only possible one, nor the best possible one; but I believe it would work on a reasonably affordable amount of contemporary hardware if completely implemented, tested and tuned.  If you have an alternative AGI design or specific reasons why you think mine won't work, I'm curious to hear them...
 
-- Ben G
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Yan King Yin
Sent: Friday, September 09, 2005 10:28 AM
To: [email protected]
Subject: Re: [agi] Representing Thoughts

What YKY suggested was to make an AGI based on a fixed set of reasoning rules and heuristics that are not pliable or adaptable based on experience.
I don't think this is viable in practice; I think one's system needs to be able to learn how to learn.  Evolution is one example of a dynamic that is able to learn how to learn, but it need not be the only example.

Bateson proposed that we humans can learn, learn how to learn, and learn how to learn how to learn (the latter only over a long period like a decade or so), but not generally any more than that...

So far there are some AI systems that may be classified as "learning how to learn," but only on a simple level --- e.g. a system that uses a GA to search the GA's own parameter space, to find the parameters that give the GA optimal learning.  Such a program is learning how to learn, but only in a very restricted domain rather than with the generality that humans can achieve.
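A toy version of that kind of program might look like this (purely illustrative; the inner task is OneMax, and the function names are invented): an outer GA evolves a single parameter of an inner GA --- its mutation rate --- using the inner GA's performance as fitness.

    import random

    def inner_ga(mut_rate, n_bits=30, pop=20, gens=25):
        """Run a plain GA on OneMax; return the best fitness reached."""
        population = [[random.randint(0, 1) for _ in range(n_bits)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=sum, reverse=True)
            parents = population[:pop // 2]
            children = []
            while len(children) < pop:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
                # Flip each bit with probability mut_rate.
                children.append([bit ^ (random.random() < mut_rate)
                                 for bit in child])
            population = children
        return max(sum(ind) for ind in population)

    def outer_ga(pop=8, gens=10):
        """Evolve the mutation rate itself: learning how the inner GA learns."""
        rates = [random.uniform(0.0, 0.5) for _ in range(pop)]
        for _ in range(gens):
            rates.sort(key=inner_ga, reverse=True)
            survivors = rates[:pop // 2]
            rates = survivors + [min(0.5, max(0.0, r + random.gauss(0, 0.05)))
                                 for r in survivors]
        return max(rates, key=inner_ga)

    print("evolved mutation rate:", outer_ga())

The outer loop is "learning how to learn," but only over one knob of one algorithm on one problem --- which is exactly the restricted sense meant above.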

Higher orders of learning, in this sense, come for free with Novamente and any other sufficiently powerful/flexible AI architecture.
 
We have a subtle issue here, and we need to distinguish between two ideas:
 
First, I admit that the AGI knowledge base can be subject to higher abstraction.  That means the AGI can generalize from experience, and the resultant knowledge can be further generalized, giving rise to highly abstract knowledge (e.g. the rules of logic).  However, this process obviously does not go on ad infinitum.
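To illustrate (a toy sketch of my own, not a real implementation): generalize pairs of ground facts by keeping what matches and replacing what differs with a variable, then apply the same operator to the resulting rules.

    from itertools import combinations

    def generalize(fact_a, fact_b):
        """Anti-unify two equal-length tuples: keep agreements,
        abstract differences into variables."""
        return tuple(x if x == y else f"?v{i}"
                     for i, (x, y) in enumerate(zip(fact_a, fact_b)))

    level0 = [("cat", "chases", "mouse"),
              ("dog", "chases", "cat"),
              ("dog", "bites", "mailman")]

    # Level 1: generalize over pairs of ground facts,
    # e.g. ('?v0', 'chases', '?v2') -- "something chases something".
    level1 = {generalize(a, b) for a, b in combinations(level0, 2)}

    # Level 2: generalize over the level-1 rules themselves,
    # e.g. ('?v0', '?v1', '?v2') -- the fully abstract relational schema.
    level2 = {generalize(a, b) for a, b in combinations(sorted(level1), 2)}

    print(level1)
    print(level2)

Note that by the second level the schema is already fully abstract, so further applications add nothing --- the process terminates rather than continuing ad infinitum.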
 
Having (finitely) many levels of abstraction is already sufficient, in my opinion, to generate all the knowledge that an AGI needs, at least for human-level intelligence.
 
Notice that my idea of multiple abstraction is different from your idea of "learning to learn", which I interpret as applying the current knowledge rules to the knowledge base itself.  Your idea is to build an AGI that can modify its own ways of learning.  This is a fanciful idea, but it is not the most direct way to build an AGI.  Instead of building an AGI, you're trying to build something that can learn to become an AGI.  Unfortunately, this approach is computationally inefficient.
 
You seem to think that letting an AGI learn from its environment is superior to programming it by hand.  In reality, learning is not magic: (1) it takes time, and (2) it takes supervision (in the form of labeled examples).  Because of these two things, programming an AGI by hand is not necessarily dumber than building an AGI that can learn.
 
But of course we cannot have a system that is totally rigid. To be practical, we need to have a flexible system that can learn and that can also be programmed.
 
In summary, I think your problem is that you're not focusing on building an AGI efficiently.  Instead you're fantasizing about how the AGI can improve itself once it is built.  The ability of an AGI to modify itself is not essential to building an AGI efficiently, nor can it help the AGI learn its basic knowledge faster.  Self-modification of an AGI will only happen after it has acquired at least human-level knowledge.  It is just a fantasy that self-modification can speed up the acquisition of basic knowledge.  The difference would be like driving an ordinary car versus a Formula 1 around the city =)   Not to mention that we don't yet possess the tools to build the Formula 1.
 
yky

