I do think that letting an AGI learn from its environment is superior to hard-wiring knowledge into it. The latter path has been pursued pretty far in the AI community and it's become quite clear that it's infeasible due to:
a) the huge mass of knowledge that would be required
b) the difficulty of explicitly articulating all the implicit knowledge that we use every day
Sorry about the confusion -- I actually agree that an AGI should learn from its environment. What I disagree with is "learning to learn", which I suspect is nonessential.
Designing a learning system that can absorb the vast and irregular body of real-world knowledge is our primary goal. The solution will likely combine learning and direct programming. My second point is that direct programming is probably more efficient than learning.
Regarding "labeled examples" being required for learning, you are referring to CS-style supervised learning, which is not the only kind of learning out there. What I advocate is learning in an environment occupied with other embodied agents, including one or more teachers. This is a mix of supervised and unsupervised learning.
The problem with unsupervised learning is that the resulting concepts do not have labels. For example, an AGI may learn to recognize trees, cars, people, etc., but those concepts have no names; we don't know what they refer to, and so they are useless. Eventually we have to manually label those concepts as "tree", "car", "human", etc. So unsupervised learning seems less efficient (a point you have made yourself some time ago).
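To make this concrete, here is a toy sketch (my own illustration, using off-the-shelf k-means on made-up data; it is not anyone's actual AGI design). The unsupervised learner discovers three clusters entirely on its own, but they come out as anonymous integer IDs; the names still have to be supplied from outside, e.g. by a teacher pointing at one example of each:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "percepts": three kinds of objects in a 2-D feature space.
trees  = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
cars   = rng.normal(loc=[3.0, 0.0], scale=0.3, size=(50, 2))
people = rng.normal(loc=[1.5, 3.0], scale=0.3, size=(50, 2))
X = np.vstack([trees, cars, people])

# Unsupervised step: the learner partitions the percepts by itself...
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# ...but all it has is arbitrary cluster IDs 0, 1, 2. The grounding step --
# attaching a word to each anonymous cluster -- must come from outside:
names = {}
for word, example in [("tree", trees[0]), ("car", cars[0]), ("person", people[0])]:
    names[int(km.predict(example.reshape(1, -1))[0])] = word

print(names)   # e.g. {1: 'tree', 0: 'car', 2: 'person'} -- the IDs mean nothing by themselves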
By the way, I don't see the merit of letting several dumb AGIs interact with each other (rather than letting them interact with the real world or with human teachers).
Regarding building an already-intelligent system versus building a system that can learn to be intelligent: I believe the right approach is a mixture. What one can usefully build into a proto-AGI system is a somewhat subtle issue that doesn't have one right answer; I have addressed that to an extent in:
I think the proto-AGI is like a container of knowledge, with the ability to abstract knowledge from experience. Then we fill this proto-AGI with real-world knowledge (via experience).
You say"It is just a fantasy that self-modification can speed up the acquisition of basic knowledge."but I disagree if by self-modification you mean adaptive learning of inference control heuristics. I agree if by self-modification you mean rewriting of the basic inference rules or the underlying codebase....
I think the reasoning process of the AGI can be directly programmed, so we can save the trouble of letting the AGI learn it. "Learning to infer" is likely to be very time-consuming; we should avoid it at all costs.
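For instance, the entire inference procedure can be ordinary code written once by hand. A toy sketch (mine, with made-up predicates, not a fragment of any real system) of a directly programmed backward chainer over Horn rules:

FACTS = {"bird(tweety)", "small(tweety)"}
RULES = [
    # (head, [body...]) read as: head holds if every body item holds.
    ("canfly(tweety)", ["bird(tweety)", "not_penguin(tweety)"]),
    ("not_penguin(tweety)", ["small(tweety)"]),
]

def prove(goal, depth=0):
    """True iff goal follows from FACTS via RULES (depth-limited)."""
    if depth > 10:
        return False
    if goal in FACTS:
        return True
    return any(head == goal and all(prove(b, depth + 1) for b in body)
               for head, body in RULES)

print(prove("canfly(tweety)"))   # True -- no learning happened anywhere

Nothing here was learned; the reasoning machinery is just code, which is my point.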
I have articulated a complete and consistent AGI design according to my own perspective. I don't believe it is the only possible one, nor the best possible one; but I believe it would work on a reasonably affordable amount of contemporary hardware if completely implemented, tested and tuned. If you have an alternative AGI design or specific reasons why you think mine won't work, I'm curious to hear them...
I'm still reading your book chapters. Your theory seems rather complicated, and I don't fully understand it yet. I'm trying to suggest some ways to simplify it. As you surely understand, simplicity is critically important if we are to engineer an AGI.
yky
