On Feb 7, 2008 11:53 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> And I think it's clear, if only in a very broad way, how the human mind
> achieves this (which I'll expound in more detail another time) - it has what
> you could call a "general activity language" - and learns every skill *as an
> example of a general activity*. Every particular skill is learned in a
> broad, general way in terms of concepts that can be and are applied to all
> skills/activities (as well as more skill-specific terminology). Very
> general, "modular" concepts.
My approach to the whole problem of AGI is to ask "What is the hard bit, that no one has really gotten right at all?" I completely agree with the problem you pose here - I think building a representation of the world that can be generalised and abstracted is the crux of AGI. Neural networks and other machine learning methods like decision trees don't have representations which support the kind of operation we're talking about.

How can we solve this? I think it requires making a really simple associative language of sorts. We need something in which I can trivially represent things like:

"A is somehow related to B"
"A and B have some common properties. I will call these common properties P and make A and B specialisations of P"
"C is similar to A and B somehow. C might have the properties of P."

If the representation just makes a graph of links between things (objects/concepts/cortical columns), then finding the common links between two objects doesn't actually seem that hard a problem. I think.

-J

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=94603346-a08d2f
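The three statements in the message above can be sketched as a toy program. This is a minimal illustration, not anything from the thread: the class name, method names, and the bird example are all hypothetical, and "property" is modelled simply as any shared neighbour in an undirected link graph.

```python
from collections import defaultdict

class AssociativeGraph:
    """Toy associative representation: concepts are nodes,
    undirected links record 'is somehow related to'."""

    def __init__(self):
        self.links = defaultdict(set)

    def relate(self, a, b):
        # "A is somehow related to B"
        self.links[a].add(b)
        self.links[b].add(a)

    def common_links(self, a, b):
        # The links (here: candidate common properties) shared by A and B.
        return self.links[a] & self.links[b]

    def generalise(self, a, b, name):
        # "A and B have some common properties P; make A and B
        # specialisations of P." P becomes a new node carrying the
        # shared links, with A and B linked to it.
        shared = self.common_links(a, b)
        for prop in shared:
            self.relate(name, prop)
        self.relate(a, name)
        self.relate(b, name)
        return shared

    def might_have(self, c, p):
        # "C is similar to A and B somehow. C might have the
        # properties of P." Here: C overlaps with P's links.
        return bool(self.links[c] & self.links[p])

g = AssociativeGraph()
g.relate("sparrow", "wings")
g.relate("sparrow", "flies")
g.relate("crow", "wings")
g.relate("crow", "flies")
g.generalise("sparrow", "crow", "bird")   # P = "bird", carrying wings/flies
g.relate("pigeon", "wings")
print(g.might_have("pigeon", "bird"))     # True: pigeon shares 'wings'
```

Finding common links really is cheap in this form - a set intersection per pair of nodes - though deciding *which* shared links are meaningful properties is the part the sketch glosses over.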
