RE: [agi] WordNet and NARS

2004-02-04 Thread kevinc
Ben said: However, we need to remember that the knowledge in an AGI should be *experientially grounded*. . . . but it needs to turn this knowledge into grounded knowledge by crosslinking a decent fraction of it with perceptual and procedural patterns . . . Can a color-blind man
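
A minimal sketch, in Python, of what such crosslinking might look like. Only the WordNet calls (NLTK's nltk.corpus.wordnet interface) are real; ConceptNode and the pattern links are invented for illustration and are not Novamente or NARS structures.

    # Hypothetical sketch: a symbolic WordNet concept becomes (partially)
    # grounded once it is crosslinked with perceptual/procedural patterns.
    from dataclasses import dataclass, field
    from nltk.corpus import wordnet as wn

    @dataclass
    class ConceptNode:
        name: str
        gloss: str                                      # purely linguistic knowledge
        percepts: list = field(default_factory=list)    # links to sensory patterns
        procedures: list = field(default_factory=list)  # links to action patterns

        def grounded(self) -> bool:
            # Counts as grounded once some links point at experiential data.
            return bool(self.percepts or self.procedures)

    syn = wn.synsets('red')[0]                       # symbolic definition only
    red = ConceptNode(name=syn.name(), gloss=syn.definition())
    red.percepts.append(('hue_histogram', [0.9, 0.05, 0.05]))  # toy sensory datum
    print(red.grounded())                            # True once percepts exist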

Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Bill Hibbard
On Wed, 21 Jan 2004, Eric Baum wrote: New Book: What is Thought? Eric B. Baum What a great book.

Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Philip Sutton
Thanks Bill for the Eric Baum reference. Deep thinker that I am, I've just read the book review on Amazon, and that has oriented me to some of the key ideas in the book (I hope!), so I'm happy to start speculating without having actually read the book. (See the review below.) It seems

RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel
I agree that not all knowledge in a mind needs to be grounded. However, I think that a mind needs to have a LOT of grounded knowledge, in order to learn to reason usefully. It can then transfer some of the thinking-ability (and some of the concrete relationships) learned on the grounded

RE: [agi] What is Thought? Book announcement

2004-02-04 Thread Ben Goertzel
Philip, I have mixed feelings on this issue (filling an AI mind with knowledge from DBs). I'd prefer to start with a tabula rasa AI and have it learn everything via sensorimotor experience -- and only LATER experiment with feeding DB knowledge directly into its knowledge-store
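
To make the alternative concrete: a Python sketch of what "feeding DB knowledge directly into the knowledge-store" might look like for WordNet. The (frequency, confidence) pair is standard NARS truth-value notation; the store, the import function, and the low starting confidence for ungrounded facts are illustrative assumptions, not Novamente code.

    # Import WordNet hypernym links as NARS-style inheritance statements.
    from nltk.corpus import wordnet as wn

    knowledge_store = []   # (subject, predicate, frequency, confidence)

    def import_hypernyms(word, confidence=0.5):
        # Ungrounded DB knowledge enters with modest confidence; later
        # experiential confirmation would revise it upward.
        for syn in wn.synsets(word):
            for hyper in syn.hypernyms():
                knowledge_store.append((syn.name(), hyper.name(), 1.0, confidence))

    import_hypernyms('dog')
    # e.g. ('dog.n.01', 'canine.n.02', 1.0, 0.5)  ~  "dog --> canine <1.0, 0.5>"
    print(knowledge_store[:3])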

Re: [agi] What is Thought? Book announcement

2004-02-04 Thread Bill Hibbard
It seems that Baum is arguing that biological minds are amazingly quick at making sense of the world because, as a result of evolution, the structure of the brain is set up with inbuilt limitations/assumptions based on likely possibilities in the real world - thus cutting out vast areas for
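
The "cutting out vast areas" point can be made concrete with a little counting, sketched here in Python. Over n binary features there are 2^(2^n) possible boolean concepts, but only 3^n conjunctions of literals; the conjunctive bias is a textbook illustration, not Baum's own example.

    # Built-in assumptions collapse the hypothesis space a learner must search.
    for n in (2, 3, 5):
        unrestricted = 2 ** (2 ** n)     # all boolean functions of n features
        conjunctions = 3 ** n            # each feature: present, negated, ignored
        print(f"n={n}: {unrestricted} concepts vs {conjunctions} conjunctions")
    # At n=10 the unrestricted space is a 309-digit number; 3**10 is just 59049.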

RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel
Philip, I think it's important for a mind to master SOME domain (preferably more than one), because advanced and highly effective cognitive schemata are only going to be learned in domains that have been mastered. These cognitive schemata can then be applied in other domains as well, which are

[agi] Simulation and cognition

2004-02-04 Thread Ben Goertzel
Philip, You and I have chatted a bit about the role of simulation in cognition, in the past. I recently had a dialogue on this topic with a colleague (Debbie Duong), which I think was somewhat clarifying. Attached is a message I recently sent to her on the topic. -- ben Debbie, Let's

RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben, So, I am skeptical that an AI can really think effectively in ANY domain unless it has done a lot of learning based on grounded knowledge in SOME domain first; because I think advanced cognitive schemata will evolve only through learning based on grounded knowledge... OK. I think

RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel
So my guess is that the fastest (and still effective) path to learning would be: - *first* a partially grounded experience - *then* a fully grounded mastery - then a mixed learning strategy of grounded and non-grounded as need and opportunity dictate Cheers, Philip Well, this
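
Philip's three-stage ordering reads like a training curriculum, which might be sketched in Python as a phase schedule over data sources. The phase names, mixture weights, and sources below are invented for illustration.

    import random

    # Hypothetical curriculum: mixture of ungrounded DB facts vs grounded
    # sensorimotor experience, shifting across the three proposed phases.
    CURRICULUM = [
        ("partially_grounded", {"db_facts": 0.5, "sensorimotor": 0.5}),
        ("fully_grounded",     {"db_facts": 0.0, "sensorimotor": 1.0}),
        ("mixed",              {"db_facts": 0.3, "sensorimotor": 0.7}),
    ]

    def next_source(mix, rng):
        # Sample a training source according to the current phase's mixture.
        sources, weights = zip(*mix.items())
        return rng.choices(sources, weights=weights, k=1)[0]

    rng = random.Random(0)
    for phase, mix in CURRICULUM:
        print(phase, [next_source(mix, rng) for _ in range(4)])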

Re: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben, What you said to Debbie Duong sounds intuitively right to me. I think that most human intuition would be inferential rather than a simulation. But it seems that higher primates store a huge amount of data on the members of their clan - so my guess is that we do a lot of simulating of
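
The inference-vs-simulation distinction can be illustrated with a toy contrast in Python: inference jumps straight from a stored association, while simulation steps a small internal model of the clan-mate forward. The rules and the agent model are invented for illustration.

    # Two ways to predict what another agent will do.
    RULES = {("hungry", "sees_food"): "eats"}   # compiled associations

    def predict_by_inference(state, cue):
        # Fast: one lookup, no model of the other mind.
        return RULES.get((state, cue), "unknown")

    def predict_by_simulation(agent_model, cue, steps=3):
        # Slower: roll the stored model of the clan-mate forward in time.
        state = dict(agent_model)
        for _ in range(steps):
            if cue == "sees_food" and state["hunger"] > 0.5:
                return "eats"
            state["hunger"] += 0.2              # hunger grows each tick
        return "waits"

    print(predict_by_inference("hungry", "sees_food"))          # fast lookup
    print(predict_by_simulation({"hunger": 0.2}, "sees_food"))  # slower rollout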

RE: [agi] WordNet and NARS

2004-02-04 Thread Philip Sutton
Hi Ben, Well, this appears to be the order we're going to do for the Novamente project -- in spite of my feeling that this isn't ideal -- simply due to the way the project is developing via commercial applications of the half-completed system. And, it seems likely that the initial

RE: [agi] Simulation and cognition

2004-02-04 Thread Ben Goertzel
What you said to Debbie Duong sounds intuitively right to me. I think that most human intuition would be inferential rather than a simulation. But it seems that higher primates store a huge amount of data on the members of their clan - so my guess is that we do a lot of simulating of the

RE: [agi] Simulation and cognition

2004-02-04 Thread Philip Sutton
Hi Ben, Maybe we do simulate a *bit* more with outgroups than I first thought - but we do it using caricature stereotypes based on *ungrounded* data - i.e. we refuse to use grounded data (from our ingroup), perhaps, since that would make these outgroup people uncomfortably like us.

RE: [agi] WordNet and NARS

2004-02-04 Thread Yan King Yin
From: Ben Goertzel [EMAIL PROTECTED] Well, this appears to be the order we're going to do for the Novamente project -- in spite of my feeling that this isn't ideal -- simply due to the way the project is developing via commercial applications of the half-completed system. And, it seems likely

RE: [agi] WordNet and NARS

2004-02-04 Thread Ben Goertzel
Well, this appears to be the order we're going to do for the Novamente project -- in spite of my feeling that this isn't ideal -- simply due to the way the project is developing via commercial applications of the half-completed system. And, it seems likely that the initial partially