Mark Waser wrote:
From: "Ben Goertzel" <[EMAIL PROTECTED]>
Sent: Thursday, June 15, 2006 12:17 AM
Subject: Re: [agi] How the Brain Represents Abstract Knowledge


You seem to be confusing Novamente with Richard Loosemore's system...

No, I don't think so . . . . I know that I know nothing about Richard's system :-)

The way this dialogue evolved was:
* Loosemore was saying that he was having trouble tuning the many parameters of his system
* I said that I had had this problem in Webmind, but not in Novamente, due to having a simpler design with fewer tunable parameters
* You complained that Novamente is not sufficiently modular

Funny, my recollection (backed by direct quotes) was that it went like:
* Richard was saying that he was having trouble tuning the many parameters of his system
* I said "I'm confused . . . . Why not have multiple independent instances of the same mechanisms with different local parameters for different processes? Once you uncouple the local parameters from instance to instance, making all the processes happen at once should be no more complicated than making them happen individually in isolation." (Which, by the way, I now think that he IS doing to the extent possible.)
* You said "These processes all have to play together nicely as they are acting on the same data at the same time, and need to benefit from each others' intelligence in order to make cognition happen. Therefore, the parameters of all the cognitive processes need to be "tuned together" -- a tuning that will work for one process in isolation will not necessarily work for that process when it acts in the context of other processes..."


Okay, since I started this debate, can I interject that I am NOT HAVING TROUBLE tuning the many parameters in my model ;-)!

It would be more accurate to say that I am relishing the trouble with the parameters, that having trouble with them is what my entire research paradigm is all about! Just wanted to clear that up.

Ben has put my case himself in the above quote: this is all about making cognition happen by admitting that cognition is the cooperative effect of a cluster of processes that simply cannot be disentangled. What I claim is that AI research (and cognitive science, for that matter) has boxed itself into a corner by bending over backwards to try to force intelligent systems to have decomposable, independent modules. Everyone does this to some extent, although some do it much less than others.

Now, Ben would probably not go as far as I am going, and would not want to imply that all his modules are completely entangled with one another, but I do want it understood that close-quarters entanglement is what my approach is all about.

Why do it this way? (1) I think that all the cognitive science literature points in that direction (even though the cog sci folks might not like to admit that), and (2) I can see how to utterly simplify the things that the cog sci folks are trying to describe IF I build a model of cognition that embraces this kind of rich interconnectedness. I can see, in principle, how to explain an enormously wide variety of cognitive phenomena without having to resort to (what I see as) the baroque complexities of many AI and AGI models.

Many of these models, indeed, seem to grow in complexity as a function of the number of aspects of cognition that they try to embrace: it looks as if each new aspect doesn't quite fit, and as a result provokes an adjustment of the existing mechanism to make it fit. An optimist would say that the AGI architect is just learning about the many subtle things needed for intelligence, and is making sensible, motivated additions to the model...... but a pessimist would say that these are Epicycles!

But of course, all I have so far is a written formalism that purports to show how these many aspects of cognition can be explained within a unified system (and fragmentary implementations that show that some of the aspects do work), but I cannot test such an approach without systematically building and testing a very wide variety of systems. I am caught between a rock and a hard place: I can take the mechanisms in isolation and try to shop them to the world (in which case, as I said, I am open to the charge that when I fixed them up to substitute for the missing "rest of the system", I was just kludging them). Or I can stretch out my research program and build the tools I need to test the system as a whole, and get hammered in the meantime for not actually doing anything that counts. Very tricky.

Richard Loosemore.





