But of course, all I have so far is a written formalism that purports to show how these many aspects of cognition can be explained within a unified system (and fragmentary implementations that show that some of the aspects do work) . . . .

That's a lot. Would you be willing to share any of it on a "here's where I am now" basis . . . . ? Many (if not most) of us are quite willing not to hammer people who realize that they are still in the throes (but think that they are getting somewhere) -- although we may be a bit eager to over-kibitz at times when we think that we have something helpful to say (even if we don't have a clue in reality :-).

Or I can stretch out my research program and build the tools I need to test the system as a whole, and get hammered in the meantime for not actually doing anything that counts.

I think that this is the approach that the first person to reach AGI is going to take. That they are going to quietly build their tools until there are enough pieces lying around that they can just slap them into their framework, tune them a bit, and go. Or rather, what I REALLY believe is that they will do this and realize that they are STILL missing a whole layer of abstraction (or two or three), but still be way far ahead of the people who are trying to essentially program an AGI in assembly language AND have a rational foundation to continue experimenting on and building from.

Personally, though, if you're not willing to share your work on your tools (or, at least, the lessons you've learned), I am willing to be obnoxious and hammer you for that . . . . :-)

       Mark

----- Original Message ----- From: "Richard Loosemore" <[EMAIL PROTECTED]>
To: <[email protected]>
Sent: Thursday, June 15, 2006 2:57 PM
Subject: [agi] Not having trouble with parameters! WAS [Re: How the Brain Represents Abstract Knowledge]


Mark Waser wrote:
From: "Ben Goertzel" <[EMAIL PROTECTED]>
Sent: Thursday, June 15, 2006 12:17 AM
Subject: Re: [agi] How the Brain Represents Abstract Knowledge


You seem to be confusing Novamente with Richard Loosemore's system...

No, I don't think so . . . . I know that I know nothing about Richard's system :-)

The way this dialogue evolved was:
* Loosemore was saying that he was having trouble tuning the many parameters of his system
* I said that I had had this problem in Webmind, but not in Novamente due to having a simpler design with fewer tunable parameters
* You complained that Novamente is not sufficiently modular

Funny, my recollection (backed by direct quotes) was that it went like
* Richard was saying that he was having trouble tuning the many parameters of his system * I said "I'm confused . . . . Why not have multiple independent instances of the same mechanisms with different local parameters for different processes? Once you uncouple the local parameters from instance to instance, making all the processes happen at once should be no more complicated than making them happen individually in isolation." (Which, by the way, I now think that he IS doing to the extent possible.) * You said "These processes all have to play together nicely as they are acting on the same data at the same time, and need to benefit from each others' intelligence in order to make cognition happen. Therefore, the parameters of all the cognitive processes need to be "tuned together" -- a tuning that will work for one process in isolation will not necessarily work for that process when it acts in the context of other processes..."


Okay, since I started this debate, can I interject that I am NOT HAVING TROUBLE tuning the many parameters in my model ;-)!

It would be more accurate to say that I am relishing the trouble with the parameters, that having trouble with them is what my entire research paradigm is all about! Just wanted to clear that up.

Ben has put my case himself in the above quote: this is all about making cognition happen by admitting that cognition is the cooperative effect of a cluster of processes that simply cannot be disentangled. What I claim is that AI research (and cognitive science, for that matter) has boxed itself into a corner by bending over backwards to try to force intelligent systems to have decomposable, independent modules. Everyone does this to some extent, although some do it much less than others.

Now, Ben would probably not go as far as I am going, and would not want to imply that all his modules are completely entangled with one another, but I do want it understood that close-quarters entanglement is what my approach is all about.
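As a toy illustration of that entanglement -- a sketch of my own construction, not code from my model or anyone else's -- consider two processes acting on the same shared state. A parameter value that behaves well for one process running alone behaves quite differently once the other process runs alongside it, so the two must be tuned together:

```python
# Hypothetical toy: a "forgetting" process and a "reinforcement"
# process acting on one shared memory trace. All values invented.

def run(memory, decay, boost, steps=10, together=True):
    for _ in range(steps):
        memory = memory * (1.0 - decay)  # forgetting process
        if together:
            memory = memory + boost      # reinforcement process
    return memory

# In isolation, decay=0.5 drives the trace rapidly toward zero...
alone = run(1.0, decay=0.5, boost=0.2, together=False)

# ...but with reinforcement also acting, the very same decay setting
# instead settles the trace near boost/decay = 0.4. The "right" decay
# cannot be chosen without knowing what else touches the memory.
joint = run(1.0, decay=0.5, boost=0.2, together=True)
```

With only two processes and two parameters the joint behavior is still easy to predict; the claim is that with many richly interacting processes it is not, and that is where tuning-in-isolation breaks down.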

Why do it this way? (1) I think that all the cognitive science literature points in that direction (even though the cog sci folks might not like to admit that), and (2) I can see how to utterly simplify the things that the cog sci folks are trying to describe IF I build a model of cognition that embraces this kind of rich interconnectedness. I can see, in principle, how to explain an enormously wide variety of cognitive phenomena without having to resort to (what I see as) the baroque complexities of many AI and AGI models.

Many of these models, indeed, seem to grow in complexity as a function of the number of aspects of cognition that they try to embrace: it looks as if each new aspect doesn't quite fit, and as a result provokes an adjustment of the existing mechanism to make it fit. An optimist would say that the AGI architect is just learning about the many subtle things needed for intelligence, and is making sensible, motivated additions to the model...... but a pessimist would say that these are Epicycles!

But of course, all I have so far is a written formalism that purports to show how these many aspects of cognition can be explained within a unified system (and fragmentary implementations that show that some of the aspects do work), but I cannot test such an approach without systematically building and testing a very wide variety of systems. I am caught between a rock and a hard place: I can take the mechanisms in isolation and try to shop them to the world (in which case, as I said, I am open to the charge that when I fixed them up to substitute for the missing "rest of the system", I was just kludging them). Or I can stretch out my research program and build the tools I need to test the system as a whole, and get hammered in the meantime for not actually doing anything that counts. Very tricky.

Richard Loosemore.






-------
To unsubscribe, change your address, or temporarily deactivate your subscription, please go to http://v2.listbox.com/member/[EMAIL PROTECTED]

