Mike Tintner wrote:

> Sounds a little confusing. Sounds like you plan to evolve a system through testing "thousands of candidate mechanisms." So one way or another you too are taking a view - even if it's an evolutionary, "I'm not taking a view" view - on, and making a lot of assumptions about

> -how systems evolve
> -the "known architecture of human cognition."

No, I think that because of the paucity of information I gave, you have misunderstood slightly.

Everything I mentioned was in the context of an extremely detailed framework that tries to include all of the knowledge we have so far gleaned by studying human cognition using the methods of cognitive science.

So I am not making assumptions about the architecture of human cognition; I am using every scrap of experimental data I can. You can say that this is still "assuming" that the framework is correct, but that is nothing compared to the usual assumptions made in AI, where the programmer just picks up a grab bag of assorted ideas that are floating around in the literature (none of them part of a coherent theory of cognition) and starts hacking.

And just because I talk of thousands of candidate mechanisms, that does not mean that there is evolution involved: it just means that even with a complete framework for human cognition to start from there are still so many questions about the low-level to high-level linkage that a vast number of mechanisms have to be explored.


> about which science has extremely patchy and confused knowledge. I don't see how any system-builder can avoid taking a view of some kind on such matters, yet you seem to be criticising Ben for so doing.

Ben does not start from a complete framework for human cognition, nor does he feel compelled to stick close to the human model, and my criticisms (at least in this instance) are not really about whether or not he has such a framework, but about a problem that I can see on his horizon.


> I was hoping that you also had some view on how a system's symbols should be grounded, especially since you mention Harnad, who does make vague gestures towards the brain's levels of grounding. But you don't indicate any such view.

On the contrary, I explained exactly how they would be grounded: if the system is allowed to build its own symbols *without* me also inserting ungrounded (i.e. interpreted, programmer-constructed) symbols and messing the system up by forcing it to use both sorts of symbols, then ipso facto it is grounded.

It is easy to build a grounded system. The trick is to make it both grounded and intelligent at the same time. I have one strategy for ensuring that it turns out intelligent, and Ben has another. My problem with Ben's strategy is that I believe his attempt to ensure that the system is intelligent ends up compromising the groundedness of the system.


> Sounds like you too, pace MW, are hoping for a number of miracles - IOW creative ideas - to emerge, and make your system work.

I don't understand where I implied this. You have to remember that I am doing this within a particular strategy (outlined in my CSP paper). When you see me exploring "thousands" of candidate mechanisms to see how one parameter plays a role, this is not waiting for a miracle; it is a vital part of the strategy - a strategy that, I claim, is the only viable one.



> Anyway, you have to give Ben credit for putting a lot of his stuff & principles out there & on the line. I think anyone who wants to mount a full-scale assault on him (& why not?) should be prepared to reciprocate.

Nice try, but there are limits to what I can do to expose the details. I have not yet worked out how much I should release and how much to withhold (I confess, I nearly decided to go completely public a month or so ago, but then changed my mind after seeing the dismally poor response that even one of the ideas provoked). Maybe in the near future I will write a summary account.

In the meantime, yes, it is a little unfair of me to criticise other projects. But not that unfair. When a scientist sees a big problem with a theory, do you suppose they wait until they have a completely worked-out alternative before discussing the fact that there is a problem with the theory that other people may be praising? That is not the way of science.


Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email