Mike Tintner wrote:
Richard: Now, interpreting that result is not easy,

Richard, I get the feeling you're getting understandably tired with all your correspondence today. Interpreting *any* of the examples of *hard* cog sci that you give is not easy. They're all useful, stimulating stuff, but they don't add up to a hard pic. of the brain's cognitive architecture. Perhaps Ben will back me up on this - it's a rather important point - our overall *integrated* picture of the brain's cognitive functioning is really v. poor, although certainly we have a wealth of details about, say, which part of the brain is somehow connected to a given operation.

You make an important point, but in your haste to make it you may have overlooked the fact that I really agree with you ... and have gone on to say that I am trying to fix that problem.

What I mean by that: if you look at cog psy/cog sci in a superficial way you might come away with the strong impression that "they don't add up to a hard pic. of the brain's cognitive architecture". Sure. But that is what I meant when I said that "cog sci has a huge amount of information stashed away, but it is in a format that makes it very hard for someone trying to build an intelligent system to actually use".

I believe I can see deeper into this problem, and I think that cog sci can be made to add up to a consistent picture, but it requires an extra organizational ingredient that I am in the process of adding right now.

The root of the problem is that the cog sci and AI communities both have extremely rigid protocols about how to do research, which are incompatible with each other. In cog sci you are expected to produce a micro-theory for every experimental result, and efforts to work on larger theories or frameworks without introducing new experimental results that are directly explained are frowned upon. The result is a style of work that produces "local patch" theories that do not have any generality.

The net result of all this is that when you say that "our overall *integrated* picture of the brain's cognitive functioning is really v. poor" I would point out that this is only true if you replace the "our" with "the AI community's".

Richard: I admit that I am confused right
now:  in the above paragraphs you say that your position is that the
human mind is 'rational' and then later that it is 'irrational' - was
the first one of those a typo?

Richard, No typo whatsoever if you just reread. V. clear. I say and said: *scientific psychology* and *cog sci* treat the mind as rational. I am the weirdo who is saying this is nonsense - the mind is irrational/crazy/creative - rationality is a major *achievement*, not something that comes naturally. "Mike Tintner = crazy/irrational" - somehow, I don't think you'll find that hard to remember.

The problem here is that I am not sure in what sense you are using the word "rational". There are many usages. One of those usages is very common in cog sci, and if I go with *that* usage your claim is completely wrong: you can pick up an elementary cog psy textbook and find at least two chapters dedicated to a discussion about the many ways that humans are (according to the textbook) "irrational".

I suspect what is happening is that you are using the term in a different way, and that this is the cause of the confusion. Since you are making the claim, I think the ball is in your court: please try to explain why this discrepancy arises so I can understand your claim. Take a look at e.g. Eysenck and Keane (Cognitive Psychology) and try to reconcile what you say with what they say.

Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=73173298-c0f919