Mark Waser wrote:
>> True enough, but Granger's work is NOT total BS... just partial BS ;-)
In which case, clearly praise the good stuff but just as clearly (or
even more so) oppose the BS.
You and Richard seem to be in vehement agreement. Granger knows his
neurology and probably his neuroscience (depending upon where you draw
the line) but his link of neuroscience to cognitive science is not only
wildly speculative but clearly amateurish and lacking the necessary
solid grounding in the latter field.
I'm not quite sure why you always hammer Richard for pointing this out.
He does have his agenda to stamp out bad science (which I endorse
fully) but he does tend to praise the good science (even if more
faintly) as well. Your hammering of Richard often appears as a strawman
to me since I know that you know that Richard doesn't dismiss these
people's good neurology -- just their bad cog sci. And I really am not
seeing any difference between what I understand as your opinion and what
I understand as his.
You know, you're right: I do spend a lot less time praising good stuff,
and I sometimes feel bad about that (Accentuate The Positive, and all that).
But the reason I do so much critiquing is that the AI/Cog
Sci/Neuroscience area is so badly clogged with nonsense and what we need
right now is for someone to start cutting down the dead wood. We need
to stop new people coming into the field and wasting years (or their
entire career) reinventing wheels or trying to fix wheels that were
already known to be broken beyond repair 30 years before they were born.
About the Granger paper, I thought last night of a concise summary of
how bad it really is. Imagine that we had not invented computers, but
we were suddenly given a batch of computers by some aliens, and we tried
to put together a science to understand how these machines worked.
Suppose, also, that these machines ran Microsoft Word and nothing else.
As scientists, we then divide into at least two camps. The
"neuroscientists" take these computers and just analyze wiring and other
physical characteristics. After a while these folks can tell you all
about the different bits they have named and how they are connected:
DDR3 memory, SLI, frontside bus, water cooling, clock speeds, cache, etc
etc etc. Then there is another camp, the "cognitive scientists" who try
to understand the Microsoft Word application running on these computers,
without paying much attention to the hardware.
The cog sci people have struggled to make sense of Word (and still don't
have a good theory, even today), and over the years they have embraced,
and then rejected, several really bad theories of how Word works. One
of these, which was invented about 70 years ago, and discarded about 50
years ago, was called "behaviorism" and it had some pretty nutty ideas
about what was going on. To the behaviorists, MS Word consisted of a
huge pile of things that represented words ("word-units"), and the way
the program worked was that the word-units just had an activation level
that went up if there were more instances of that word in a document, or
if the word was in a bigger font, or in bold or italic. And there were
links between the word-units called "associations". The behaviorists
seriously believed that they could explain all of MS Word this way, but
today we consider this theory to have been stupidly simplistic, and we
have far more subtle, complex ideas about what is going on.
What was so bad about the behaviorist theory? Many, many things, but
take a look at one of them: it just cannot handle the "instance-generic"
distinction (aka the "type-token" distinction). It cannot represent
individual instances of words in the document. If the word "the"
appears a hundred times, that just makes the word-unit for "the" so much
stronger, that's all. It really doesn't take a rocket scientist to tell
you that that is a big, fat problem.
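Just to make the analogy concrete, here is a toy sketch of that word-unit
model (all the names here are invented for illustration; this is nobody's
actual theory, least of all Granger's):

```python
# Toy sketch of the "behaviorist" word-unit idea from the analogy above.
# Every name here is made up for illustration purposes only.

from collections import defaultdict


class WordUnitModel:
    """One activation counter per word *type* -- no record of individual tokens."""

    def __init__(self):
        self.activation = defaultdict(float)
        # links between adjacent word-units: the "associations"
        self.associations = defaultdict(float)

    def read(self, words, weight=1.0):
        # weight > 1.0 could stand in for a bigger font, bold, or italic
        for w in words:
            self.activation[w] += weight
        for a, b in zip(words, words[1:]):
            self.associations[(a, b)] += 1.0


model = WordUnitModel()
model.read("the cat sat on the mat".split())

# The type-token failure: "the" occurs twice, but the model has only ONE
# unit for it -- the two instances are collapsed into a single stronger
# activation, and no amount of inspection can recover them separately.
print(model.activation["the"])  # 2.0: a stronger unit, not two tokens
```

Run that on any document and you get exactly the problem described above:
a hundred occurrences of "the" are indistinguishable from one very "loud"
occurrence, because instances were never represented in the first place.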
The one "virtue" of behaviorism is that amateurs can pick up the talk
pretty quickly, and if they don't know all the ridiculous limitations
and faults of behaviorism, they can even convince themselves that this
is the beginnings of a workable theory of intelligence.
So now, along comes a neuroscientist (Granger, although he is only one
of many) and he writes a paper that is filled with 95% talk about wires
and busses and caches and connections .... and then here and there he
inserts statements out of the blue that purport to be a description of
things going on at the Microsoft Word level (and indeed the whole paper
is supposed to be about finding the fundamental circuit components that
explain Microsoft Word). Only problem is that whenever he suddenly
inserts a few sentences of Microsoft Word talk, it is just a vague
reference to how the circuitry can explain the things going on in what
sounds like a *behaviorist* theory! His statements look wildly out of
place: it's all "SLI bus connects with a feedback loop to the water
cooled RealTek phase-shifted master clock DMA controller, and then
[boom!] this patch seems to implement associations in the grammar
checking function". Huh?! 8-)
Note the sudden appearance of one of the behaviorist buzzwords:
"associations". Not enough to convict, but inasmuch as he says
anything, he aligns himself with behaviorism.
In other words, he claims to be finding the circuit-level counterparts
to things in a dead and broken theory.
It is actually worse than that: he only gives us a few fragmentary
hints of his ideas about how Microsoft Word works; he does not even make
it clear whether he really does subscribe to that old behaviorist theory,
or what.
And that's it.
Nothing else of substance in the paper *except* a lot of description of
wiring relationships between different parts of the system (and that
kind of wiring talk happens all the time among these neuroscientists).
The wiring talk doesn't make the paper "good" or "bad" because it is
just a catalogue of previously discovered wiring patterns.
Amazing and terrible thing: he gets away with it because the
neuroscientists hear the cognitive science and think that that stuff is
really cool, while the cognitive scientists hear the neuroscience and
think that stuff is cool, and the AI researchers hear both sides and
think that those bad-boy neuroscientists are finally showing those
ramshackle psychologists how to do proper science, and they think
both parts of the paper are really cool.
So I come along and criticise this, and all of a sudden I puncture what
looked like a nice story. Heck, I feel sad about that, I hate to be a
spoilsport, but, what? Are we going to just sit around wasting time for
another 50 years?
And meanwhile, I am also told that (see first line quoted above, which I
think Ben wrote) "... but Granger's work is NOT total BS... just partial
BS". Yes, but the only part of any interest to us, the stuff that the
paper is supposed to be all about, *that* part is the BS.
Anyhow, enough of this: back to work.
Richard Loosemore
P.S.
(Okay, just for the record, here is some positive stuff: anyone who
wants to simplify their career should read Eysenck and Keane's Cognitive
Psychology, then get hold of a copy of the old 4-Volume Handbook of AI
and read it with a view to critiquing it. Or wait a few years and I
will get my 3-volume textbook finished ;-). There, that was extremely
positive.).
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=56308778-9f0359