Benjamin Goertzel wrote:
Loosemore wrote:
Edward
If I were you, I would not get too excited about this paper, nor others
of this sort (see, e.g. Granger's other general brain-engineering paper
at http://www.dartmouth.edu/~rhg/pubs/RHGai50.pdf).
This kind of research comes pretty close to something that deserves to
be called "bogus neuroscience" -- very dense publication, full of
neuroanatomic detail, with occasional assertions that a particular
circuit or brain structure corresponds to a cognitive function. Only
problem: the statements about neuroanatomy are at the [Experienced
Researcher] level, while the statements about cognitive functions are at
the [First Year Psychology Student Who Took One Class In Cog Psy And
Thinks They Know Everything] level.
The statements about cognitive functions are embarrassing in their
naivete.
Richard, I think you do have a point, but, as so often, I think you
overstate it ;-)
And, as so often, you opine that I overstate my case without going on to
give me any reasons to believe that this is true.
The title of one of Granger's other papers makes an interesting point:
Granger R (2006) Engines of the Brain: The computational instruction set
of human cognition. AI Magazine (In press)
This was, I believe, the original reference that Edward Porter made.
Both this paper and the other one I mentioned suffer the same faults.
Let's suppose that he is right and he has found, in some moderately
accurate metaphorical sense, "the computational instruction set of human
cognition."
I hear everything you say, below, but this misses the point that I was
making, which is that Granger does not actually say anything coherent
and mature about cognitive-level structures and mechanisms.
The questions you ask are not worth asking, because you cannot do
anything with a 'theory' (Granger's) that consists of a bunch of vague
assertions about various outdated, broken cognitive ideas, asserted
without justification.
Richard Loosemore
It's not really clear what this means....
For instance, let's suppose that Susan Greenfield is roughly right --
and concepts, when they rise to attention, take the form of transient
neural assemblies, each one of which is assembled based on a core of
complexly interconnected neurons.
Then the most that Granger's "instruction set" could explain would be some
of the mechanics by which these transient neural assemblies form.
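To make "transient assembly" concrete, here is a toy Python sketch -- purely
illustrative, my own stand-in rather than Greenfield's actual model: a small
group of densely interconnected rate neurons ignites when its core is briefly
cued, keeps itself active for a while on its own recurrence, and then
collapses as adaptation builds up.

import numpy as np

rng = np.random.default_rng(0)

N = 50                        # neurons
core = np.arange(10)          # the densely interconnected "core"

# weak random background connectivity, strong excitation inside the core
W = 0.02 * rng.random((N, N))
W[np.ix_(core, core)] += 0.25
np.fill_diagonal(W, 0.0)

r = np.zeros(N)               # firing rates
a = np.zeros(N)               # slow adaptation variable
dt, tau_r, tau_a = 1.0, 5.0, 60.0

assembly_size = []
for t in range(400):
    stim = np.zeros(N)
    if 20 <= t < 30:
        stim[core] = 1.0      # brief cue delivered to the core only
    drive = W @ r + stim - a
    r += dt / tau_r * (-r + np.tanh(np.clip(drive, 0, None)))
    a += dt / tau_a * (-a + 2.0 * r)   # activity slowly builds adaptation
    assembly_size.append(int((r > 0.2).sum()))

# peak size, size shortly after the cue ends, size at the end:
# the assembly outlives its cue but is transient, not a stable fixed point
print(max(assembly_size), assembly_size[35], assembly_size[-1])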
He refers to olfaction a lot, but Walter Freeman showed years ago that
rabbit olfactory cortex is full of complex strange attractors that play
a role in olfactory recognition. Most likely similar complex strange
attractors (and associated strange transients, corresponding to
Greenfieldian transient assemblies) play a role in cognitive cortex ...
but Granger's work tells you none of this. At best it tells you the
low-level neural structures and operations that mediate the emergence of
these complex dynamics...
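If you want to see what "strange attractor" means operationally in a neural
model, here is a minimal sketch -- a generic random recurrent rate network in
the chaotic regime, chosen by me purely for illustration, nothing to do with
Freeman's olfactory-bulb equations: two almost identical initial conditions
diverge by many orders of magnitude while both trajectories stay bounded.

import numpy as np

rng = np.random.default_rng(1)
N, g = 200, 1.8                      # coupling gain g > 1: chaotic regime
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

def step(x, dt=0.05):
    # leaky rate dynamics: dx/dt = -x + J tanh(x)
    return x + dt * (-x + J @ np.tanh(x))

x1 = rng.standard_normal(N)
x2 = x1 + 1e-8 * rng.standard_normal(N)  # nearly identical starting state

for t in range(4001):
    if t % 500 == 0:
        print(t, np.linalg.norm(x1 - x2))
    x1, x2 = step(x1), step(x2)
# sensitive dependence on initial conditions plus bounded trajectories:
# the operational signature of chaotic, strange-attractor-like dynamics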
So, when Granger talks about language learning and language processing,
yeah, he seems to be WAY oversimplifying things. Maybe the mechanisms
he isolates really ARE in some sense the basic operations underlying
linguistic facility, but surely not in the simplistic sort of way he
alludes to. Rather, if he's right, it would most likely be because the
mechanisms he isolates serve as the infrastructure for some complex
dynamical process giving rise to the strange transient assemblies
representing linguistic concepts and structures.
But then there are a couple of missing links:
-- explain how Granger's mechanisms, or something analogous, give rise to
Greenfieldian strange transients, with Freeman-like strange-attractor
aspects
-- explain how this Greenfield/Freeman stuff can give rise to complex
behaviors like language learning
In some chapters of my books Chaotic Logic and From Complexity to
Creativity, back in the late 1990s, I attempted to explain the latter, but
didn't finish the job as I got distracted with AI ;-)
Basically, one can look at a strange attractor and model its dynamics
using formal grammar theory. So, grammars can emerge from complex
dynamical systems. This is a means via which symbolic systems can
palpably emerge from subsymbolic systems. In physics it's called
"symbolic dynamics."
Anyway I'm digressing too much into my own weird brain theories (which
btw are only loosely connected to Novamente) -- my point is that SOME
additional theories like this are necessary to connect Granger's neural
ideas to cognition ... you can't just hack them together with glib
verbiage as he seems to do in some passages in his papers...
OTOH I find his discussion of various issues in neuroscience quite
insightful...
-- Ben G