Hi Richard,

Yes, I understand the cautions you mention...

And we do have a quite specific idea of what we are looking for, in
terms of what behaviors the networks of columns must display in order
to properly emulate probabilistic deduction/induction/abduction on
their inputs ...
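For concreteness, here is a toy sketch (my own illustration, with arbitrary numbers -- nothing to do with the actual simulator) of the kind of deductive and abductive behavior we would want the column networks to approximate over their inputs:

```python
# Toy illustration of first-order probabilistic inference over a single
# cause/effect pair.  All probability values below are arbitrary.

# Prior and conditionals (illustrative values only)
p_cause = 0.2                # P(A): prior probability of the cause
p_effect_given_cause = 0.9   # P(B|A)
p_effect_given_not = 0.1     # P(B|~A)

# Deduction-like inference: given the cause, predict the effect -- P(B|A)
deduction = p_effect_given_cause

# Marginal probability of the effect, P(B), by total probability
p_effect = (p_effect_given_cause * p_cause
            + p_effect_given_not * (1 - p_cause))

# Abduction-like inference: given the effect, infer the cause via Bayes' rule
abduction = p_effect_given_cause * p_cause / p_effect

print(f"P(effect|cause) = {deduction:.3f}")   # 0.900
print(f"P(effect)       = {p_effect:.3f}")    # 0.260
print(f"P(cause|effect) = {abduction:.3f}")   # 0.692
```

The test of the simulator would then be whether the network's input/output behavior tracks quantities like these, rather than whether it computes them explicitly.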

The hope is that the proper inferential behaviors will "fall out" of
the network simulator with biologically realistic parameter settings,
without a lot of parameter tuning -- of course then there is a
significant risk of overfitting, given the high flexibility present in
realistic NN models...

But, beyond just avoiding overfitting, there is of course a danger in
looking at any particular cognitive function in isolation, which is
what we are doing...

-- Ben
On 10/12/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben,

To be sure, that sounds like a much higher quality of work than most of
what goes on.

There certainly are some traps to avoid, however (and it is worth
talking about these because the issue seldom gets addressed at this
level of sophistication).

For example:  when you set out to look for evidence of first-order
probabilistic inference behavior in a small number of cortical columns,
there is a danger of guaranteeing a positive result due to the
underconstrained nature of the hypothesis -- basically, if you cast
around in enough of the neurons in these structures, you have a high
probability of finding some that will relate to one another in a way
that matches your desired cognitive behavior, simply because so much is
going on in the networks that *somewhere* there will be neurons
connected up in just the right way.

If you are very specific about what you are looking for, ahead of time,
or if you make prior commitments to exactly how you think the columns
are representing information, and therefore restrict your search, then
you make the result more valid, of course.  Quite often, people do not.

A second type of trap is to find evidence of a cognitive function, but
then be completely unable to link this to any conceivable way that the
rest of the system might be working with that function.  A primitive
example of this would be the early NN work on supervised learning, where
it was never clear that "supervision" would be available in a real
system without something else in the system being required to have more
smarts than the supervised NN itself.  So, for example, if you found
evidence of probabilistic inference, but the representation of knowledge
implied by your discovery was totally incapable of (say) dealing with
multiple simultaneous instances of a single concept, the idea would
ultimately be a dead end.

I obviously do not know what you are doing in detail, but I must say
that I have seen others find suggestive-looking graphs from the behavior
of a neural net, and go straight into print declaring that the net had
captured a cognitive function.  That kind of stuff makes for great
publication fodder, sometimes elevates a person's reputation to the
firmament of greatness, but then fades from view after the research
bandwagon moves on, and is finally condemned by history as a complete
waste of time.

Not to be too negative, though:  I wish you luck.

Richard.


Ben Goertzel wrote:
> Richard,
>
> I tend to agree with you that very little fundamental progress is
> being made in linking neuron-level dynamics to cognitive-level
> dynamics.
>
> Though it is not my main area of focus (by any means), I have been
> developing some specific ideas in this regard, in collaboration with a
> neuroscientist friend.  So I don't think that building such a linkage
> is impossible, or necessarily even insanely difficult -- but I do
> think it requires a way of thinking different from what most
> neuroscientists are used to.
>
> The way my friend and I want to proceed is as follows.  We have
> certain cognitive behaviors we would like to see emerge from a small
> neural network (i.e. a relatively small number of cortical columns),
> and he has a fairly biologically accurate simulator of neural
> networks.  We then want to see how hard it is to tune the parameters
> of the neural net simulator to cause it to give rise to the cognitive
> behaviors.
>
> Specifically, we believe that networks of cortical columns can be
> shown to give rise to first-order probabilistic inference behavior;
> and that higher-order inference behavior can be achieved by including
> hippocampus in the mix (according to a specific theory about the role
> of hippocampus, which I don't want to discuss pre-publication).
>
> My point in this message isn't to spout off about my own theory in
> this regard, which I'd rather not do before writing the paper on it.
> My point is to suggest the kind of work I think could be helpful in
> building a bridge between the neural and cognitive levels.  I agree with
> you that not much work of this nature is going on.
>
> I don't think we need to have a realistic sim of the whole brain to
> begin to crack the problem of the relationship between neurons and
> cognition.  I think that moderately realistic, biologically
> sophisticated simulations of relatively small parts of the brain can
> be shown to display behaviors of obvious cognitive significance, and
> that this can help us build up a real neurocognitive theory.
>
> And, I don't think this is at all necessary for AGI ... though it's
> damn interesting on its own, and certainly may contain lessons for AGI
> ;-) ... as you know my work on AGI is not based on human brain
> emulation, because a) I feel we just don't know enough yet about the
> human brain, b) I feel human brains are far from optimal as
> intelligences...
>
> -- Ben G
>
> On 10/11/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>>
>> Sergio,
>>
>> Your words sound nice in theory, but that is not the way it is happening
>> on the ground.
>>
>> What I tried to say was that neuroscience folks are far too quick to
>> deploy words like "cognition" and "concepts" and "consciousness"
>> (apparently as a way to sound impressive) when in fact their
>> justification for using those words is appalling.  They often seem not
>> to understand the way those words are used (by cognitive scientists), and
>> their attempts to link the words to their models or observations are,
>> quite frankly, a joke.  (I am being blunt, but that is because there are
>> strong feelings on the cognitive science side of this situation.)
>>
>> The point of my presenting that "analogy" quote was to say that THIS is
>> what it looks like to the cognitive scientist who reads what they put
>> out, so you are slipping the point sideways a little when you say that
>> maybe the influence of noise is more important in neurons than it would
>> be in transistors.
>>
>> I was not lampooning it at that level, I was saying that a
>> neuroscientist who started off by claiming great progress in
>> fantastically high-level areas (cognition) is being idiotic when they
>> jump all the way down and suddenly begin to talk about *any* aspect of
>> the neuron level directly, without just cause.  My analogy was a person
>> who claimed to be studying high-level software architecture of huge
>> systems, but who then spent most of their time talking about transistors
>> .... and then occasionally popped up to the top level and said (e.g.)
>> "this looks like the concept of 'object persistence' in OOP systems!"
>>
>> That person could not claim to be doing just "early science," trying to
>> establish what the right concepts might be, etc (the way you frame it),
>> because there already exists (using my analogy) an entire field of
>> people who know a great deal about terms like objects, object
>> persistence, message passing and so on.  It ain't early stage science
>> unless that latter crowd is ignored, as if they did not exist!
>>
>> Now that is the problem.
>>
>> You can get a flavor of some aspects of what I am ranting about ;-) from
>> a special issue of Cognitive Neuropsychology, the lead article of
>> which was:
>>
>> Harley, T. A. (2004). Does cognitive neuropsychology have a future?
>> Cognitive Neuropsychology, 21, 3-16.
>>
>> Trevor Harley and I are currently working on a paper in this area.
>>
>>
>> Richard Loosemore

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


