Richard,

Structure, instances and temporary relations can all be represented by uniform
elements through activation sets. I'm sure this is addressed somewhere in the
theory of Hebbian learning, and I'd be grateful if someone could provide a
reference describing the process. I tried to describe it (admittedly in a
sketchy way) in my message about a month ago:
http://www.listbox.com/member/archive/303/2007/09/sort/time/page/20/entry/24:629/
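
To make this a little more concrete, here is a minimal toy sketch (my own
illustration, with invented element names, nothing taken from Granger's
paper): generics, instances and temporary relations are all just activation
sets over uniform elements, and a temporary relation is simply a transient
co-activation that Hebbian learning could later consolidate.

  # Toy sketch: generics, instances and temporary relations are all the same
  # kind of thing -- an activation set, i.e. a set of uniform element ids.
  # All names below are made up purely for illustration.

  CHAIR = frozenset({"legs", "seat", "back"})      # generic concept
  RED   = frozenset({"red_hue"})                   # a feature
  HERE  = frozenset({"context_42"})                # episode/context elements

  def instantiate(generic, *context_sets):
      """An instance is the generic's elements co-active with context elements."""
      active = set(generic)
      for ctx in context_sets:
          active |= ctx
      return frozenset(active)

  def relate(a, b, relation_tag):
      """A temporary relation is a transient co-activation of two activation
      sets plus a binding element standing for the relation itself."""
      return a | b | frozenset({relation_tag})

  red_chair_here = instantiate(CHAIR, RED, HERE)
  on_floor = relate(red_chair_here, frozenset({"floor"}), "rel_on")
  print(sorted(on_floor))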


On 10/22/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Edward W. Porter wrote:
> > Dear Readers of the RE: Bogus Neuroscience Thread,
> >
> > Because I am the one responsible for bringing to the attention of this
> > list the Granger article ("Engines of the brain: The computational
> > instruction set of human cognition", by Richard Granger) that has caused
> > the recent kerfuffle, this morning I took the time to do a reasonably
> > careful re-read of it.
> >
> > [snip]
> >
> > In his Sun 10/21/2007 2:12 PM post Richard Loosemore cited failure to
> > answer the following questions as indications of the paper's
> > worthlessness.
> >
> > "RICHARD>> "How does it cope with the instance/generic distinction?"
> >
> >             I assume after the most general cluster, or the cluster
> >             having the most activation from the current feature set,
> >             spreads its activation through the matrix loop, then the
> >             cluster most activated by the remaining features spreads
> >             activation through the matrix loop.  This sequence can
> >             continue to presumably any desired level of detail supported
> >             by the current set of observed, remembered, or imagined
> >             features to be communicated in the brain.  The added detail
> >             from such a sequence of descriptions would distinguish an
> >             instance from a generic description represented by just one
> >             such description.
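
[Just to check my reading of the passage above, a toy greedy version of
"the cluster most activated by the remaining features goes next" -- my own
construction, with cluster names and features invented for illustration:]

  # Toy sketch of the sequential activation described above: emit the cluster
  # most activated by the features not yet accounted for, then repeat.
  clusters = {
      "furniture": {"legs", "seat", "flat_surface"},
      "chair":     {"legs", "seat", "back"},
      "red_thing": {"red_hue"},
  }

  def describe(features, clusters):
      remaining = set(features)
      sequence = []
      while remaining:
          # cluster with the most activation from the remaining feature set
          name, overlap = max(((n, len(c & remaining))
                               for n, c in clusters.items()),
                              key=lambda x: x[1])
          if overlap == 0:
              break
          sequence.append(name)
          remaining -= clusters[name]
      return sequence

  print(describe({"legs", "seat", "back", "red_hue"}, clusters))
  # a longer sequence of descriptions = more detail = more instance-like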
>
> A misunderstanding:  the question is how it can represent multiple
> copies of a concept that occur in a situation without getting confused
> about which is which.  If the appearance of one chair in a scene causes
> the [chair] neuron (or neurons, if they are a cluster) to fire, then
> what happens when you walk into a chair factory?  What happens when you
> try to understand a sentence in which there are several nouns:  does the
> [noun] node fire more than before, and if it does, how does this help
> you parse the sentence?
>
> This is a DEEP issue:  you cannot just say that this will be handled by
> other neural machinery on top of the basic (neural-cluster =
> representation of generic thing) idea, because that "other machinery" is
> nontrivial, and potentially it will require the original (neural-cluster
> = representation of generic thing) idea to be abandoned completely.
>
> >
> > "RICHARD>> "How does it allow top-down processes to operate in the
> > recognition process?"
> >
> >             I don't think there was anything said about this, but the
> >             need for, and presence in the brain of, both top-down and
> >             bottom-up processes is so well known as to have properly been
> >             assumed.
>
> Granted, but in a system in which the final state is determined by
> expectations as well as by incoming input, the dynamics of the system
> are potentially completely different, and all of Granger's assertions
> about the roles played by various neural structures may have to be
> completely abandoned in order to make allowance for that new dynamic.
>
>
> > "RICHARD>> "How are relationships between instances encoded?" "
> >
> >             I assume the readers will understand how it handles temporal
> >             relationships (if you add the time dilation and compression
> >             mentioned above).  Spatial relationships would come from the
> >             topology of V1 (but sensed spatial relationships can also be
> >             built via a Kohonen-net SOM with temporal difference of
> >             activation time as the SOM's similarity metric).
> >             Similarly, other higher order relationships can be built
> >             from patterns in the space of hierarchical gen/comp pattern
> >             networks derived from inputs in these two basic dimensions
> >             of space and time plus in the dimensions defined by other
> >             sensory, emotional, and motor inputs.  [I consider motor
> >             outputs as a type of input].
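
[As an aside, a minimal 1-D Kohonen SOM that uses the temporal difference of
activation times as its similarity metric might look roughly like this -- a
toy sketch of my own, not anything from the paper:]

  # Toy 1-D Kohonen SOM: each unit stores a "preferred" activation time, and
  # similarity is just the temporal difference |t - w|.
  import random

  units = [random.uniform(0.0, 1.0) for _ in range(10)]    # preferred times
  events = [random.uniform(0.0, 1.0) for _ in range(200)]  # observed activation times

  lr, radius = 0.3, 2
  for epoch in range(20):
      for t in events:
          # best-matching unit by temporal difference
          bmu = min(range(len(units)), key=lambda i: abs(t - units[i]))
          for i in range(len(units)):
              if abs(i - bmu) <= radius:
                  units[i] += lr * (t - units[i])
      lr *= 0.9  # decay the learning rate

  # units tend to spread out and order themselves along the time axis
  print([round(u, 2) for u in units])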
>
> Again, no:  relationships are extremely dynamic:  any two concepts can
> be linked by a relationship at any moment, so the specific question is,
> if "things" are represented as clusters of neurons, how does the system
> set up a temporary connection between those clusters, given that there
> is not, in general, a direct link between any two neurons in the brain?
>   You cannot simply "strengthen" the link between your "artichoke"
> neuron and your "basilisk" neuron in order to form the relationship
> caused by my mention of both of them in the same sentence, because, in
> general, there may not be any axons going from one to the other.
>
>
> > "RICHARD>> "How are relationships abstracted?"
> >
> >             By shared features.  He addresses how clusters tend to form
> >             automatically.  These clusters are abstractions.
>
> These are only clusters of "things".  He has to address this issue
> separately for "relationships" which are connections or links between
> things.  The question is about "types" of links, and about how there are
> potentially an infinite number of different types of such links:  how
> are those different types represented and built and used?  Again, a
> simple neural connection is not good enough, because there would only be
> one possible type of relationship in your thoughts.
>
>
> > "RICHARD>> "How does position-independent recognition occur?"
> >
> >             He deals with this.  His nodes are nodes in a hierarchical
> >             memory that provides degrees of position and shape
> >             invariance, of the type mentioned by Hawkins and the Serre
> >             paper I have cited so many times.  Granger's figures 6 and 7
> >             indicate exactly this type of invariance.
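
[A crude way to see the kind of invariance those figures suggest is plain
max-pooling of a template response over all positions, roughly in the
Serre/HMAX spirit -- a toy 1-D sketch of my own, not Granger's mechanism:]

  # Toy position invariance: apply a "feature detector" at every position,
  # then pool with max.  Data and template are invented for illustration.
  import numpy as np

  template = np.array([1.0, -1.0, 1.0])        # a tiny feature detector
  scene = np.zeros(50)
  scene[37:40] = [1.0, -1.0, 1.0]              # the feature, at some position

  responses = np.correlate(scene, template, mode="valid")  # simple-cell layer
  pooled = responses.max()                     # complex-cell / pooling layer
  print(pooled)  # same value no matter where in the scene the feature appears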
>
> I have not looked in detail at this, but how does his position
> invariance scale up?  For example, if I learn the new concept of "floo
> powder", do I now have to build an entire set of neural machinery for
> all the possible positions on my retina where I might see "floo
> powder"?  If the answer is yes, the mechanism is bankrupt, as I am sure
> you realise:  we do not have that much neural machinery to dedicate to it.
>
>
> > "RICHARD>> "What about the main issue that usually devastates any
> > behaviorist-type proposal:  patterns to be associated with other
> > patterns are first extracted from the input by some (invisible,
> > unacknowledged) preprocessor, but when the nature of this preprocessor
> > is examined carefully, it turns out that its job is far, far more
> > intelligent than the supposed association engine to which it delivers
> > its goods?
> >
> >             What he feeds to his system are things like the output of
> >             Gabor filters.  I don't think a Gabor filter is something
> >             that is "far, far, more intelligent than the supposed
> >             association engine to which it delivers its goods."
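
[For readers who have not met them: a Gabor filter is just a sinusoidal
grating under a Gaussian envelope, i.e. a fixed, rather dumb oriented-edge
detector.  A minimal version, my own sketch:]

  # Minimal 2-D Gabor filter: Gaussian envelope times a cosine carrier.
  import numpy as np

  def gabor(size=15, wavelength=5.0, theta=0.0, sigma=3.0, psi=0.0):
      half = size // 2
      y, x = np.mgrid[-half:half + 1, -half:half + 1]
      xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
      yr = -x * np.sin(theta) + y * np.cos(theta)
      envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
      carrier = np.cos(2.0 * np.pi * xr / wavelength + psi)
      return envelope * carrier

  kernel = gabor(theta=np.pi / 4)   # one oriented edge detector
  print(kernel.shape)               # (15, 15)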
>
> He has to show that the system is capable, by itself, of picking up
> objects like the "letter A" in a scene without the programmer of the
> simulation giving it some hint.  The fact that he uses Gabor filters
> does not bear on the issue, as far as I can see.
>
> This issue is more subtle than the others.  Too much for me to go into
> in great detail, due to time constraints.  Suffice it to say that you do
> not really address the issue I had in mind.
>
>
> > This is just an example of how a serious attempt to understand what is
> > good in Granger's paper, and to expand on those good features, overcomes
> > a significant number of the objections raised by those whose major
> > motivation seems to be to dismiss it.
>
> I think I have shown that none of my objections were overcome, alas.
>
>
> > Wikipedia, that font of undisputed truth, defines Cognitive science as
> >
> >             "Cognitive science is most simply defined as the scientific
> >             study either of mind or of intelligence (e.g. Luger 1994).
> >             It is an interdisciplinary study drawing from relevant
> >             fields including psychology, philosophy, neuroscience,
> >             linguistics, anthropology, computer science, biology, and
> >             physics"
> >
> > Based on this definition I would say the cognitive science aspect of
> > Granger's paper, although speculative and far from fully fleshed out, is
> > actually quite good.
>
> Cognitive science is more than just saying a few things that seem to
> come from a selection of these fields.
>
> I would welcome further discussion of these issues, but it might be
> better for me to point to some references in which they are discussed
> properly, rather than for me to try to do the whole job here.
>
>
> Richard Loosemore
>
>



-- 
Vladimir Nesov                            mailto:[EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=56416874-c91bbd
