Richard,

For lack of time, I will respond to only one of the questions in your last
message, copied below.  I picked this example because it was so “DEEP” (to
be heard in your mind with maximum reverb).  I hoped that if I could give a
halfway reasonable answer to it, and if, just maybe, you could open your
mind (and that is one of the main issues in this thread), you might
actually also try to think through how your other questions could be
answered.

In response to this “DEEP” question, I ask: "How do you, Richard Loosemore,
normally distinguish different instances of a given type?"

By distinguishing characteristics?  (This would include things like the
little dings on your car or the junk in its back seat that distinguish it
from a similar make and model of the same year and color.)

If so, that is handled by Granger’s system in the manner described in my
response to the question copied below.

Now, when you are dealing with objects that have an identical appearance,
such as Diet Coke cans (the example I normally use when I think of this
problem), often the only way you can tell them apart is – again – by their
distinguishing characteristics.  But in this case the distinguishing
characteristics would be things like their location, orientation, or
perhaps their relationship to other objects.  They would also include any
implications that can properly be drawn from or about such characteristics
for the type of thing involved.

For example, if you leave a Diet Coke can (can_1) downstairs in your
kitchen, go up to your bedroom, and see an identical-looking Coke can
next to your bed, you would normally assume the can next to your bed was
not can_1, unless you had some explanation for how can_1 could have been
moved next to your bed.  (For purposes of dealing with the hardest part of
the problem, we will assume all the Coke cans have been opened and contain
the same amount of Coke with roughly the same level of carbonation.)  If
you go back downstairs and see a Diet Coke can exactly where you left
can_1, you will assume it is can_1 itself, barring some reason to believe
the can might have been replaced with another, such as if you know someone
was in your kitchen during your absence.

All these types of inference are based on generalities, often important
broad generalities like the persistence of objects, which in turn require
the learning of even more basic or more primitive generalities (such as
those needed for object recognition, understanding the concept of physical
objects, the ability to see similarities and dissimilarities between
objects, and spatial and temporal models), all of which take millions of
trillions of machine ops and weeks or months of experience to learn.  So I
hope you will forgive me and Granger if we don’t explain them in detail.
(Goertzel, in "Hidden Pattern" I think it is, actually gives an example of
how an AGI could learn object persistence.)
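
For what it is worth, here is a toy illustration, again my own and not
Goertzel's or Granger's mechanism, of how a persistence generality of that
sort could in principle be induced as a simple statistic over experience
(the 0.95 used in the sketch above could, for instance, be estimated this
way):

    # Toy illustration: estimate how often an unattended object seen at a
    # location is still there at the next observation.  The data format and
    # numbers are invented for the example.

    from collections import Counter

    def persistence_estimate(observations):
        """observations: (object_id, location_before, location_after, agent_present)."""
        counts = Counter()
        for obj, loc_before, loc_after, agent_present in observations:
            if agent_present:
                continue  # skip episodes where an agent could explain a move
            counts["stayed" if loc_before == loc_after else "moved"] += 1
        total = counts["stayed"] + counts["moved"]
        return counts["stayed"] / total if total else None

    data = [("can_1", "kitchen", "kitchen", False),
            ("cup_3", "desk",    "desk",    False),
            ("keys",  "table",   "sofa",    True)]   # explained by an agent
    print(persistence_estimate(data))                # 1.0 on this toy data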

However, the whole notion of AGI is built on the premise that such things
can be learned by a machine architecture having certain generalized
capabilities and having something like the physical world to interact in
and with.  Those of us who are bullish on AGI think we already have a
pretty good idea of how to make systems that have the required
capabilities to learn such broad generalities, or that at least get us much
closer to such a system, so that we can get a much better understanding of
what more is needed and then try to add it.

With such ideas of how to make an AGI, it becomes much easier to map the
various aspects of it onto known, or hypothesized, operations in the
brain.  The features described in Granger’s paper, when combined with
other previous ideas on how the brain could function as an AGI, would seem
to describe a system having roughly the general capability to learn, and
properly draw inferences from, all of the basic generalizations of the
type I described above, such as the persistence of objects, and what types
of objects move on their own, with what probabilities, and under what
circumstances.  For example, Granger's article explains how to learn
patterns, generalizations of patterns, and patterns of generalizations of
patterns; and with something like a hippocampus it could learn episodes,
and then patterns from episodes, generalizations from patterns from
episodes, patterns of generalizations from episodes, and so on.

Yes, the Granger article itself does not describe all of the features
necessary for the brain to act as a general intelligence, but when it is
interpreted in the context of enlightened AGI models, such as Novamente,
and of the current knowledge and leading hypotheses in brain science, it is
easy to imagine how what he describes could play a very important role in
solving even mental problems as “DEEP” (again with reverb) as determining
whether the Diet Coke can on the table is the one you have been drinking
from or someone else’s.

Has there been a little hand-waving in the above explanation?  Yes, but if
you have a good understanding of AGI and its brain equivalent, you will see
that the amount of hand-waving is actually rather limited.

Ed Porter


============= from prior post ====================

> “RICHARD>> “How does it cope with the instance/generic distinction?”
>
>             I assume after the most general cluster, or the cluster
>             having the most activation from the current feature set,
>             spreads its activation through the matrix loop, then the
>             cluster most activated by the remaining features spreads
>             activation through the matrix loop.  This sequence can
>             continue to presumably any desired level of detail supported
>             by the current set of observed, remembered, or imagined
>             features to be communicated in the brain.  The added detail
>             from such a sequence of descriptions would distinguish an
>             instance from a generic description represented by just one
>             such description.

A misunderstanding:  the question is how it can represent multiple
copies of a concept that occur in a situation without getting confused
about which is which.  If the appearance of one chair in a scene causes
the [chair] neuron (or neurons, if they are a cluster) to fire, then
what happens when you walk into a chair factory?  What happens when you
try to understand a sentence in which there are several nouns:  does the
[noun] node fire more than before, and if it does, how does this help
you parse the sentence?

This is a DEEP issue:  you cannot just say that this will be handled by
other neural machinery on top of the basic (neural-cluster =
representation of generic thing) idea, because that "other machinery" is
nontrivial, and potentially it will require the original (neural-cluster
= representation of generic thing) idea to be abandoned completely.

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Monday, October 22, 2007 2:55 PM
To: agi@v2.listbox.com
Subject: Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of
synapses]


Edward W. Porter wrote:
> Dear Readers of the RE: Bogus Neuroscience Thread,
>
> Because I am the one responsible for bringing to the attention of this
> list the Granger article (“Engines of the brain: The computational
> instruction set of human cognition”, by Richard Granger) that has caused
> the recent kerfuffle, this morning I took the time to do a reasonably
> careful re-read of it.
>
> [snip]
>
> In his Sun 10/21/2007 2:12 PM post Richard Loosemore cited failure to
> answer the following questions as indications of the paper’s
> worthlessness.
>
> “RICHARD>> “How does it cope with the instance/generic distinction?”
>
>             I assume after the most general cluster, or the cluster
>             having the most activation from the current feature set,
>             spreads its activation through the matrix loop, then the
>             cluster most activated by the remaining features spreads
>             activation through the matrix loop.  This sequence can
>             continue to presumably any desired level of detail supported
>             by the current set of observed, remembered, or imagined
>             features to be communicated in the brain.  The added detail
>             from such a sequence of descriptions would distinguish an
>             instance from a generic description represented by just one
>             such description.

A misunderstanding:  the question is how it can represent multiple
copies of a concept that occur in a situation without getting confused
about which is which.  If the appearance of one chair in a scene causes
the [chair] neuron (or neurons, if they are a cluster) to fire, then
what happens when you walk into a chair factory?  What happens when you
try to understand a sentence in which there are several nouns:  does the
[noun] node fire more than before, and if it does, how does this help
you parse the sentence?

This is a DEEP issue:  you cannot just say that this will be handled by
other neural machinery on top of the basic (neural-cluster =
representation of generic thing) idea, because that "other machinery" is
nontrivial, and potentially it will require the original (neural-cluster
= representation of generic thing) idea to be abandoned completely.

>
> “RICHARD>> “How does it allow top-down processes to operate in the
> recognition process?”
>
>             I don’t think there was anything said about this, but the
>             need for, and presence in the brain of, both top-down and
>             bottom-up processes is so well known as to have properly been
>             assumed.

Granted, but in a system in which the final state is determined by
expectations as well as by incoming input, the dynamics of the system
are potentially completely different, and all of Granger's assertions
about the roles played by various neural structures may have to be
completely abandoned in order to make allowance for that new dynamic.


> “RICHARD>> “How are relationships between instances encoded?” ”
>
>             I assume the readers will understand how it handles temporal
>             relationships (if you add the time dilation and compression
>             mentioned above).  Spatial relationships would come from the
>             topology of V1 (but sensed spatial relationships can also be
>             built via a Kohonen net SOM with temporal difference of
>             activation time as the SOM’s similarity metric).
>             Similarly, other higher order relationships can be built
>             from patterns in the space of hierarchical gen/comp pattern
>             networks derived from inputs in these two basic dimensions
>             of space and time plus in the dimensions defined by other
>             sensory, emotional, and motor inputs.  [I consider motor
>             outputs as a type of input].

Again, no:  relationships are extremely dynamic:  any two concepts can
be linked by a relationship at any moment, so the specific question is,
if "things" are represented as clusters of neurons, how does the system
set up a temporary connection between those clusters, given that there
is not, in general, a direct link between any two neurons in the brain?
  You cannot simply "strengthen" the link between your "artichoke"
neuron and your "basilisk" neuron in order to form the relationship
caused by my mention of both of them in the same sentence, because, in
general, there may not be any axons going from one to the other.


> “RICHARD>> “How are relationships abstracted?”
>
>             By shared features.  He addresses how clusters tend to form
>             automatically.  These clusters are abstractions.

These are only clusters of "things".  He has to address this issue
separately for "relationships" which are connections or links between
things.  The question is about "types" of links, and about how there are
potentially an infinite number of different types of such links:  how
are those different types represented and built and used?  Again, a
simple neural connection is not good enough, because there would only be
one possible type of relationship in your thoughts.


> “RICHARD>> “How does position-independent recognition occur?”
>
>             He deals with this.  His nodes are nodes in a hierarchical
>             memory that provides degrees of position and shape
>             invariance, of the type mentioned by Hawkins and the Serre
>             paper I have cited so many times.  Granger’s figures 6 and 7
>             indicate exactly this type of invariance.

I have not looked in detail at this, but how does his position
invariance scale up?  For example, if I learn the new concept of "floo
powder", do I now have to build an entire set of neural machinery for
the all the possible positions on my retina where I might see "floo
powder"?  If the answer is yes, the mechanism is bankrupt, as I am sure
you realise:  we do not have that much neural machinery to dedicate to it.


> “RICHARD>> “What about the main issue that usually devastates any
> behaviorist-type proposal:  patterns to be associated with other
> patterns are first extracted from the input by some (invisible,
> unacknowledged) preprocessor, but when the nature of this preprocessor
> is examined carefully, it turns out that its job is far, far more
> intelligent than the supposed association engine to which it delivers
> its goods?
>
>             What he feeds to his system are things like the output of
>             Gabor filters.  I don’t think a Gabor filter is something
>             that is “far, far, more intelligent than the supposed
>             association engine to which it delivers its goods.”

He has to show that the system is capable, by itself, of picking up
objects like the "letter A" in a scene without the programmer of the
simulation giving it some hint.  The fact that he uses Gabor filters
does not bear on the issue, as far as I can see.

This issue is more subtle than the others.  Too much for me to go into
in great detail, due to time constraints.  Suffice it to say that you do
not really address the issue I had in mind.


> This is just an example of how a serious attempt to understand what is
> good in Granger’s paper, and to expand on those good features, overcomes
> a significant number of the objections raised by those whose major
> motivation seems to be to dismiss it.

I think I have shown that none of my objections were overcome, alas.


> Wikipedia, that font of undisputed truth, defines Cognitive science as
>
>             “Cognitive science is most simply defined as the scientific
>             study either of mind or of intelligence (e.g. Luger 1994).
>             It is an interdisciplinary study drawing from relevant
>             fields including psychology, philosophy, neuroscience,
>             linguistics, anthropology, computer science, biology, and
>             physics”
>
> Based on this definition I would say the cognitive science aspect of
> Granger’s paper, although speculative and far from fully fleshed out, is
> actually quite good.

Cognitive science is more than just saying a few things that seem to
come from a selection of these fields.

I would welcome further discussion of these issues, but it might be
better for me to point to some references in which they are discussed
properly, rather than for me to try to do the whole job here.


Richard Loosemore

