Ian,

I talked about this problem with using CEPT at the beginning of the work;
we discussed it with Matthew. The problem is that structural and semantic
data get mixed together, and this mixing degrades the recognition and
extraction of semantic knowledge.

Of course, the retina describes an image in terms of structural features
and linear transformations. Invariant pattern recognition can be performed
within the framework of either of these two approaches. I implemented
invariant pattern recognition in neural networks using linear
transformations in 1982; it was my graduation project.

Thanks for the links.

Regards,
Ivan


On Tue, 22 Oct 2013 16:56:37 -0700:

Francisco,

 CEPT -> CLA is a very odd transition. I'm pretty sure the SP won't get you
anything useful.

CEPT Isn't a "Retina", It's the World

 Unlike CEPT, the actual retina is not an organized map. It has many copies
of the same small number of feature detectors, and each feature detector is
distributed (more or less) evenly across its surface.

 The world, on the other hand, has a coordinate system: height, width,
depth, etc. Closeness in the real world is defined by these dimensions.
CEPT provides a different definition of "closeness", one for the context of
words. It is a world of words.
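That notion of closeness in a word space can be sketched concretely. Assuming a CEPT-style encoding where each word maps to a sparse set of active bit positions (the SDRs below are made up for illustration, not real CEPT output), closeness is just shared bits:

```python
# Toy sketch of "closeness" in a word-SDR space. The bit positions
# assigned to each word here are invented for illustration only.

def overlap(sdr_a, sdr_b):
    """Closeness of two words = number of active bits they share."""
    return len(sdr_a & sdr_b)

word_sdrs = {
    "apple":      {3, 17, 42, 101, 256},
    "orange":     {3, 17, 88, 101, 300},   # shares some bits with apple
    "carburetor": {9, 64, 150, 211, 399},  # shares none
}

print(overlap(word_sdrs["apple"], word_sdrs["orange"]))      # 3
print(overlap(word_sdrs["apple"], word_sdrs["carburetor"]))  # 0
```

So "distance" between words is defined by bit overlap rather than by spatial dimensions like height or depth.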

 An object in the CEPT world, though, is a very, very strange thing indeed.
It never moves and never changes, and because you don't recompute the CEPT
world again and again, you never get a different 'view' of the world.

 It's as if the only thing you could see in your entire world was an apple,
and you never saw it from any other angle, and it never moved or changed.

If the CEPT map is the world, and each word is an object in that world,
what would a retina be?

The Retina

 The retina exists as a set of predefined feature detectors evenly
distributed across its surface. Red, green, blue, and light. The ganglia
then add an initial processing step to get you a second set of evenly
distributed feature detectors for light/dark transitions and all the rest.
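The "many copies of one detector, spread evenly" idea is essentially a convolution. As a minimal sketch (the kernel and input signal are invented for illustration), here is one light/dark-transition detector applied at every position of a 1-D "retina":

```python
# One small feature detector replicated across the whole input surface,
# i.e. a 1-D convolution. Input and kernel are made-up toy values.

def detect_everywhere(signal, kernel):
    """Slide one detector across every position of the input."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

light = [0, 0, 0, 1, 1, 1, 0, 0]   # a bright patch on the retina
edge_detector = [-1, 1]            # fires on light/dark transitions

print(detect_everywhere(light, edge_detector))
# -> [0, 0, 1, 0, 0, -1, 0]: a response wherever the brightness changes
```

The same tiny detector fires wherever the feature appears, which is exactly what you need when objects can show up anywhere in the 2D projection.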

 The critical assumption here is that there is a set of common features
that could exist anywhere in the 2D projection of the world. The reason for
that, of course, is that our view changes over time, and objects move in
the real world. The retina evolved as a moving observation platform for a
dynamic world.

 The challenge of the retina and the cortical hierarchy is then to build
*invariant* representations of the world. Because the world is noisy and
dynamic you need all this circuitry to tease out the repeating patterns and
common causes. But the CEPT world isn't like that. It's totally invariant
to begin with. No single object ever moves or is viewed from a different
direction. A given word will always directly map to the same set of bits.

What *could* the SP pick up?

 What you're really hoping for is not that the SP/CLA will give you smaller
and smaller granularity, but that it will discover features that are common
across words. Another way to say this is that you hope your self-organizing
map has accidentally captured dimensions other than those used to calculate
the centroids and distance metrics. If you already have metrics along those
other axes, though, it doesn't make much sense to use the SP to try to
discover them. You can calculate them directly.

Ultimately you want to know how a raven is like a writing desk. But to know
that "Poe wrote on both." you have to be able to perceive the world from a
very odd angle.

The TP

On the other hand, the TP is a straightforward sequence learner that takes
CEPT-world-like representations as input. You should easily get sentence
generation, hopefully with a fluent-aphasia-like character. The sentences
should be nonsense but not garbage. Of course, you could also do this with
n-grams.
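The n-gram baseline is a few lines. A minimal sketch (toy corpus and seed invented for illustration) that produces exactly that "nonsense but not garbage" character:

```python
# Bigram-model baseline for sentence generation: output is locally fluent
# (every adjacent word pair was seen in training) but globally nonsensical.
# The corpus below is made up for illustration.

import random
from collections import defaultdict

corpus = ("the raven sat on the desk . "
          "the desk is like a raven . "
          "poe wrote on the desk .").split()

# Learn word -> list of observed next words.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

random.seed(0)
word, out = "the", ["the"]
while word != "." and len(out) < 10:
    word = random.choice(bigrams[word])
    out.append(word)
print(" ".join(out))
```

Every step follows an observed transition, so the output reads like language even when the whole sentence means nothing, much like what you'd hope to see from the TP.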

Ian
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
