>> There is a biological problem with pooling the way we implemented it that I
>> never resolved.  So it is a work in progress.

Hi Jeff,

Could you expand a little on what biological problem you're referring to
here?

Thanks!

-Mike

_____________
Michael Ferrier
Department of Cognitive, Linguistic and Psychological Sciences, Brown
University
[email protected]


On Thu, Aug 29, 2013 at 2:29 PM, Jeff Hawkins <[email protected]> wrote:

> Here are some thoughts about how to connect CLAs in a hierarchy.
>
> Here are some things we know about the brain.
>
> - Layer 3 in the cortex is the primary input layer.  (Sometimes input goes
> to layer 4 and layer 3, but layer 4 projects mostly to layer 3, and layer 4
> doesn’t always exist.  So layer 3 is the primary input layer.  It exists
> everywhere.  We will ignore layer 4 for now.)
>
> - I believe the CLA represents a good model of what is happening in layer
> 3.
>
> - The output (i.e. axons) of layer 3 cells projects up the hierarchy,
> connecting to the proximal dendrites (SP) of the next region’s layer 3.
>
> - This isn’t the complete picture.  The axons of cells in layer 5 (the
> ones that project to motor areas) split in two, and one branch also projects
> up the hierarchy to layer 3 in the next region.  If we aren’t trying to
> incorporate motor behavior then we can ignore layer 5 and say input goes
> from layer 3 to layer 3 to layer 3, etc.  Or CLA to CLA to CLA, etc.
>
> Each cell in layer 3 projects to the next region, so the input to a region
> is the output of all the cells in the previous region’s layer 3.  If we
> consider our default CLA size there would be 64K input bits to the next
> level in the hierarchy.  Because of the distributed nature of knowledge it
> isn’t necessary that all cells in layer 3 project to the next region; as
> long as a good portion do, we should be ok.  But assume they all do.
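As a quick check of the arithmetic above, assuming the common NuPIC defaults of 2048 columns with 32 cells per column (these numbers are not stated in the message, so treat them as assumptions):

```python
# "64K input bits": every cell of the lower region's layer 3 projects up.
# 2048 columns and 32 cells per column are assumed NuPIC defaults, not
# values given in the message itself.
columns = 2048
cells_per_column = 32
output_bits = columns * cells_per_column
print(output_bits)  # 65536, i.e. "64K"
```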
>
> 64K is a lot of input bits, but the SP in the receiving region can take any
> number of bits and map them onto any number of columns.  That is one of
> the nice features of the SP: it can map an input of any dimension and
> sparsity onto any number of columns.
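The "any input size onto any column count" property can be sketched roughly as follows. This is a toy illustration, not the NuPIC SpatialPooler implementation; all sizes (65536 inputs, 2048 columns, 128 synapses per column, 40 active columns) are assumed defaults:

```python
import random

# Toy sketch of the spatial-pooler property described above: each column's
# proximal dendrite samples a random subset of input bits, and the columns
# with the highest overlap with the active inputs win. The number of
# columns is independent of the number of input bits.
rng = random.Random(0)

n_input = 65536        # e.g. every layer-3 cell of the region below
n_columns = 2048       # receiving region's column count, chosen independently
synapses_per_column = 128
n_active_columns = 40  # roughly 2% sparsity

# Each column connects to a random sample of the input bits.
potential = [rng.sample(range(n_input), synapses_per_column)
             for _ in range(n_columns)]

def spatial_pool(active_inputs):
    """Map a set of active input bit indices to the top-k columns by overlap."""
    overlaps = [sum(1 for i in syns if i in active_inputs) for syns in potential]
    ranked = sorted(range(n_columns), key=overlaps.__getitem__, reverse=True)
    return ranked[:n_active_columns]

active_inputs = set(rng.sample(range(n_input), 300))  # a few hundred active cells
active_columns = spatial_pool(active_inputs)
print(len(active_columns))  # 40
```

In a real SP the winning columns would also adapt their synapses toward the current input; the plain top-k selection here stands in for the learning and inhibition machinery.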
>
> That’s it for the “plumbing”.  Now comes the tricky part.
>
> We, and many others, believe that a large part of how we recognize things
> in different forms is that the brain assumes patterns that occur next to
> each other in time represent the same thing.  This is where the term
> “temporal pooler” comes from.  We want cells to respond to a sequence of
> patterns that occur over time even though the individual patterns don’t
> have common bits.  The classic case is cells in V1 that respond to a line
> moving across the retina.  These cells have learned to fire for a sequence
> of patterns (a line in different positions as it moves is a sequence).  The
> cell remains active during the sequence.  Thus the outputs of a region are
> changing more slowly than the inputs to the region.  This basic idea is
> assumed to be happening throughout the cortex.  Temporal pooling also makes
> more output bits active at the same time.  So instead of just 40 cells
> active out of 64K you might have hundreds.
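The "output changes more slowly than the input" idea can be illustrated with a toy pooling cell. This is not the CLA algorithm; the bit-sets and the learned-sequence lookup are purely illustrative:

```python
# Toy sketch of temporal pooling: a cell that has learned a sequence of
# input patterns stays active across the whole sequence, so the output is
# stable even though the individual patterns share no bits. The patterns
# below are arbitrary illustrative bit-sets (e.g. a line at three
# successive positions).
learned = [frozenset({1, 5}), frozenset({2, 6}), frozenset({3, 7})]

def pooled(inputs):
    """Emit 1 while the input is part of the learned sequence, else 0."""
    return [1 if pattern in learned else 0 for pattern in inputs]

stream = learned + [frozenset({9})]   # the learned sequence, then an unrelated pattern
print(pooled(stream))                 # [1, 1, 1, 0]: stable during the sequence
```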
>
> The CLA was designed to solve the temporal pooling problem.  When we were
> working on vision problems the temporal pooler was the key thing we were
> testing.  We have disabled this feature when using the CLA in a single
> region because it makes the system slower.  The temporal pooler without the
> “pooling” is still needed for sequence learning.
>
> There is a biological problem with pooling the way we implemented it that I
> never resolved.  So it is a work in progress.
>
> Conclusion: to connect two CLAs together in a hierarchy, all the cells in
> the lower region become the input to the next region.  But there are some
> difficult issues you might need to understand to get good results, depending
> on the problem.
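The wiring in this conclusion can be sketched roughly as follows. It is an illustration rather than the NuPIC API; region sizes, synapse counts, and the all-cells-burst rule are assumptions:

```python
import random

# Sketch of the hierarchy wiring described above: every cell in the lower
# region is an input bit to the next region's spatial pooler. Column and
# cell counts are assumed defaults, not values from the message.
rng = random.Random(1)

class Region:
    def __init__(self, n_input, n_columns=2048, cells_per_column=32,
                 synapses=64, n_active=40):
        self.cells_per_column = cells_per_column
        self.n_cells = n_columns * cells_per_column   # every cell projects upward
        self.n_active = n_active
        self.potential = [rng.sample(range(n_input), synapses)
                          for _ in range(n_columns)]

    def compute(self, active_inputs):
        """Return this region's active cells (its output) as a set of indices."""
        overlaps = [sum(1 for i in syns if i in active_inputs)
                    for syns in self.potential]
        winners = sorted(range(len(overlaps)), key=overlaps.__getitem__,
                         reverse=True)[:self.n_active]
        # With no prediction, every cell in a winning column fires ("bursting").
        return {c * self.cells_per_column + k
                for c in winners for k in range(self.cells_per_column)}

r1 = Region(n_input=1024)            # lower region, fed by a 1024-bit sensor
r2 = Region(n_input=r1.n_cells)      # region 2's input is all 64K cells of region 1

sensor = set(rng.sample(range(1024), 40))
out1 = r1.compute(sensor)            # active cells among region 1's 65536
out2 = r2.compute(out1)
print(r1.n_cells, len(out1))         # 65536 1280
```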
>
> Jeff
>
> From: nupic [mailto:[email protected]] On Behalf Of Tim Boudreau
> Sent: Wednesday, August 28, 2013 4:29 PM
> To: NuPIC
> Subject: [nupic-dev] Inter-layer plumbing
>
> Is there a general notion of how layers should be wired together, so that
> one layer becomes input to the next layer?
>
> It seems like input into one layer is pretty straightforward - in ASCII
> art:
>
> bit bit bit bit bit bit bit bit
>  |       |   |       |       |
>  ------proximal dendrite w/ boost factor---> column
>
> But it's less clear:
>
>  - If we have the hierarchy input -> layer 1 -> layer 2, what constitutes
> an input bit to layer 2 - the activation of some combination of columns
> from layer 1?
>
>  - How information about activation in layer 2 should reinforce
> connections in layer 1
>
> Any thoughts?
>
> -Tim
>
> --
> http://timboudreau.com
>
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>
>
