Thanks, Jeff, that's illuminating.  I can see the inhibitory/coordinating 
behavior in how the CLA implements your cases 1 and 2.  I was pondering 
threshold-adjusting mechanisms like you describe in case 3, where a lowered 
threshold might help recognition of a predicted pattern from incomplete input, 
or a raised one might inhibit it.  I like the "suggestibility" example.  
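As a concreteness check, here's a toy sketch of the case-3 idea in Python (all names are hypothetical; this is not NuPIC code): a distal segment fires when enough of its synapses see active presynaptic cells, and lowering that threshold recovers a prediction from an incomplete pattern.

```python
# Toy model of case 3: lowering a distal dendrite's activation
# threshold lets a segment fire on incomplete input.
# Hypothetical names; not NuPIC's actual implementation.

def segment_active(synapses, active_cells, threshold):
    """A distal segment fires when enough of its synapses
    see presynaptic activity."""
    overlap = len(synapses & active_cells)
    return overlap >= threshold

segment = {1, 4, 7, 9, 12}   # presynaptic cells this segment samples
partial_input = {1, 4, 9}    # incomplete version of the learned pattern

# At the normal threshold the partial pattern is missed...
print(segment_active(segment, partial_input, threshold=4))  # False

# ...but "suggestibility" (a lowered threshold, as if inhibitory
# synapses on the dendrite relaxed) recovers the prediction.
print(segment_active(segment, partial_input, threshold=3))  # True
```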

It looks like excitatory signaling is sufficient for most cases at the complexity 
level we're modeling today, and feedback is an area open to further exploration.
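For the excitatory-only feedback idea from my original post, a minimal sketch (again with hypothetical names, not the actual spatial pooler code): feedback bits simply join the feed-forward bits in a column's overlap sum, so columns predicted from above get a boost in the local competition.

```python
# Toy sketch of excitatory-only feedback folded into a spatial-pooler-style
# overlap computation: feedback bits are appended to the feed-forward
# input and participate in the same sum.
# Hypothetical simplification; not how NuPIC structures its input.

def column_overlap(column_synapses, input_bits):
    """Overlap = count of a column's connected synapses on active bits."""
    return len(column_synapses & input_bits)

feedforward = {0, 2, 5}       # active feed-forward input bits
feedback = {100, 101}         # "1's from above" (higher-level region)

column = {0, 2, 5, 100, 101, 200}   # bits this column has synapses on

# Without feedback the column's overlap is just the feed-forward match...
print(column_overlap(column, feedforward))             # 3

# ...with feedback mixed in, predicted columns gain an edge.
print(column_overlap(column, feedforward | feedback))  # 5
```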

I live about an hour outside Portland.  Not planning to attend OSCON, but I 
think I'll get an exhibition pass for Wednesday and drop in on the BoF.  Maybe 
we can steer the conversation to hierarchy, feedback, and inhibition at some 
point.  :-)

-Steve O.

On Jul 22, 2013, at 11:14 AM, "Jeff Hawkins" <[email protected]> wrote:

> Hi Steve,
> Inhibition appears in three places in the biological theory of CLA, at least
> in my head!  We haven't always pointed it out because it isn't necessarily
> useful to think of inhibitory neurons when implementing the CLA in software.
> The literature on inhibitory neurons is not as rich as it is for excitatory
> neurons so it is harder to be precise on this.  
> 
> 1) There are inhibitory neurons that enforce sparsity.
> 2) There are inhibitory neurons that help all the cells in a column be
> activated together (these inhibitory cells inhibit other inhibitory cells in
> a column).  This shows up in the software by having a column of cells
> activated by the SP.
> 3) There are inhibitory neurons that form inhibitory synapses along the
> distal dendrites.  I speculate that these regulate the dendrite activation
> threshold of the dendrite branches, and therefore control the sparseness of
> the temporal pooler.  If not enough cells are pooling then the threshold
> would be lowered.  We have never implemented or tested this idea.  I imagine
> that when you look at a cloud and I say "do you see the dog?", your cortex
> lowers the threshold of the dendrites to encourage the cortex to recognize
> anything and hopefully see the dog shaped cloud.
> 
> There are six or so different types of inhibitory neurons in cortex so the
> situation is undoubtedly more complex.
> 
> As far as I know all the cells that enter the white matter are excitatory.
> So the feedback projections from one region to another are excitatory.   The
> general consensus is feedback axons form excitatory synapses on the apical
> dendrites of cells in layers 2, 3, and 5.  There still could be an inhibitory
> effect but it would be secondary.
> 
> We have not implemented feedback in a hierarchy other than some simple
> experiments before we had the CLA.
> 
> What I think is happening is that a higher-level representation projects to
> lower regions and associatively links to them.  In this way the higher-level
> region can tell the lower-level region what sequence of activity it should
> recall.  This would in effect eliminate alternate possibilities in the lower
> region.  Perhaps this addresses your concern.
> 
> This would be much easier to discuss in person.
> Jeff
> 
> 
> 
> -----Original Message-----
> From: nupic [mailto:[email protected]] On Behalf Of Steven
> Oberlin
> Sent: Monday, July 22, 2013 10:15 AM
> To: NuPIC general mailing list.
> Subject: [nupic-dev] Inhibition and feedback
> 
> Re-reading the CLA whitepaper, one thing I've noticed is that the only
> place inhibition appears is in the enforcement of the spatial pooler's
> columnar winner constraint, to encourage SDR encodings of input patterns.
> 
> When arranging HTM regions in a hierarchy, I assume (perhaps incorrectly)
> that some of the feedback from higher-level HTMs to lower-level HTMs would
> be inhibitory, to reduce the likelihood of activations that aren't being
> predicted in the larger context of the higher level sequence being played
> out by the higher-level HTM (if that makes any sense).  However, it seems to
> me that it is not currently possible to provide an inhibitory input into an
> HTM region because of the way input data is gated and summed by the spatial
> pooler, i.e., there is no way to learn that active input bits (1's in the
> input stream) mean recognition of a pattern should be suppressed.  
> 
> I suppose that feedback from higher-level to lower-level HTMs in a hierarchy
> could be excitatory-only, i.e., "1's from above" are learned in the mix of
> input bits by lower levels to help gate predicted patterns, but then it
> seems to me that we would need a lot of copies of each feedback bit to
> multiply its semantic force so it could have a significant influence on the
> activation sum being computed by the spatial pooler.  This seems
> inefficient, though it makes use of existing learning mechanisms.  
> 
> How is hierarchical feedback intended or imagined to be accomplished?  Is
> inhibition necessary?  Maybe feedback shouldn't even be injected along with
> "ordinary" feed-forward input bits, but should instead be a factor in
> individual column boost calculations, or... 
> 
> Perhaps this is an out-of-scope topic.  Let me know if I'm off in the
> weeds...
> 
> -Steve O.
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
> 
> 

