Re-reading the CLA whitepaper, one thing I've noticed is that the only place 
inhibition appears is in the spatial pooler's enforcement of the columnar 
constraint on winners, which encourages sparse SDR encodings of input patterns.

When arranging HTM regions in a hierarchy, I assume (perhaps incorrectly) that 
some of the feedback from higher-level HTMs to lower-level HTMs would be 
inhibitory, reducing the likelihood of activations that aren't predicted in 
the larger context of the sequence being played out by the higher-level HTM 
(if that makes any sense).  However, it seems to me that it is not currently 
possible to provide an inhibitory input to an HTM region because of the way 
input data is gated and summed by the spatial pooler: there is no way to 
learn that active input bits (1's in the input stream) mean recognition of a 
pattern should be suppressed.
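To make the point concrete, here is a minimal sketch of the spatial pooler's 
overlap step (not NuPIC's actual code; the function and variable names are 
mine): each column just sums its connected synapses that line up with active 
input bits, so every contribution is non-negative.

```python
# Minimal sketch of the spatial pooler's overlap computation (hypothetical
# names, not NuPIC's implementation).
import numpy as np

def overlap(input_bits: np.ndarray, connected: np.ndarray) -> np.ndarray:
    """input_bits: binary vector (n_inputs,); connected: binary matrix
    (n_columns, n_inputs) marking each column's connected synapses.
    Returns the per-column overlap score."""
    # Every term in the sum is 0 or 1, so an active input bit can only
    # raise a column's score -- there is no negative weight through which
    # a "1 from above" could suppress recognition of a pattern.
    return connected @ input_bits

inputs = np.array([1, 0, 1, 1, 0])
connected = np.array([[1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 1]])
print(overlap(inputs, connected))  # [2 1] -- scores only ever add up
```

Flipping any 0 to a 1 in the input can only leave each score the same or 
increase it, which is why I don't see how an inhibitory input could be learned.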

I suppose that feedback from higher-level to lower-level HTMs in a hierarchy 
could be excitatory-only, i.e., "1's from above" are learned in the mix of 
input bits by lower levels to help gate predicted patterns.  But then it 
seems we would need many copies of each feedback bit to multiply its semantic 
force enough to have a significant influence on the activation sum computed 
by the spatial pooler.  That seems inefficient, though it does reuse the 
existing learning mechanisms.
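Sketching the "many copies" workaround (again hypothetical, with a made-up 
fan-out parameter `k`): appending k duplicates of a single feedback bit to 
the input lets that one bit of top-down context contribute up to k to a 
column's overlap sum, at the cost of k extra input lines and synapses per 
column.

```python
# Hypothetical sketch of duplicating a feedback bit to amplify its
# influence on the overlap sum.
import numpy as np

k = 8  # assumed fan-out per feedback bit
feedforward = np.array([1, 0, 1])  # ordinary feed-forward input bits
feedback_bit = 1                   # one bit of top-down context
augmented = np.concatenate([feedforward, np.full(k, feedback_bit)])

# A column connected to both the feed-forward bits and all k copies:
connected = np.ones_like(augmented)
print(connected @ augmented)  # 2 + 8 = 10: feedback dominates the sum
```

This is what I mean by inefficient: the storage and wiring cost scales 
linearly with how much semantic force each feedback bit needs.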

How is hierarchical feedback intended or imagined to be accomplished?  Is 
inhibition necessary?  Maybe feedback shouldn't be injected along with the 
"ordinary" feed-forward input bits at all, but should instead be a factor in 
individual column boost calculations, or...
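To be clearer about the boost-factor alternative I'm imagining (purely 
speculative, nothing like this is in the whitepaper, and all names are 
invented): rather than mixing feedback bits into the input stream, each 
column's overlap could be scaled by a per-column multiplier derived from 
higher-level predictions, which could dip below 1.0 to act inhibitorily.

```python
# Speculative sketch of feedback as a per-column boost multiplier
# (hypothetical; not an existing CLA/NuPIC mechanism).
import numpy as np

def boosted_overlap(raw_overlap: np.ndarray,
                    feedback_boost: np.ndarray) -> np.ndarray:
    """raw_overlap: per-column overlap scores; feedback_boost: per-column
    multipliers derived (somehow) from higher-level predictions.
    > 1.0 favors predicted columns, < 1.0 suppresses unpredicted ones."""
    return raw_overlap * feedback_boost

raw = np.array([4.0, 4.0, 4.0])
# Higher level predicts column 0, is neutral on 1, suppresses 2:
boost = np.array([1.5, 1.0, 0.5])
print(boosted_overlap(raw, boost))  # [6. 4. 2.]
```

That would give feedback an effectively inhibitory channel without touching 
the feed-forward input representation, but I have no idea whether it's 
biologically or algorithmically sensible.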

Perhaps this is an out-of-scope topic.  Let me know if I'm off in the weeds...

-Steve O.
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
