Hi Aseem,



The memory of the sequences is indeed stored in the connections. 




The current input causes a pattern of activity in the region. All cells then 
examine their distal dendrites to see how many of the active cells are 
connected, looking for coincidence on a segment. The pattern of activity thus 
causes a pattern of predictive cells, which is a union of probable patterns of 
activity for the next step.
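To make that concrete, here's a toy sketch of that step in Python. The structure and the threshold are purely illustrative, not NuPIC's actual API: a cell becomes predictive when any of its distal segments sees enough of the currently active cells.

```python
# Toy sketch of the prediction step: a cell becomes predictive when any
# of its distal segments sees enough coincident active cells.
# All names and the threshold here are illustrative, not NuPIC's real API.

ACTIVATION_THRESHOLD = 3  # coincidences needed on one segment

def predictive_cells(active_cells, distal_segments):
    """distal_segments maps cell -> list of sets of presynaptic cells
    (one set per segment, connected synapses only)."""
    predictive = set()
    for cell, segments in distal_segments.items():
        for segment in segments:
            if len(segment & active_cells) >= ACTIVATION_THRESHOLD:
                predictive.add(cell)
                break  # one matching segment is enough
    return predictive

# Example: cell 7 has a segment connected to cells {1, 2, 3, 9}
segments = {7: [{1, 2, 3, 9}], 8: [{4, 5, 6}]}
print(predictive_cells({1, 2, 3}, segments))  # -> {7}
```

The union mentioned above is just the set returned here: every cell whose segment coincides with the current activity, across all the sequences it has learned.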




In the neocortex, this predictive pattern is processed using the layer 
structure, the thalamus, and other things we don't model (because we don't 
really know). 




For now, we use a lookup table for each cell. This lists how often the cell has 
fired for each input in the past, so it gives a probability distribution for 
what that cell "means" when it is predictive. These are combined to create the 
region's predictions.
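A minimal sketch of those lookup tables, assuming a simple additive way of combining the per-cell distributions (the combination rule here is my illustration, not necessarily what Grok does):

```python
from collections import defaultdict

# Sketch of the per-cell lookup tables: each cell counts how often it
# fired for each input value, giving a distribution over what the cell
# "means". Names and the combination rule are illustrative.

fire_counts = defaultdict(lambda: defaultdict(int))  # cell -> input -> count

def record(active_cells, input_value):
    for cell in active_cells:
        fire_counts[cell][input_value] += 1

def predict(predictive_cells):
    """Combine each predictive cell's distribution into a region-level
    vote over next inputs (simple additive combination)."""
    votes = defaultdict(float)
    for cell in predictive_cells:
        total = sum(fire_counts[cell].values())
        for value, count in fire_counts[cell].items():
            votes[value] += count / total
    return max(votes, key=votes.get) if votes else None

record({1, 2}, "A"); record({1, 3}, "B"); record({1, 2}, "A")
print(predict({1, 2}))  # cells 1 and 2 both favour "A" -> prints A
```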




I've suggested (a long time ago now, must chase it up) that these could be 
replaced by a kind of "feedback" to the inputs/encoders as follows:




1. For each input value (or short range), create a "grandmother cell" G.

2. Connect all cells which the SP makes active on that input to a segment on G, 
or increase the connection's permanence if already connected.

3. To extract a prediction, go through each G and count how many predictive 
cells are sending signals into it. The best G's are your prediction.




The real neocortex seems to do something like this, in that you can "feel" the 
next perception coming. Unlike my idea, this would require a kind of "tentative 
innervation" of the predicted input pattern rather than my use of a set of 
grannies. I suspect the thalamus is where this magic happens.




The advantages of using the grandmother cells are:




1. It looks a lot more biologically plausible, and the brain does have a lot of 
weird neurons whose roles we don't understand.

2. It's almost certainly vastly more memory-efficient.

3. You can use it to predict all inputs simultaneously, as well as computed 
values. Anything you like, in fact.

4. We can reuse the algorithms from the TP, rather than having to figure out 
how to combine all the cell predictions.

5. It's possibly how feedback and motor control work, and is at least a 
candidate for doing downward innervation in hierarchy. If you SP the grannies 
you get a feedback SDR.

6. You can maintain a set of grannies for each step ahead, or have a set of 
dendrites on each granny per step. This allows for simultaneous multi-step 
prediction. 




Regards, 




Fergal Byrne

—
Sent from Mailbox for iPhone

On Fri, Oct 18, 2013 at 9:23 AM, Aseem Hegshetye <[email protected]>
wrote:

> Hi,
> White paper says memory can be stored in a network through connections 
> between cells in different layers.
> Those connections put surrounding cells in predictive state which can infer 
> prediction.
> But Grok uses look up tables for predictions.
> I am curious to know in what ways did look up tables prove to be better than 
> connections between cells.
> Regards
> Aseem Hegshetye
> _______________________________________________
> nupic mailing list
> [email protected]
> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org