Also, if you play with extensive_temporal_memory_test.py, try setting
VERBOSITY = 2 at the class level.
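A minimal sketch of what that looks like (the class name and the meaning of the levels are illustrative; the real test class may differ):

```python
# Setting VERBOSITY as a class-level attribute, so every test method in the
# class picks up the same, more verbose setting. Class name is illustrative.
import unittest

class ExtensiveTemporalMemoryTest(unittest.TestCase):
    VERBOSITY = 2  # class attribute; assumed: 0 is quiet, 2 prints detailed traces

# Every instance (and thus every test method) sees the same value:
print(ExtensiveTemporalMemoryTest.VERBOSITY)  # prints 2
```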

On Tue, Sep 9, 2014 at 3:20 PM, Chetan Surpur <[email protected]> wrote:

> Hi Nick,
>>> 1) Why are no predictive states seen during the first training pass 
>>> (i.e. seeing the entire sequence once)? Even if activationThreshold 
>>> and minThreshold are set low enough to make segments sensitive, no 
>>> lateral activation happens. Are cells initialized with no segments?
> That's right, cells are initialized with no segments. The segments are formed 
> when a pattern is shown, and newly active cells form connections to 
> previously active cells.
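A toy sketch of that growth rule (not the NuPIC implementation; the names and the 0.2 permanence value are illustrative):

```python
# Cells start with no segments. The first time a cell becomes active with
# prior context, it grows a segment whose synapses point back at the cells
# that were active on the previous timestep.

class Cell(object):
    def __init__(self):
        self.segments = []  # cells are initialized with no segments

def learn(active_cells, prev_active_cells, initial_perm=0.2):
    # Each newly active cell grows one segment: a map of
    # presynaptic cell -> permanence, covering the previously active cells.
    for cell in active_cells:
        if prev_active_cells:
            cell.segments.append({src: initial_perm for src in prev_active_cells})

# Feeding a two-pattern sequence A -> B:
A = [Cell() for _ in range(3)]
B = [Cell() for _ in range(3)]
learn(A, [])   # first pattern: no prior context, so no segments are grown
learn(B, A)    # B's cells grow segments with synapses onto A's cells
```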
>>> 3) In the second training pass, we go from no predictive cells to 
>>> perfectly predictive cells associated with the next character. I would 
>>> typically expect the network to show scattered predictive cells before it 
>>> homes in on the right prediction (10 consecutive on-bits in this example). 
>>> Why the abrupt shift in predictive behavior?
> That's because in this example, initialPerm == connectedPerm, so any synapses 
> that are formed are immediately "connected". This allows the sequence to be 
> learned in one pass.
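A small sketch of why equal values give one-pass learning (parameter values are illustrative; 0.2 is taken from the segment printout in the original message):

```python
# A synapse only contributes to segment activation once its permanence reaches
# connectedPerm. When initialPerm == connectedPerm, a freshly grown synapse
# counts immediately, so a single presentation of the sequence is enough.
INITIAL_PERM = 0.2         # permanence given to a newly created synapse
CONNECTED_PERM = 0.2       # threshold for a synapse to count as "connected"
ACTIVATION_THRESHOLD = 10  # connected synapses needed to activate a segment

def connected_count(segment_perms):
    return sum(1 for p in segment_perms if p >= CONNECTED_PERM)

def segment_active(segment_perms):
    return connected_count(segment_perms) >= ACTIVATION_THRESHOLD

# A segment grown from a single presentation (10 synapses at initialPerm)
# already crosses the activation threshold:
print(segment_active([INITIAL_PERM] * 10))  # prints True
```

If connectedPerm were higher than initialPerm, the same segment would need its permanences reinforced over several passes before any synapse counted.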
>>> 4) Finally, the printCells() function outputs the following. Can you 
>>> please explain what each entry means?
> I'm not sure what the entries mean. However, if you're trying to understand 
> the behavior of the temporal memory, I'd recommend taking a look at the new 
> implementation (temporal_memory.py) and its tests 
> (tutorial_temporal_memory_test.py and extensive_temporal_memory_test.py). 
> They are easier to read and understand, and the implementation is closer to 
> the pure white-paper description.
> - Chetan
> On Thursday, Sep 4, 2014 at 1:37 PM, Nicholas Mitri <[email protected]> 
> wrote:
> Hey all,
> I’d like to dedicate this thread to discussing some TP implementation and 
> practical questions, namely those associated with the introductory file for 
> the TP, hello-tp.py.
> Below is the printout of a TP with 50 columns and 1 cell per column, 
> trained as described in the .py file for 2 iterations on the sequence 
> A->B->C->D->E. Each pattern is fed directly into the TP as an active network 
> state.
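Judging from the printout, each of the five patterns activates a distinct block of 10 consecutive columns out of 50. A sketch of such an encoding (names are illustrative, not from hello-tp.py):

```python
# Encode letters A..E as non-overlapping blocks of 10 consecutive active
# columns in a 50-column input, matching the rows seen in the printout.
NUM_COLUMNS = 50
BLOCK = 10  # width of each letter's block of active columns

def pattern(index):
    bits = [0] * NUM_COLUMNS
    bits[index * BLOCK:(index + 1) * BLOCK] = [1] * BLOCK
    return bits

sequence = [pattern(i) for i in range(5)]  # patterns for A, B, C, D, E
```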
> I’ve been playing around with the configurations and have a few questions.
> 1) Why are no predictive states seen during the first training pass 
> (i.e. seeing the entire sequence once)? Even if activationThreshold and 
> minThreshold are set low enough to make segments sensitive, no lateral 
> activation happens. Are cells initialized with no segments?
> 2) If segments are created during initialization, how is their connectivity 
> to the cells of the region configured? How are permanence values allocated? 
> The same as for proximal synapses in the TP?
> 3) In the second training pass, we go from no predictive cells to perfectly 
> predictive cells associated with the next character. I would typically expect 
> the network to show scattered predictive cells before it homes in on the 
> right prediction (10 consecutive on-bits in this example). Why the abrupt 
> shift in predictive behavior? Is this related to getBestMatchingCell()?
> 4) Finally, the printCells() function outputs the following. Can you please 
> explain what each entry means?
> Column 41 Cell 0 : 1 segment(s)
>    Seg #0   ID:41    True 0.2000000 (   3/3   )    0 [30,0]1.00 [31,0]1.00 
> [32,0]1.00 [33,0]1.00 [34,0]1.00 [35,0]1.00 [36,0]1.00 [37,0]1.00 [38,0]1.00 
> [39,0]1.00
> Thanks,
> Nick
> ———————————— PRINT OUT ————————————
> All the active and predicted cells:
> Inference Active state
> 1111111111 0000000000 0000000000 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> All the active and predicted cells:
> Inference Active state
> 0000000000 1111111111 0000000000 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> All the active and predicted cells:
> Inference Active state
> 0000000000 0000000000 1111111111 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> All the active and predicted cells:
> Inference Active state
> 0000000000 0000000000 0000000000 1111111111 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> All the active and predicted cells:
> Inference Active state
> 0000000000 0000000000 0000000000 0000000000 1111111111
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> ############  Training Pass #1 Complete   ############
> All the active and predicted cells:
> Inference Active state
> 1111111111 0000000000 0000000000 0000000000 0000000000
> Inference Predicted state
> 0000000000 1111111111 0000000000 0000000000 0000000000
> All the active and predicted cells:
> Inference Active state
> 0000000000 1111111111 0000000000 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 1111111111 0000000000 0000000000
> All the active and predicted cells:
> Inference Active state
> 0000000000 0000000000 1111111111 0000000000 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 1111111111 0000000000
> All the active and predicted cells:
> Inference Active state
> 0000000000 0000000000 0000000000 1111111111 0000000000
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 1111111111
> All the active and predicted cells:
> Inference Active state
> 0000000000 0000000000 0000000000 0000000000 1111111111
> Inference Predicted state
> 0000000000 0000000000 0000000000 0000000000 0000000000
> ############  Training Pass #2 Complete   ############
