Hi Nick,

I'm not sure what's wrong. The first thing I would check is the temporal memory
parameters.

- Chetan

On Sun, Sep 7, 2014 at 2:04 PM, Nicholas Mitri <[email protected]>
wrote:

> Hey all,
> I'm bumping this thread after the weekend and would like to add a few more
> questions.
> I modified hello_tp.py and added a sequence generator:
> generate_seqs(items, seq_num, seq_length, replacement, reproduceable)
> e.g. generate_seqs('ABCDE', 5, 4, True, True) produces the sequences seen in
> the log appended below.
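For context, the generator's behavior can be sketched as follows. Only the signature comes from the message above; the body and the fixed seed value are simplified assumptions about what the attached script does:

```python
import random

def generate_seqs(items, seq_num, seq_length, replacement, reproduceable):
    """Generate seq_num random sequences of seq_length characters drawn from
    items, with or without replacement. If reproduceable, seed the RNG so
    repeated runs yield identical sequences."""
    if reproduceable:
        random.seed(42)  # fixed seed; the actual value is an assumption
    seqs = []
    for _ in range(seq_num):
        if replacement:
            seqs.append(tuple(random.choice(items) for _ in range(seq_length)))
        else:
            seqs.append(tuple(random.sample(items, seq_length)))
    return seqs
```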
> The log shows the generated sequences and the predictions being made. 
> The first prediction (multiple) shows all the possible character predictions,
> i.e. those with sufficient overlap with the predictive state of the network.
> The second one is based on the most likely character as decided by the CLA 
> classifier which I imported into hello_tp.py.
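To make the classifier's role concrete, here is a toy stand-in for the step it performs: learn a mapping from the current pattern to a distribution over actual values, then report the most likely one. This class and its methods are illustrative only, not the real CLAClassifier API:

```python
from collections import defaultdict, Counter

class ToyClassifier(object):
    """Illustrative stand-in for the 'most likely next value' inference step."""
    def __init__(self):
        # For each seen pattern, count how often each actual value followed it.
        self.counts = defaultdict(Counter)

    def learn(self, pattern, actual):
        self.counts[tuple(pattern)][actual] += 1

    def infer(self, pattern):
        # Return the most frequently associated value, or None if unseen.
        votes = self.counts.get(tuple(pattern))
        return votes.most_common(1)[0][0] if votes else None
```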
> Accuracy measures are shown at the end of the log. 
> As you can tell, the classifier is performing horribly. I’m not sure whether
> I’m using it correctly, so I’ve attached the modified py file.
> I’d appreciate any comments as to why the classifier is failing to predict so 
> often. 
> Thanks,
> Nick
> ———— STDOUT ———— 
> Training over sequence  ('E', 'C', 'E', 'C', 'C', 'E')
> Training over sequence  ('C', 'A', 'A', 'C', 'E', 'A')
> Training over sequence  ('B', 'C', 'D', 'E', 'A', 'D')
> Training over sequence  ('E', 'B', 'C', 'E', 'C', 'A')
> Training over sequence  ('E', 'C', 'B', 'C', 'E', 'C')
> Testing over sequence  ('E', 'C', 'E', 'C', 'C', 'E')
> Actual value / Multiple Predictions / Best Prediction =   E / none / C
> Actual value / Multiple Predictions / Best Prediction =   C / BC / E
> Actual value / Multiple Predictions / Best Prediction =   E / BE / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / E
> Actual value / Multiple Predictions / Best Prediction =   E / E / C
> Testing over sequence  ('C', 'A', 'A', 'C', 'E', 'A')
> Actual value / Multiple Predictions / Best Prediction =   C / none / E
> Actual value / Multiple Predictions / Best Prediction =   A / BE / E
> Actual value / Multiple Predictions / Best Prediction =   A / A / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / E
> Actual value / Multiple Predictions / Best Prediction =   E / E / A
> Actual value / Multiple Predictions / Best Prediction =   A / A / C
> Testing over sequence  ('B', 'C', 'D', 'E', 'A', 'D')
> Actual value / Multiple Predictions / Best Prediction =   B / none / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / D
> Actual value / Multiple Predictions / Best Prediction =   D / D / E
> Actual value / Multiple Predictions / Best Prediction =   E / E / A
> Actual value / Multiple Predictions / Best Prediction =   A / A / D
> Actual value / Multiple Predictions / Best Prediction =   D / D / B
> Testing over sequence  ('E', 'B', 'C', 'E', 'C', 'A')
> Actual value / Multiple Predictions / Best Prediction =   E / none / C
> Actual value / Multiple Predictions / Best Prediction =   B / BC / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / E
> Actual value / Multiple Predictions / Best Prediction =   E / E / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / A
> Actual value / Multiple Predictions / Best Prediction =   A / A / E
> Testing over sequence  ('E', 'C', 'B', 'C', 'E', 'C')
> Actual value / Multiple Predictions / Best Prediction =   E / none / C
> Actual value / Multiple Predictions / Best Prediction =   C / BC / E
> Actual value / Multiple Predictions / Best Prediction =   B / BE / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / E
> Actual value / Multiple Predictions / Best Prediction =   E / E / C
> Actual value / Multiple Predictions / Best Prediction =   C / C / E
> RESULTS:
> --------
> We use two evaluation metrics to quantify the performance of the TP sequence 
> learning.
> The first weights all multiple predictions equally, i.e. without resorting to
> the CLA Classifier. Here, it is enough for the actual value to be among the
> multiple predictions made.
> The second and more strict metric leverages the CLA Classifier and compares 
> the actual value against the most likely prediction.
> Finally, we provide average prediction scores as an indicator of the
> prediction accuracy at the character level. Each correct prediction
> increments the score by 1/(number of predictions made).
> For example, if the prediction is ABC and the actual value is A, the
> prediction score is incremented by 1/3.
> Prediction Accuracy from metric 1 = 80.00% with an average score of 3.80/5.00
> Prediction Accuracy from metric 2 = 0.00% with an average score of 0.00/5.00
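For concreteness, the per-character scoring rule can be applied to the first tested sequence from the log as below. These per-character values are illustrative only and are not meant to reproduce the aggregate numbers above:

```python
# Each entry is (actual, multiple_predictions, best_prediction), taken from
# the first "Testing over sequence" block in the log; '' stands for 'none'.
log = [
    ('E', '',   'C'),
    ('C', 'BC', 'E'),
    ('E', 'BE', 'C'),
    ('C', 'C',  'C'),
    ('C', 'C',  'E'),
    ('E', 'E',  'C'),
]

def metrics(entries):
    hits1 = hits2 = 0
    score = 0.0
    for actual, multiple, best in entries:
        if actual in multiple:            # metric 1: actual among predictions
            hits1 += 1
            score += 1.0 / len(multiple)  # weighted by number of predictions
        if actual == best:                # metric 2: most likely prediction only
            hits2 += 1
    n = len(entries)
    return hits1 / float(n), hits2 / float(n), score

acc1, acc2, score = metrics(log)
```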
> On Sep 4, 2014, at 11:37 PM, Nicholas Mitri <[email protected]> wrote:
>> Hey all, 
>> 
>> I’d like to dedicate this thread for discussing some TP implementation and 
>> practical questions, namely those associated with the introductory file to 
>> the TP, hello_tp.py. 
>> 
>> Below is the print out of a TP with 50 columns, 1 cell per column being 
>> trained as described in the py file for 2 iterations on the sequence 
>> A->B->C->D->E. Each pattern is fed directly into the TP as an active network 
>> state. 
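Concretely, each character maps to one block of 10 consecutive active columns in the 50-column region, matching the printouts below. A minimal sketch of that encoding (an assumption reconstructed from the output, not the exact code in hello_tp.py):

```python
def encode(char, n_cols=50, block=10):
    """Return a 50-element 0/1 list with one block of 10 consecutive on-bits
    whose position is determined by the character's index in 'ABCDE'."""
    idx = 'ABCDE'.index(char)
    return [1 if idx * block <= i < (idx + 1) * block else 0
            for i in range(n_cols)]
```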
>> 
>> I’ve been playing around with the configurations and have a few questions. 
>> 
>> 1) Why are there no predictive states being seen for the first training pass 
>> (i.e. seeing the entire sequence once)? Even if activationThreshold and 
>> minThreshold are set sufficiently low to make segments sensitive, no lateral 
>> activation happens. Are cells initialized with no segments?
>> 2) If segments are created during initialization, how is their connectivity 
>> to the cells of the region configured? How are permanence values allocated? 
>> Same as proximal synapses in the TP?
>> 3) In the second training pass, we go from no predictive cells to perfectly 
>> predictive cells associated with the next character. I would typically 
>> expect the network to show scattered predictive cells before it homes in on 
>> the right prediction (consecutive 10 on-bits in this example). Why the 
>> abrupt shift in predictive behavior? Is this related to 
>> getBestMatchingCell()?
>> 4) Finally, the printCells() function outputs the following. Can you please 
>> explain what each entry means?
>> 
>> Column 41 Cell 0 : 1 segment(s)
>>    Seg #0   ID:41    True 0.2000000 (   3/3   )    0 [30,0]1.00 [31,0]1.00 
>> [32,0]1.00 [33,0]1.00 [34,0]1.00 [35,0]1.00 [36,0]1.00 [37,0]1.00 [38,0]1.00 
>> [39,0]1.00
>> 
>> Thanks,
>> Nick
>> ———————————— PRINT OUT ———————————— 
>>  
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 1111111111 0000000000 0000000000 0000000000 0000000000 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 1111111111 0000000000 0000000000 0000000000 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 0000000000 1111111111 0000000000 0000000000 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 0000000000 0000000000 1111111111 0000000000 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 0000000000 0000000000 0000000000 1111111111 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000 
>> 
>> ############  Training Pass #1 Complete   ############
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 1111111111 0000000000 0000000000 0000000000 0000000000 
>> Inference Predicted state
>> 0000000000 1111111111 0000000000 0000000000 0000000000 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 1111111111 0000000000 0000000000 0000000000 
>> Inference Predicted state
>> 0000000000 0000000000 1111111111 0000000000 0000000000 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 0000000000 1111111111 0000000000 0000000000 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 1111111111 0000000000 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 0000000000 0000000000 1111111111 0000000000 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 1111111111 
>> 
>> All the active and predicted cells:
>> 
>> Inference Active state
>> 0000000000 0000000000 0000000000 0000000000 1111111111 
>> Inference Predicted state
>> 0000000000 0000000000 0000000000 0000000000 0000000000 
>> 
>> ############  Training Pass #2 Complete   ############
