Hey all,

I'm bumping this thread after the weekend and would like to add some more inquiries.
I modified hello_tp.py and added a sequence generator:

generate_seqs(items, seq_num, seq_length, replacement, reproduceable)

e.g. generate_seqs('ABCDE', 5, 4, True, True) produces the sequences seen in the log appended below.
The log shows the generated sequences and the predictions being made. 
The first prediction (multiple) shows all possible character predictions, i.e. those with sufficient overlap with the predictive state of the network.
The second is the single most likely character as decided by the CLA classifier, which I imported into hello_tp.py.
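Since helper.py isn't attached to the thread, here is a minimal sketch of what generate_seqs and overlap might look like, assuming the behavior described above. The seed value and internal structure are my guesses, not the actual helper code:

```python
import random

def generate_seqs(items, seq_num, seq_length, replacement, reproduceable):
    # Seed the RNG when reproducibility is requested so repeated runs match
    # (the seed value 42 is an assumption, not from the real helper)
    rng = random.Random(42) if reproduceable else random.Random()
    seqs = []
    for _ in range(seq_num):
        if replacement:
            # Characters may repeat within a sequence
            seq = tuple(rng.choice(items) for _ in range(seq_length))
        else:
            # Each character appears at most once (requires seq_length <= len(items))
            seq = tuple(rng.sample(items, seq_length))
        seqs.append(seq)
    return seqs

def overlap(a, b):
    # Number of positions where both binary vectors are active
    return sum(int(p) & int(q) for p, q in zip(a, b))
```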

Accuracy measures are shown at the end of the log.
As you can tell, the classifier is performing horribly. I'm not sure whether I'm using it correctly, so I've attached the modified py file.
I'd appreciate any comments as to why the classifier fails to predict so often.

Thanks,
Nick

#!/usr/bin/env python
__author__ = 'Hajj'

import sys
sys.path.append('/Users/Hajj/nta/eng/lib/python2.6/site-packages')
import numpy

from helper import generate_seqs, overlap
from nupic.encoders.random_distributed_scalar import RandomDistributedScalarEncoder as RDSE
from nupic.algorithms.CLAClassifier import CLAClassifier
from nupic.research.TP import TP

# Utility routine for printing the input vector
def formatRow(x):
  s = ''
  for c in range(len(x)):
    if c > 0 and c % 10 == 0:
      s += ' '
    s += str(x[c])
  s += ' '
  return s

#######################################################################
#
# Create Temporal Pooler and CLA Classifier instances with appropriate parameters
tp = TP(numberOfCols=50, cellsPerColumn=10,
        initialPerm=0.5, connectedPerm=0.5,
        minThreshold=5, newSynapseCount=10,
        permanenceInc=0.1, permanenceDec=0.1,
        activationThreshold=7,
        globalDecay=0, burnIn=1,
        checkSynapseConsistency=False,
        pamLength=10)

classifier = CLAClassifier(verbosity=0)

""" NOTES:
 - activationThreshold: A segment is active if it has >= activationThreshold connected
    synapses that are active due to activeState.
 - minThreshold: For the given cell, find the segment with the largest number of active
    synapses. This routine is aggressive in finding the best match. The
    permanence value of synapses is allowed to be below connectedPerm. The number
    of active synapses is allowed to be below activationThreshold, but must be
    above minThreshold.
    """
#######################################################################
#
# Generate sequences to feed to the TP
seq_num = 5
seq_length = 6
replacement = True
reproduceable = False
seqs = generate_seqs('ABCDE', seq_num, seq_length, replacement, reproduceable)

seq_num = len(seqs)
seq_length = len(seqs[0])

#######################################################################
#
# Create input vectors to feed to the temporal pooler.
x = numpy.zeros((5,tp.numberOfCols), dtype="uint32")
x[0,0:10]  = 1   # Input SDR representing "A", corresponding to columns 0-9
x[1,10:20] = 1   # Input SDR representing "B", corresponding to columns 10-19
x[2,20:30] = 1   # Input SDR representing "C", corresponding to columns 20-29
x[3,30:40] = 1   # Input SDR representing "D", corresponding to columns 30-39
x[4,40:50] = 1   # Input SDR representing "E", corresponding to columns 40-49


#######################################################################
# Loop over all generated sequences
record_num = 1
for s in seqs:

    print 'Training over sequence ', s
    # Send this sequence to the temporal pooler for learning for several passes
    for i in range(10):

      # Send each letter in the sequence in order
      for j in range(seq_length):

        # Fetch the right encoding using the character's ASCII value (ord)
        enc = x[ord(s[j])-ord('A')]

        # Send each vector to the TP, with learning turned ON
        tp.compute(enc, enableLearn = True, computeInfOutput = True)

        # Pass active cell indices to classifier with learning turned ON
        active_idx = list(tp.getActiveState().nonzero()[0])
        classifier.compute(record_num, active_idx, {'bucketIdx':ord(s[j])-ord('A'),'actValue':s[j]}, True, True)
        record_num += 1

        # print "\nAll the active and predicted cells:"
        # tp.printStates(printPrevious = False, printLearnState = False)

        # This function prints the segments associated with every cell.
        # If you really want to understand the TP, uncomment this line. By following
        # every step you can get an excellent understanding for exactly how the TP
        # learns.
        # tp.printCells()


      tp.reset()
      # print "\n############  Training Pass #%i Complete   ############\n" %(i+1)


# #######################################################################
#
# Step 3: send the same sequence of vectors and look at predictions made by
# temporal pooler

SeqPredictedCnt1, SeqPredictedCnt2 = 0, 0
PredictionScore1, PredictionScore2 = 0, 0

for s in seqs:

    isSeqPredicted1 = True
    isSeqPredicted2 = True
    prediction = 'none'
    print '\nTesting over sequence ', s

    for j in range(seq_length):

        # Fetch the encoding using the character's ASCII value (ord)
        enc = x[ord(s[j])-ord('A')]

        # Send each vector to the TP, with learning turned off
        tp.compute(enc, enableLearn = False, computeInfOutput = True)

        # Pass active cell indices to classifier with learning turned OFF
        active_idx = list(tp.getActiveState().nonzero()[0])
        results = classifier.compute(record_num, active_idx, {'bucketIdx':ord(s[j])-ord('A'),'actValue':s[j]}, False, True)

        # Compute likelihoods to find the most likely bucket and actual value pair to predict
        likelihoods = results[1]
        max_likelihood_idx = likelihoods.argmax()
        classifier_prediction = results['actualValues'][max_likelihood_idx]
        record_num += 1

        if j>0 and not s[j] == classifier_prediction:
            isSeqPredicted2 = False
        elif s[j] == classifier_prediction:
            PredictionScore2 += 1

        print 'Actual value / Multiple Predictions / Best Prediction =  ',s[j], '/',  prediction, '/', classifier_prediction

        # Find all the predictions made by the TP by bit-wise comparison of each character with the
        # predicted columns. A threshold of 7 is used here, i.e. an overlap of 7 or more active bits
        # is interpreted as a hit. Then, track the accuracy of the sequence prediction.
        if j > 0 and s[j] not in prediction:
            isSeqPredicted1 = False
        elif s[j] in prediction:
            PredictionScore1 += 1.0 / len(prediction)

        predictedCells = tp.getPredictedState()
        predictedCols = predictedCells.max(axis=1)

        prediction = ''
        for i in xrange(len(x)):
            if overlap(x[i], predictedCols) > 7:
                prediction += chr(ord('A')+i)


    if isSeqPredicted1:
        SeqPredictedCnt1 += 1
    if isSeqPredicted2:
        SeqPredictedCnt2 += 1

print "\n\nRESULTS:\n--------"
print """We use two evaluation metrics to quantify the performance of the TP sequence learning.
The first weights multiple predictions equally, i.e. without resorting to the CLA Classifier. Here, it is enough for the actual value to be among the multiple predictions made.
The second and stricter metric leverages the CLA Classifier and compares the actual value against the single most likely prediction.
Finally, we provide average prediction scores as an indicator of the prediction accuracy at the character level. Each correct prediction increments the score by 1/(number of predictions made).
For example, if the prediction is ABC and the actual value is A, the prediction score is incremented by 1/3."""

print '\nPrediction Accuracy from metric 1 = %0.2f%% with an average score of %0.2f/%i.00' %(SeqPredictedCnt1*100.0/seq_num, PredictionScore1/float(seq_num), seq_length-1)
print 'Prediction Accuracy from metric 2 = %0.2f%% with an average score of %0.2f/%i.00' %(SeqPredictedCnt2*100.0/seq_num, PredictionScore2/float(seq_num), seq_length-1)
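For anyone following along: for a classifier constructed with the default steps=[1], the result of compute() is a dict-like object keyed by prediction step, plus an 'actualValues' list mapping buckets back to values. The lookup pattern used in the testing loop above can be illustrated with plain Python; the numbers below are made up for illustration, not output from this run:

```python
import numpy

# Hypothetical classifier result for a 1-step-ahead prediction (made-up numbers)
results = {
    'actualValues': ['A', 'B', 'C', 'D', 'E'],       # value represented by each bucket
    1: numpy.array([0.05, 0.10, 0.60, 0.05, 0.20]),  # likelihood per bucket, 1 step ahead
}

# Same lookup as in the script: pick the bucket with the highest likelihood
likelihoods = results[1]
max_likelihood_idx = likelihoods.argmax()
classifier_prediction = results['actualValues'][max_likelihood_idx]
```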




———— STDOUT ———— 

Training over sequence  ('E', 'C', 'E', 'C', 'C', 'E')
Training over sequence  ('C', 'A', 'A', 'C', 'E', 'A')
Training over sequence  ('B', 'C', 'D', 'E', 'A', 'D')
Training over sequence  ('E', 'B', 'C', 'E', 'C', 'A')
Training over sequence  ('E', 'C', 'B', 'C', 'E', 'C')

Testing over sequence  ('E', 'C', 'E', 'C', 'C', 'E')
Actual value / Multiple Predictions / Best Prediction =   E / none / C
Actual value / Multiple Predictions / Best Prediction =   C / BC / E
Actual value / Multiple Predictions / Best Prediction =   E / BE / C
Actual value / Multiple Predictions / Best Prediction =   C / C / C
Actual value / Multiple Predictions / Best Prediction =   C / C / E
Actual value / Multiple Predictions / Best Prediction =   E / E / C

Testing over sequence  ('C', 'A', 'A', 'C', 'E', 'A')
Actual value / Multiple Predictions / Best Prediction =   C / none / E
Actual value / Multiple Predictions / Best Prediction =   A / BE / E
Actual value / Multiple Predictions / Best Prediction =   A / A / C
Actual value / Multiple Predictions / Best Prediction =   C / C / E
Actual value / Multiple Predictions / Best Prediction =   E / E / A
Actual value / Multiple Predictions / Best Prediction =   A / A / C

Testing over sequence  ('B', 'C', 'D', 'E', 'A', 'D')
Actual value / Multiple Predictions / Best Prediction =   B / none / C
Actual value / Multiple Predictions / Best Prediction =   C / C / D
Actual value / Multiple Predictions / Best Prediction =   D / D / E
Actual value / Multiple Predictions / Best Prediction =   E / E / A
Actual value / Multiple Predictions / Best Prediction =   A / A / D
Actual value / Multiple Predictions / Best Prediction =   D / D / B

Testing over sequence  ('E', 'B', 'C', 'E', 'C', 'A')
Actual value / Multiple Predictions / Best Prediction =   E / none / C
Actual value / Multiple Predictions / Best Prediction =   B / BC / C
Actual value / Multiple Predictions / Best Prediction =   C / C / E
Actual value / Multiple Predictions / Best Prediction =   E / E / C
Actual value / Multiple Predictions / Best Prediction =   C / C / A
Actual value / Multiple Predictions / Best Prediction =   A / A / E

Testing over sequence  ('E', 'C', 'B', 'C', 'E', 'C')
Actual value / Multiple Predictions / Best Prediction =   E / none / C
Actual value / Multiple Predictions / Best Prediction =   C / BC / E
Actual value / Multiple Predictions / Best Prediction =   B / BE / C
Actual value / Multiple Predictions / Best Prediction =   C / C / E
Actual value / Multiple Predictions / Best Prediction =   E / E / C
Actual value / Multiple Predictions / Best Prediction =   C / C / E


RESULTS:
--------
We use two evaluation metrics to quantify the performance of the TP sequence learning.
The first weights multiple predictions equally, i.e. without resorting to the CLA Classifier. Here, it is enough for the actual value to be among the multiple predictions made.
The second and stricter metric leverages the CLA Classifier and compares the actual value against the single most likely prediction.
Finally, we provide average prediction scores as an indicator of the prediction accuracy at the character level. Each correct prediction increments the score by 1/(number of predictions made).
For example, if the prediction is ABC and the actual value is A, the prediction score is incremented by 1/3.

Prediction Accuracy from metric 1 = 80.00% with an average score of 3.80/5.00
Prediction Accuracy from metric 2 = 0.00% with an average score of 0.00/5.00


On Sep 4, 2014, at 11:37 PM, Nicholas Mitri <[email protected]> wrote:

Hey all, 

I'd like to dedicate this thread to discussing some TP implementation and practical questions, namely those associated with the introductory TP file, hello_tp.py.

Below is the printout of a TP with 50 columns and 1 cell per column, trained as described in the py file for 2 iterations on the sequence A->B->C->D->E. Each pattern is fed directly into the TP as an active network state.

I’ve been playing around with the configurations and have a few questions. 

1) Why are there no predictive states seen during the first training pass (i.e. after seeing the entire sequence once)? Even if activationThreshold and minThreshold are set sufficiently low to make segments sensitive, no lateral activation happens. Are cells initialized with no segments?
2) If segments are created during initialization, how is their connectivity to the cells of the region configured? How are permanence values allocated? The same as for proximal synapses in the TP?
3) In the second training pass, we go from no predictive cells to perfectly predictive cells associated with the next character. I would typically expect the network to show scattered predictive cells before it hones in on the right prediction (10 consecutive on-bits in this example). Why the abrupt shift in predictive behavior? Is this related to getBestMatchingCell()?
4) Finally, the printCells() function outputs the following. Can you please explain what each entry means?

Column 41 Cell 0 : 1 segment(s)
   Seg #0   ID:41    True 0.2000000 (   3/3   )    0 [30,0]1.00 [31,0]1.00 [32,0]1.00 [33,0]1.00 [34,0]1.00 [35,0]1.00 [36,0]1.00 [37,0]1.00 [38,0]1.00 [39,0]1.00

Thanks,
Nick
———— PRINT OUT ————
 
All the active and predicted cells:

Inference Active state
1111111111 0000000000 0000000000 0000000000 0000000000 
Inference Predicted state
0000000000 0000000000 0000000000 0000000000 0000000000 

All the active and predicted cells:

Inference Active state
0000000000 1111111111 0000000000 0000000000 0000000000 
Inference Predicted state
0000000000 0000000000 0000000000 0000000000 0000000000 

All the active and predicted cells:

Inference Active state
0000000000 0000000000 1111111111 0000000000 0000000000 
Inference Predicted state
0000000000 0000000000 0000000000 0000000000 0000000000 

All the active and predicted cells:

Inference Active state
0000000000 0000000000 0000000000 1111111111 0000000000 
Inference Predicted state
0000000000 0000000000 0000000000 0000000000 0000000000 

All the active and predicted cells:

Inference Active state
0000000000 0000000000 0000000000 0000000000 1111111111 
Inference Predicted state
0000000000 0000000000 0000000000 0000000000 0000000000 

############  Training Pass #1 Complete   ############

All the active and predicted cells:

Inference Active state
1111111111 0000000000 0000000000 0000000000 0000000000 
Inference Predicted state
0000000000 1111111111 0000000000 0000000000 0000000000 

All the active and predicted cells:

Inference Active state
0000000000 1111111111 0000000000 0000000000 0000000000 
Inference Predicted state
0000000000 0000000000 1111111111 0000000000 0000000000 

All the active and predicted cells:

Inference Active state
0000000000 0000000000 1111111111 0000000000 0000000000 
Inference Predicted state
0000000000 0000000000 0000000000 1111111111 0000000000 

All the active and predicted cells:

Inference Active state
0000000000 0000000000 0000000000 1111111111 0000000000 
Inference Predicted state
0000000000 0000000000 0000000000 0000000000 1111111111 

All the active and predicted cells:

Inference Active state
0000000000 0000000000 0000000000 0000000000 1111111111 
Inference Predicted state
0000000000 0000000000 0000000000 0000000000 0000000000 

############  Training Pass #2 Complete   ############
