Hello,

I am still working on pattern recognition with the CLA. Currently I am trying
to add a TP region and a CLAClassifier, but I got a little confused by
the parameters. I am using images as the input. The data stream is:
MNIST dataset => ImageSensor (with an Explorer) => SP => TP => CLAClassifier.
I read the docstring of CLAClassifier.compute(), which is as follows:
    """
    Process one input sample.
This method is called by outer loop code outside the nupic-engine.
We
    use this instead of the nupic engine compute() because our inputs
and
    outputs aren't fixed size vectors of reals.

    Parameters:
    --------------------------------------------------------------------
    recordNum:  Record number of this input pattern. Record numbers
should
                normally increase sequentially by 1 each time unless
there
                are missing records in the dataset. Knowing this
information
insures that we don't get confused by missing records.
    patternNZ:  list of the active indices from the output below
    classification: dict of the classification information:
                      bucketIdx: index of the encoder bucket
                      actValue:  actual value going into the encoder
    learn:      if true, learn this sample
    infer:      if true, perform inference

    retval:     dict containing inference results, there is one entry
for each
                step in self.steps, where the key is the number of
steps, and
                the value is an array containing the relative
likelihood for
                each bucketIdx starting from bucketIdx 0.

                There is also an entry containing the average actual
value to
                use for each bucket. The key is 'actualValues'.

                for example:
                  {1 :             [0.1, 0.3, 0.2, 0.7],
                   4 :             [0.2, 0.4, 0.3, 0.5],
                   'actualValues': [1.5, 3,5, 5,5, 7.6],
                  }
    """

My problem is with the classification parameter. For an image, what is the
classification? What should bucketIdx and actValue be? I'm using
EyeMovements as the Explorer for the ImageSensor, and I assumed the encoder
for the image and the ImageSensor (with its explorer) are the same thing.
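To make the question concrete, here is a minimal sketch of the outer-loop call as I understand it from the docstring. The `classifier` object is assumed to be an already-constructed CLAClassifier, the sparse indices and labels are made up, and the choice of using the digit label itself as both bucketIdx and actValue is exactly the guess I am unsure about:

```python
def make_classification(label):
    # Assumed convention for categorical data (this is my guess): use the
    # MNIST digit label itself as both the bucket index and the actual value.
    return {"bucketIdx": label, "actValue": label}

# Hypothetical (TP active-cell indices, digit label) pairs for three records.
samples = [([2, 5, 11], 3), ([0, 7, 9], 3), ([1, 4, 8], 7)]

records = []
for recordNum, (patternNZ, label) in enumerate(samples):
    classification = make_classification(label)
    records.append((recordNum, patternNZ, classification))
    # The actual call, per the docstring (classifier not constructed here):
    # result = classifier.compute(recordNum=recordNum, patternNZ=patternNZ,
    #                             classification=classification,
    #                             learn=True, infer=True)
```

Is this the intended way to fill in classification when the input is an image rather than a scalar going through an encoder?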

My other question is about the Explorer: when the network runs via
"net.run(1)", does the Explorer advance by a single step or by a full
iteration? It seems it just advances a single step.

Thank you.

An Qi
Tokyo University of Agriculture and Technology - Nakagawa Laboratory
2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
[email protected]
