Hello Scott Purdy,

I tested it with SP-KNN and SP-SVM before. Now I really want to add a TP
and a CLA classifier: let the TP learn from the sequence of SDRs and
predict the next step, and let the CLA classifier decode the
information. I am trying to use an Explorer in ImageSensor to generate a
series of data (for example, EyeMovements would flash the image 9 times,
shifting it by 1 pixel each time). Then from the SP I would get a series
of SDRs even for the same image, and I expect the TP to learn from these
patterns. With many images in the same category (that is a lot of
sequences), I think the TP would learn and make a prediction.
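A toy sketch of what I mean (pure Python, not NuPIC; the image size, the 2x2 blob, and the 3x3 grid of shift offsets are just assumptions to mimic how I understand the EyeMovements explorer's nine presentations):

```python
# Toy illustration: shifting a binary image by 1 pixel in each direction
# (like the EyeMovements explorer) yields a different set of active
# indices each time, so even a single image produces a sequence of
# distinct patterns for the TP to learn from.

WIDTH = 8  # hypothetical image width and height


def shift(active_pixels, dx, dy):
    """Shift a set of (x, y) active pixels, dropping any that fall off."""
    return {(x + dx, y + dy) for (x, y) in active_pixels
            if 0 <= x + dx < WIDTH and 0 <= y + dy < WIDTH}


def to_indices(active_pixels):
    """Flatten (x, y) pixels to sorted flat indices, like a patternNZ list."""
    return sorted(y * WIDTH + x for (x, y) in active_pixels)


image = {(3, 3), (3, 4), (4, 3), (4, 4)}  # a tiny 2x2 blob

# The 9 eye movements: the centre position plus the 8 one-pixel shifts.
offsets = [(dx, dy) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
sequence = [to_indices(shift(image, dx, dy)) for dx, dy in offsets]

for step, pattern in enumerate(sequence):
    print(step, pattern)
```

All nine patterns come out different, which is the "series of SDRs even for the same image" I described above.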

And about "network.run(1)": do you mean that, for example, if I'm using
EyeMovements as the Explorer, that line will shift the image by just
1 pixel rather than running through all 9 shifts? Is that right?

An Qi
Tokyo University of Agriculture and Technology - Nakagawa Laboratory
2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
[email protected]

On Mon, 14 Sep 2015 23:39:42 -0700
 Scott Purdy <[email protected]> wrote:
Hi An Qi,

The CLA classifier is designed for numeric prediction problems, not image classification. If you want to learn more about it, let me know and I can give some more details. For this task, though, I would recommend the KNN classifier. To get started, use the code that Subutai put together for MNIST, which lives here:

https://github.com/numenta/nupic.vision/tree/master/nupic/vision/mnist

The README should have all the information you need to get set up. The
"network.run(1)" line in that code runs one step (a single output from
ImageSensor, propagated through all the regions in the network). This
lets you pull any information or classifications out after each
classification attempt.
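A minimal sketch of that per-step loop (plain Python with a stand-in network object, since the real nupic Network isn't available here; the class and the presentation count are placeholders, not the actual API):

```python
# Sketch of the per-step pattern: call run(1) once per presentation so
# you can inspect state after every single step, instead of run(9),
# which would advance all presentations before you get a look.

class FakeNetwork:
    """Stand-in for a real network: run(n) advances by n single steps."""

    def __init__(self, num_presentations):
        self.step = 0
        self.num_presentations = num_presentations

    def run(self, n):
        self.step += n  # each call advances exactly n presentations


net = FakeNetwork(num_presentations=9)

results = []
for _ in range(net.num_presentations):
    net.run(1)                # one sensor output, propagated one step
    results.append(net.step)  # here you would read the classifier output

print(results)
```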

Please follow up if you have any problems or questions!

On Mon, Sep 14, 2015 at 1:17 AM, <[email protected]> wrote:

Hello,

I am still working on pattern recognition with CLA. Currently I am
trying to add a TP region and a CLA classifier, but I got a little
confused by the parameters. I am using images as the input, so the data
stream looks like this:
MNIST dataset => ImageSensor (with Explorer) => SP => TP => CLAClassifier.
I read the docstring of compute() in CLAClassifier. It is as follows:
    """
    Process one input sample.
This method is called by outer loop code outside the nupic-engine.
We
    use this instead of the nupic engine compute() because our inputs
and
    outputs aren't fixed size vectors of reals.

    Parameters:
    --------------------------------------------------------------------
    recordNum:  Record number of this input pattern. Record numbers
should
                normally increase sequentially by 1 each time unless
there
                are missing records in the dataset. Knowing this
information
insures that we don't get confused by missing records.
    patternNZ:  list of the active indices from the output below
    classification: dict of the classification information:
                      bucketIdx: index of the encoder bucket
                      actValue:  actual value going into the encoder
    learn:      if true, learn this sample
    infer:      if true, perform inference

    retval:     dict containing inference results, there is one entry
for each
                step in self.steps, where the key is the number of
steps, and
                the value is an array containing the relative
likelihood for
                each bucketIdx starting from bucketIdx 0.

                There is also an entry containing the average actual
value to
                use for each bucket. The key is 'actualValues'.

                for example:
                  {1 :             [0.1, 0.3, 0.2, 0.7],
                   4 :             [0.2, 0.4, 0.3, 0.5],
                   'actualValues': [1.5, 3,5, 5,5, 7.6],
                  }
    """

My problem is with the classification argument. For an image, what is
the classification? What are bucketIdx and actValue? I'm using
EyeMovements as the Explorer for ImageSensor, and I assumed the encoder
for the image and ImageSensor (with its explorer) are the same thing.

Another question is about the Explorer: when the network runs via
"net.run(1)", does the Explorer advance one step or one full iteration?
It seems to run just one step.

Thank you.

An Qi
Tokyo University of Agriculture and Technology - Nakagawa Laboratory
2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
[email protected]



