The version you have uses just spatial pooling, so it only learns from
still images. However, I do have a version with temporal inference as well.
You can either implement it yourself by following the description of it on
my blog (
https://cireneikual.wordpress.com/2014/12/24/continuous-hierarchical-temporal-memory-temporal-inference/)
or you can use CHTMGPU, the main CHTM implementation I work with, which
includes it (it is feature-complete).
CHTMGPU is the GPU version of CHTM. It is currently set up as a
reinforcement learner ("AGI"), but it should be simple to repurpose it for
classification; I can help you set that up if you like. It is roughly 100
times faster than the CPU version.
Here is a link to the CHTMGPU repository:
https://github.com/222464/ContinuousHTMGPU
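To make the spatial-pooling-only version concrete, here is a minimal sketch (not code from either repository; all names, shapes, and parameters below are illustrative): spatial pooling maps each input to a sparse distributed representation (SDR) by letting the k best-matching columns win, then nudging the winners toward the input.

```python
import numpy as np

def spatial_pool(x, weights, k):
    """Map input x to a sparse code: the top-k columns by overlap win."""
    overlaps = weights @ x                   # one overlap score per column
    winners = np.argsort(overlaps)[-k:]      # k-winners-take-all
    sdr = np.zeros(weights.shape[0])
    sdr[winners] = 1.0
    return sdr, winners

def learn(x, weights, winners, lr=0.1):
    """Hebbian-style update: move the winning columns toward the input."""
    weights[winners] += lr * (x - weights[winners])
    return weights

# Toy usage: 16-dim inputs, 32 columns, 4 active columns per input.
rng = np.random.default_rng(0)
weights = rng.random((32, 16))
x = rng.random(16)
sdr, winners = spatial_pool(x, weights, k=4)
weights = learn(x, weights, winners)
```

Note that nothing here depends on the order inputs arrive in, which is exactly why this version only learns from still images; sequence learning is what the temporal-inference extension adds.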
On Sun, Jan 4, 2015 at 7:46 PM, <[email protected]> wrote:
> Hello:
>
> Hi, Eric. I just read part of your code. It seems to be an RBF network
> with an SDR implementation, but the classification results are good. A
> question about Continuous HTM: does it learn from dynamic data? The
> original HTMCLA learns from streaming data, a series of sequences.
>
> On Sun, 4 Jan 2015 00:39:47 -0500
> Eric Laukien <[email protected]> wrote:
>
>> The code is available here:
>> https://github.com/222464/AILib/tree/master/Source/rbf
>>
>> It's called SDRRBFNetwork, but it works just like HTM.
>>
>> The plankton example is in Source/Kaggle.cpp, the MNIST example is in
>> Source/Main.cpp at the very end of the file.
>>
>> On Sun, Jan 4, 2015 at 12:29 AM, Michael Klachko
>> <[email protected]> wrote:
>>
>>> Very interesting! Can you please share your code, and any instructions
>>> on how to run it, so I could recreate this result? Thanks!
>>>
>>> On Sat, Jan 3, 2015 at 9:12 PM, Eric Laukien <[email protected]>
>>> wrote:
>>>
>>>> I just ran it; I got 95% accuracy after training for 15 minutes on a
>>>> single CPU core.
>>>>
>>>> On Sat, Jan 3, 2015 at 10:49 PM, Michael Klachko
>>>> <[email protected]> wrote:
>>>>
>>>>> Hey Eric, have you tried your implementation on MNIST or CIFAR? If not,
>>>>> can you please do so and post the results? I remember someone here
>>>>> mentioned he got 60% on MNIST with his version of HTM.
>>>>>
>>>>> On Sat, Jan 3, 2015 at 12:06 AM, Eric Laukien <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> The original HTM isn't really suitable for images as far as I know,
>>>>>> so I developed an extension specifically to handle image information,
>>>>>> called Continuous HTM.
>>>>>> I then made a classifier from it and applied it to a Kaggle
>>>>>> competition, getting great results within just a few minutes on a
>>>>>> single CPU core.
>>>>>> The competition (still running) is about classifying 123 species of
>>>>>> plankton based on images.
>>>>>> Here is an image. Keep in mind that 51% accuracy is actually very good
>>>>>> on this competition for a first attempt (the world record is something
>>>>>> like 71%), and I only trained for a few minutes.
>>>>>> Let me know if you are interested; I can share the code with you.
>>>>>>
>>>>>> On Fri, Jan 2, 2015 at 9:40 PM, <[email protected]> wrote:
>>>>>>
>>>>>>> Hello.
>>>>>>>
>>>>>>> I'm a master's student at TUAT. Currently I am doing research on
>>>>>>> handwritten character recognition (offline recognition), and I'm very
>>>>>>> interested in HTM. I read the papers "How the Brain Might Work: A
>>>>>>> Hierarchical and Temporal Model for Learning and Recognition" and
>>>>>>> "Pattern Recognition by Hierarchical Temporal Memory", which are
>>>>>>> about the old version of HTM. I think the results shown in the papers
>>>>>>> are good, and I want to test HTM on some other datasets (alphabets,
>>>>>>> maybe more complex datasets like Chinese characters), but then I
>>>>>>> found the old HTM is obsolete. Now I want to test the HTMCLA on the
>>>>>>> MNIST database (it seems someone already did it, but I didn't find
>>>>>>> any paper showing the results).
>>>>>>>
>>>>>>> I found there is an MNIST test on GitHub
>>>>>>> (https://github.com/numenta/nupic.research/blob/master/image_test/mnist_test.py).
>>>>>>> Then I dumped all the images and labels from the MNIST database
>>>>>>> (http://yann.lecun.com/exdb/mnist/) and tried to use them to see if
>>>>>>> it works. After fixing some errors, the program could run without any
>>>>>>> problem. But the results show that the KNNClassifier only learned 1
>>>>>>> category:
>>>>>>> "Num categories learned 1"
>>>>>>> The accuracy is lower than 10%.
>>>>>>> Does anyone know what kind of problem this is? Could anyone help me?
>>>>>>>
>>>>>>> Thank you.
>>>>>>>
>>>>>>> An Qi
>>>>>>> Tokyo University of Agriculture and Technology - Nakagawa Laboratory
>>>>>>> 2-24-16 Naka-cho, Koganei-shi, Tokyo 184-8588
>>>>>>> [email protected]