Hello everyone, I'm new to this community. I have recently taken an interest in HTM. I come from the reinforcement learning scene, so naturally I want to use HTM to perform reinforcement learning. I have designed a new HTM-like algorithm for this task that I would like to share here.
My version of HTM differs from the original mainly in that it operates on continuous states: a cell isn't just on or off, and neither is a column; each can be anywhere in between. Using continuous states sounds like it would make the algorithm more complicated, but it has actually done the opposite. Continuous HTM, at least in the form I have it now, takes much less code than the original (which I also implemented). Besides being smaller, continuous HTM has some other benefits: it maps very well onto the GPU (I have it running nicely on the GPU right now), and the continuous states let you encode more information in fewer columns. I don't know how biologically plausible this is, but it may or may not be more practical than standard HTM.

I have a blog where I describe my attempts to use HTM to outperform DeepMind's Atari results, including a detailed description of how continuous HTM operates: http://cireneikual.wordpress.com/

I would like to get your thoughts on this algorithm. I will be releasing the source code soon, so you can experiment with it yourself. I also wonder how difficult it would be to modify NuPIC to incorporate continuous states. Thank you for your attention!
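To make the core idea concrete: the post doesn't spell out the actual update rules, but the difference between binary column states and continuous ones can be sketched roughly as below. Everything here (the weight matrix, the overlap computation, the min-max normalization) is my own illustrative assumption, not the author's algorithm; classic HTM-style activation is shown as k-winners-take-all for contrast.

```python
import numpy as np

# Illustrative sketch only: contrasts binary column activation (classic
# HTM spatial-pooler style, k-winners-take-all) with a continuous
# alternative where column states are graded values in [0, 1].

rng = np.random.default_rng(0)
n_inputs, n_columns = 16, 8
weights = rng.random((n_columns, n_inputs))   # hypothetical proximal weights
x = rng.random(n_inputs)                      # continuous input vector

overlap = weights @ x                         # each column's overlap score

# Classic HTM: pick the k best-matching columns, set them to 1, rest to 0
k = 2
binary_state = np.zeros(n_columns)
binary_state[np.argsort(overlap)[-k:]] = 1.0

# Continuous variant: keep graded activations by rescaling overlaps to [0, 1]
continuous_state = (overlap - overlap.min()) / (overlap.max() - overlap.min())

print(binary_state)      # only 0s and 1s
print(continuous_state)  # values anywhere in [0, 1]
```

The intuition for "more information in fewer columns" follows directly: a binary state of n columns with k winners can distinguish only C(n, k) patterns, while graded states carry a real value per column.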
