Oh, I forgot to mention: This is on a Radeon HD5970, which is like 4 years
old now.

On Tue, Oct 21, 2014 at 6:03 PM, Eric Laukien <[email protected]> wrote:

> From your post it appears that you are getting around a lot of the issues
>> with implementing the temporal pooler on GPU by essentially just having a
>> single distal segment per cell, which has a synapse (a weight) for each
>> cell in its input area, rather than adding synapses over time as needed?
>
>
> Yes, it starts out with all the synapses it will ever need. Growing new
> connections on the GPU would be rather difficult (although doable).
>
> Do you find that this solution gives you good quality of predictions, even
>> in cases where a pattern may be followed by a large number of subsequent
>> patterns?
>
>
> I don't know how good the predictions really are yet; I just know that it
> works at all. I haven't done any testing beyond moving a few patterns
> around the screen and predicting them one step ahead, which worked.
>
>> Also, what kind of speedup are you seeing for the GPU version vs the CPU
>> version?
>
>
> I haven't measured it for my original discrete HTM, but I get 80 steps per
> second for a region with 16384 columns, 65536 cells, 1.3 million column
> connections and 21 million lateral connections. I haven't really optimized
> it yet though.
>
>
> On Tue, Oct 21, 2014 at 5:25 PM, Michael Ferrier <
> [email protected]> wrote:
>
>> Hi Eric,
>>
>> This is interesting to me, because I'm also working on a continuous
>> version of HTM that runs on the GPU. From your post it appears that you are
>> getting around a lot of the issues with implementing the temporal pooler on
>> GPU by essentially just having a single distal segment per cell, which has
>> a synapse (a weight) for each cell in its input area, rather than adding
>> synapses over time as needed? Do you find that this solution gives you good
>> quality of predictions, even in cases where a pattern may be followed by a
>> large number of subsequent patterns? Also, what kind of speedup are you
>> seeing for the GPU version vs the CPU version?
>>
>> Thanks!
>>
>> Mike
>>
>> _____________
>> Michael Ferrier
>> Department of Cognitive, Linguistic and Psychological Sciences, Brown
>> University
>> [email protected]
>>
>> On Mon, Oct 20, 2014 at 4:27 PM, Eric Laukien <[email protected]>
>> wrote:
>>
>>> Hello everyone,
>>>
>>> I'm new to this community. I have recently taken interest in HTM. I come
>>> from the reinforcement learning scene, so naturally I want to use HTM to
>>> perform reinforcement learning. I have designed a new HTM-like algorithm
>>> for this task that I would like to share here.
>>>
>>> My version of HTM differs from the original mainly in that it operates
>>> on continuous states: a cell isn't just on or off, and neither is a
>>> column; each can be anywhere in between. Using continuous states sounds
>>> like it would make the algorithm more complicated, but it has actually
>>> done the opposite. Continuous HTM, at least in the form I have it now, is
>>> much less code than the original (which I also implemented).
>>>
>>> Aside from being smaller, continuous HTM has some other benefits: it
>>> runs really well on the GPU (I already have it running there), and the
>>> continuous states let you encode more information in fewer columns.
>>>
>>> I don't know how biologically plausible this is, but it may or may not
>>> be more practical than standard HTM.
>>>
>>> I have a blog where I describe my attempts to use HTM to outperform
>>> DeepMind's Atari results, including a detailed description of how
>>> continuous HTM works. Link: http://cireneikual.wordpress.com/
>>>
>>> I would like to get your thoughts on this algorithm. I will be releasing
>>> the source code soon, so you can experiment with it yourself.
>>>
>>> I wonder how difficult it would be to modify NuPIC to incorporate
>>> continuous states.
>>>
>>> Thank you for your attention!
>>>
>>
>>
>
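For readers following along, the "single dense distal segment per cell" that Eric describes (one weight for every cell in the input area, allocated up front) can be sketched roughly as follows. This is a hypothetical illustration, not Eric's actual code: the names, the sigmoid predictive state, and the delta-rule learning step are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical sketch (not Eric's implementation): one dense distal
# "segment" per cell, holding a weight for every cell in its input area.
# Prediction becomes a single matrix-vector product, which is why this
# layout maps so naturally onto a GPU.

rng = np.random.default_rng(0)

num_cells = 8    # cells in the region (toy size)
num_inputs = 8   # cells in the input area (here: the region itself)

# W[i, j] = strength of the synapse from input cell j to cell i's single
# segment. It starts out with every synapse it will ever need.
W = rng.uniform(0.0, 0.1, size=(num_cells, num_inputs))

def predict(prev_activations, threshold=0.5):
    """Continuous predictive state: sigmoid of the summed weighted input."""
    drive = W @ prev_activations
    return 1.0 / (1.0 + np.exp(-(drive - threshold)))

def learn(prev_activations, current_activations, lr=0.1):
    """Simple delta-rule stand-in for learning: strengthen weights from
    previously active inputs toward cells that became active, weaken
    weights that predicted activity that did not occur."""
    global W
    error = current_activations - predict(prev_activations)
    W += lr * np.outer(error, prev_activations)

prev = rng.uniform(0.0, 1.0, size=num_inputs)  # continuous states in [0, 1]
pred = predict(prev)
print(pred.shape)  # (8,)
```

Because the weight matrix is fixed-size and dense, both `predict` and `learn` are single GEMV/outer-product kernels, with none of the dynamic segment/synapse bookkeeping that makes the standard temporal pooler awkward on a GPU.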
