Keras has an implementation of this here:
https://github.com/fchollet/keras/blob/master/keras/layers/local.py

I also implemented my own for the special case of stride 1, 'same' padding, 
and no dilation:
https://gist.github.com/alexlee-gk/c7fca66945298e69e10a4dad81cbc880
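
For reference, the core operation of such a locally connected layer (a
separate, unshared filter at every output position, unlike a convolution's
shared filter) can be sketched in plain NumPy. This is a minimal illustrative
sketch assuming a single-channel input, stride 1, and 'same' zero padding;
the function name and argument layout are my own, not from the gist:

```python
import numpy as np

def local_conv2d(x, w, k):
    """Locally connected 2D layer (unshared weights).

    x: (H, W) single-channel input
    w: (H, W, k, k) filter bank -- one independent k x k filter
       per output position (this is what distinguishes it from
       an ordinary convolution, which shares one filter)
    Uses stride 1 and 'same' zero padding, so output is (H, W).
    """
    H, W = x.shape
    p = k // 2
    xp = np.pad(x, p)          # zero padding for 'same' output size
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            # each output pixel uses its own filter w[i, j]
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w[i, j])
    return out
```

A real GPU implementation would of course vectorize this (e.g. by extracting
all k x k patches and doing one batched multiply, as the gist does), but the
double loop shows the semantics.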


On Thursday, November 19, 2015 at 5:17:12 PM UTC-8, David Warde-Farley 
wrote:
>
> There's nothing in Theano itself for this. The only code I'm aware of that 
> can do this is the cuda-convnet wrappers in pylearn2. Those are optimized 
> for Fermi-class GPUs (GTX 580, etc.), so they won't be that speedy on more 
> recent cards, but they should still work.
>
> On Thursday, November 19, 2015 at 3:30:53 PM UTC-5, Md adnan alam khan 
> wrote:
>>
>> I have checked the following thread:
>>
>> https://groups.google.com/forum/#!topicsearchin/theano-users/local$2C$20non-convolutional/theano-users/LYsmR5gFrtA
>>
>> it looks like pylearn2 has that operator: 
>> pylearn2.linear.local_c01b.Local, documented at 
>> http://deeplearning.net/software/pylearn2/library/linear.html
>>
>> But I think this op is CPU-only. From the documentation: 
>> "Unlike the other cuda-convnet functionality in pylearn2, this linear 
>> transform has CPU support, provided by TheanoLinear."
>>
>> It would be very helpful if I could design a similar layer in Theano. So 
>> my question is:
>>
>> Does Theano have any op similar to this, or is there any way to run 
>> pylearn2.linear.local on the GPU?
>>
>> Thanks in Advance
>>
>
