[theano-users] shared weights gradient updates

2016-10-07 Thread John Moore
Hi All, In one of my neural net models, I have shared weights at several different parts of the network. For example, let's say weights are completely shared at layer 1 and layer 3. Will the gradient update from theano.grad sum the gradients of the shared weights? Or will it simply take the
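A minimal sketch (illustrative names and shapes, not from the thread) of what happens when the same shared weight matrix is reused at two layers: theano.grad returns a single gradient for the shared variable that already sums the contributions from every place it appears in the graph.

    import numpy as np
    import theano
    import theano.tensor as T

    # Toy model in which the same shared weight matrix W is used at two layers.
    rng = np.random.RandomState(0)
    W = theano.shared(rng.randn(5, 5).astype(theano.config.floatX), name='W')
    x = T.matrix('x')

    h1 = T.tanh(T.dot(x, W))     # first layer uses W
    h2 = T.tanh(T.dot(h1, W))    # a later layer reuses the same W
    cost = T.sum(h2 ** 2)

    # One gradient for W, summing the contributions from both uses.
    gW = theano.grad(cost, W)
    f = theano.function([x], gW)

A single update rule such as W - lr * gW then applies the combined gradient.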

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread Pascal Lamblin
On Fri, Oct 07, 2016, luca.wagner.0...@gmail.com wrote: > Hi Fred, > I did a test using: > > from theano.tensor.nnet.conv3d2d import conv3d That's the old conv3d2d code, which should not be needed with cuDNN and which has some pieces that do not work in float16. These are not the problems we should
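A minimal sketch of the direction suggested here, assuming a Theano version recent enough to provide the abstract theano.tensor.nnet.conv3d interface (names and shapes are illustrative): build the graph with the abstract op and let the optimizer substitute the cuDNN implementation, which supports float16, instead of going through the legacy conv3d2d path.

    import numpy as np
    import theano
    import theano.tensor as T

    # Abstract 3D convolution; with device=cuda and cuDNN available, Theano's
    # optimizer can replace it with the cuDNN op, so the legacy conv3d2d code
    # is not involved.
    x = T.tensor5('x')   # (batch, channels, depth, rows, cols)
    w = theano.shared(
        np.random.randn(8, 1, 3, 3, 3).astype(theano.config.floatX), name='w')
    y = T.nnet.conv3d(x, w, border_mode='valid', subsample=(1, 1, 1))
    f = theano.function([x], y)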

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread Frédéric Bastien
On Fri, Oct 7, 2016 at 11:31 AM, Pascal Lamblin wrote: > On Fri, Oct 07, 2016, luca.wagner.0...@gmail.com wrote: > > Hi Fred, > > I did a test using: > > > > from theano.tensor.nnet.conv3d2d import conv3d > > That's the old conv3d2d code, that should not be needed with

Re: [theano-users] Unsolvable Theano import error "No module named cPickle" and "No module named graph"

2016-10-07 Thread Rav
what a headache lol!

[theano-users] How to obtain each weight update of BPTT gradient at time t

2016-10-07 Thread John Moore
Hi All, My understanding of BPTT is to unfold the network, take the gradients through time, then average the weight updates. How do I obtain the weight updates at each timestep? I know that scan automatically performs BPTT for you, so that it gives you only one weight update. Any insight
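One possible sketch (illustrative shapes and a toy loss, not from the thread): unroll a small RNN by hand instead of using scan, so a separate gradient can be taken for each timestep's loss. With scan, theano.grad only hands back the already-summed BPTT gradient.

    import numpy as np
    import theano
    import theano.tensor as T

    # Toy RNN unrolled by hand so each timestep's gradient contribution to the
    # shared weights can be inspected separately (scan would only give the sum).
    n_hidden, n_steps = 4, 3
    W = theano.shared(
        np.random.randn(n_hidden, n_hidden).astype(theano.config.floatX), name='W')
    x = T.tensor3('x')                   # (n_steps, batch, n_hidden)
    h = T.zeros((x.shape[1], n_hidden))

    step_grads = []
    for t in range(n_steps):
        h = T.tanh(T.dot(h, W) + x[t])
        step_loss = T.sum(h ** 2)        # toy per-timestep loss
        step_grads.append(T.grad(step_loss, W))

    f = theano.function([x], step_grads)  # one gradient array per timestep

Note that the gradient at step t still flows back through the earlier steps; this gives the gradient of each timestep's loss, which is one reading of "weight update at time t".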

Re: [theano-users] UserWarning: downsample module has been moved to the pool module

2016-10-07 Thread Beatriz G.
Thank you for your help!! Regards. On Thursday, October 6, 2016 at 21:14:20 (UTC+2), Pascal Lamblin wrote: > > On Thu, Oct 06, 2016, Beatriz G. wrote: > > I cannot use downsample.max_pool_2d, what is it called now? > > theano.tensor.signal.pool.pool_2d() or theano.tensor.signal.pool. >
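For reference, a minimal sketch of the renamed call (argument names such as ds vs. ws vary slightly between Theano releases; this assumes a 0.8-era version):

    import theano
    import theano.tensor as T
    from theano.tensor.signal.pool import pool_2d

    # Replacement for the old downsample.max_pool_2d call.
    x = T.tensor4('x')                   # (batch, channels, rows, cols)
    pooled = pool_2d(x, ds=(2, 2), ignore_border=True, mode='max')
    f = theano.function([x], pooled)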

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread luca . wagner . 0812
Hi Fred, I did the test using theano.tensor.nnet.conv3d2d.conv3d and updated the code with https://github.com/Theano/Theano/pull/4862. .theanorc: [global] floatX =

Re: [theano-users] Error using floatX = float16 to save memory

2016-10-07 Thread luca . wagner . 0812
Hi Fred, I did a test using "from theano.tensor.nnet.conv3d2d import conv3d" and this PR: https://github.com/Theano/Theano/pull/4862

.theanorc:
    [global]
    floatX = float16
    device=cuda
    [cuda]
    root = /usr/local/cuda-7.5
    [nvcc]
    fastmath=True
    optimizer = fast_compile
    [dnn.conv]
    algo_fwd = time_once
    algo_bwd_filter =