Re: [theano-users] why is this gradient invalid?

2017-08-09 Thread 佐藤優
I understand. I sincerely thank you. Sato On Thursday, August 10, 2017 at 7:39:21 AM UTC+9, nouiz wrote: > > This is a bug in one Theano optimization: local_dimshuffle_subtensor > > Thanks for the report. I made an issue so that we don't forget it: > > https://github.com/Theano/Theano/issues/6288 > > Frédéric > > On Wed,

Re: [theano-users] why is this gradient invalid?

2017-08-09 Thread Frédéric Bastien
This is a bug in one Theano optimization: local_dimshuffle_subtensor. Thanks for the report. I made an issue so that we don't forget it: https://github.com/Theano/Theano/issues/6288 Frédéric On Wed, Aug 9, 2017 at 4:50 AM 佐藤優 wrote: > I wonder why the code below is invalid. >

Re: [theano-users] Split Op (OpFromGraph) to save intermediate results for grad

2017-08-09 Thread Frédéric Bastien
Sorry, but I'm not able to answer this grad question. Hopefully someone else who understands that part better can answer. Fred On Mon, Jul 31, 2017 at 9:43 AM wrote: > I am trying to build an Op with a custom/optimized gradient formula. To > override the automatic

Re: [theano-users] Why is this GpuFromHost call generated?

2017-08-09 Thread Frédéric Bastien
Hi, which float type do you use? I meant float32. The old back-end only supports float32, so if you use float64 or int32, nothing will be computed on the GPU. The new back-end supports many dtypes, including float64 and int*, so it should work better. Note, if you do an operation between float32 and int32, the
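The casting behaviour Frédéric alludes to can be illustrated with plain NumPy, whose type-promotion rules Theano largely follows (a sketch of the general rule, not a Theano-specific guarantee): mixing float32 with int32 promotes the result to float64, which the old GPU back-end cannot run.

```python
import numpy as np

a32 = np.ones(3, dtype=np.float32)
i32 = np.ones(3, dtype=np.int32)

# float32 combined with int32 is promoted to float64,
# which would fall back to the CPU on the old back-end
print((a32 + i32).dtype)   # float64

# float32 with float32 stays float32 and can run on the GPU
print((a32 + a32).dtype)   # float32
```

This is why explicitly casting index vectors to a smaller integer type (or keeping everything in float32) matters when targeting the old back-end.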

Re: [theano-users] Error while compiling two theano functions

2017-08-09 Thread Frédéric Bastien
Note, I made an issue about this: https://github.com/Theano/Theano/issues/6287 Fred On Mon, Jul 3, 2017 at 7:51 AM Frédéric Bastien wrote: > This is still experimental and we don't have time to work on it now. > > For multiple GPUs, you should do data parallelism. The

Re: [theano-users] How to build a different average pooling operation (I'll call it local average pooling)?

2017-08-09 Thread Jesse Livezey
I think this idea would be something like: y = [1, 2, 3, 0]; y_current_avgpool = (1 + 2 + 3 + 0) / 4; y_new_avgpool = (1 + 2 + 3) / 3. I'm not sure that there is a simple way to do this currently. You could do sum pooling first, then compute the divisors by looking at the number of non-zero
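Jesse's suggestion (sum pooling first, then dividing by the per-window count of non-zero entries) can be sketched in NumPy; `local_avg_pool_1d` is a hypothetical helper for illustration, not a Theano API:

```python
import numpy as np

def local_avg_pool_1d(y, pool_size):
    # Hypothetical helper: average each window over its non-zero entries only.
    y = np.asarray(y, dtype=float)
    windows = y.reshape(-1, pool_size)
    sums = windows.sum(axis=1)            # sum pooling
    counts = (windows != 0).sum(axis=1)   # number of non-zero entries per window
    return sums / np.maximum(counts, 1)   # avoid division by zero for all-zero windows

print(local_avg_pool_1d([1, 2, 3, 0], 4))  # [2.] instead of the plain average 1.5
```

In Theano the same idea would combine a sum-pooling op with an elementwise division by the pooled count of non-zeros, so that gradients flow through both parts automatically.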

Re: [theano-users] Why is this GpuFromHost call generated?

2017-08-09 Thread Haining Yu
Thank you, Fred. Yes, I am using device=gpu0. I will switch to the new back-end and test again. On float64, do you mean int64? If yes, I am puzzled by that too. In my code I never explicitly cast to int64. Instead I use tensor.ivector() to index matrices and cast them explicitly into int32. For

Re: [theano-users] Grouped Convolution Error

2017-08-09 Thread Frédéric Bastien
There has been a fix in Theano. Can you update and try again? On Mon, Jul 24, 2017 at 19:56, Michael Klachko wrote: > I'm trying the new grouped convolutions feature in the latest Theano > version, so I ran a simple convnet with CIFAR-10: 32x32 RGB input images >

Re: [theano-users] Error with theano. This was working fine earlier

2017-08-09 Thread Frédéric Bastien
You changed something in your installation. Try deleting your Theano cache. If that doesn't fix it, try removing all your Python installations; you probably have mixed Pythons in your environment. On Wed, Jul 19, 2017 at 10:37, SUNITHA wrote: > Dear All, > > This is the error

Re: [theano-users] Theano ConftestImportFailure error after installing

2017-08-09 Thread Frédéric Bastien
We don't use py.test, but nosetests. Fred On Tue, Aug 8, 2017 at 12:12, Sara Saeed wrote: > > I am new to Ubuntu and I tried to install Theano using Anaconda. > > After tracking down some other errors and solving them, I am stuck with this > error, which I don't understand

[theano-users] Theano flatten doesn't give the expected output with the ndim argument

2017-08-09 Thread Lalit Pradhan
I have variables 'a' and 'b' which are 'theano.sandbox.cuda.var.CudaNdarraySharedVariable'. I am passing an array of shape (1, 128, 300, 300) into 'a' and an array of shape (1, 1, 300, 300) into 'b'. c = a*b; type(c) = 'theano.tensor.var.TensorVariable' of shape (1, 128, 300, 300). c =
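The shape behaviour described above can be reproduced with NumPy broadcasting, using smaller stand-ins for the (1,128,300,300) and (1,1,300,300) arrays; the `reshape` at the end is the NumPy analogue of what Theano's `flatten(ndim=2)` is documented to do (keep the first dimension, collapse the rest), shown here as a sketch rather than a Theano run:

```python
import numpy as np

# smaller stand-ins for the (1,128,300,300) and (1,1,300,300) arrays
a = np.ones((1, 4, 5, 5), dtype=np.float32)
b = np.ones((1, 1, 5, 5), dtype=np.float32)

c = a * b               # b broadcasts along axis 1
print(c.shape)          # (1, 4, 5, 5)

# flatten(ndim=2) keeps the first ndim-1 dimensions and collapses
# the remaining ones into the last; the NumPy analogue is:
flat = c.reshape(c.shape[0], -1)
print(flat.shape)       # (1, 100)
```

If the Theano result differs from this, comparing against the NumPy shapes is a quick way to pin down where the unexpected output appears.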

Re: [theano-users] Split Op (OpFromGraph) to save intermediate results for grad

2017-08-09 Thread nicolas . granger . m
"forward the precomputed output" means that Op1 already computed the final output, therefore Op2 just has to behaves as identity in the forward pass The intermediate value is already an output of Op1 as shown in the example code, sorry if that wasn't clear. Nicolas Le mardi 8 août 2017

[theano-users] why is this gradient invalid?

2017-08-09 Thread 佐藤優
I wonder why the code below is invalid:

from numpy import *
import theano.tensor as T
x = T.dmatrix("x")
mx = x[..., None, :]
a = T.ones((1, 3))
T.grad(mx[..., 0].dot(a).sum(), a).eval({x: ones((5, 10)).astype(float32)})

The following error is raised.