I understand.
I sincerely thank you.
Sato
On Thursday, August 10, 2017 at 7:39:21 AM UTC+9, nouiz wrote:
This is a bug in one Theano optimization: local_dimshuffle_subtensor
Thanks for the report. I made an issue so that we don't forget it:
https://github.com/Theano/Theano/issues/6288
Frédéric
On Wed, Aug 9, 2017 at 4:50 AM 佐藤優 wrote:
> I wonder why the code below is invalid:
>
Sorry, but I'm not able to answer this grad question. Hopefully someone
else who understands that part better can answer.
Fred
On Mon, Jul 31, 2017 at 9:43 AM wrote:
> I am trying to build an Op with a custom/optimized gradient formula. To
> override the automatic
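The quoted question is cut off, but whatever the custom grad() formula turns out to be, it is worth checking the hand-derived gradient against finite differences (Theano provides theano.gradient.verify_grad for exactly this). Below is a framework-free NumPy sketch; `f` and `grad_f` are hypothetical stand-ins, not the poster's actual Op:

```python
import numpy as np

def f(x):
    # example forward function: sum of squares
    return (x ** 2).sum()

def grad_f(x):
    # hand-derived "optimized" gradient of sum(x**2)
    return 2 * x

def numeric_grad(f, x, eps=1e-6):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        g.flat[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.random.rand(5)
assert np.allclose(grad_f(x), numeric_grad(f, x), atol=1e-4)
```

If the analytic and numeric gradients disagree, the grad() formula (not the forward pass) is usually the culprit.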
Hi,
do you use floats? I meant float32. The old back-end only supports
float32, so if you use float64 or int32, nothing will be computed on the GPU.
The new back-end supports many dtypes, including float64 and int*. So it
should work better.
Note, if you do an operation between float32 and int32, the
note, I made an issue about this:
https://github.com/Theano/Theano/issues/6287
Fred
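The note above is cut off, but it presumably refers to dtype promotion: mixing float32 with int32 upcasts the result to float64, which the old back-end then cannot run on the GPU. NumPy follows the same promotion rule:

```python
import numpy as np

a32 = np.ones(3, dtype=np.float32)
i32 = np.ones(3, dtype=np.int32)

# int32 values do not fit losslessly in float32, so the result
# is promoted to float64 -- silently leaving the old GPU back-end.
c = a32 + i32
```

Casting the integer side to int16 (or the float side to float64 explicitly) avoids the surprise promotion.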
On Mon, Jul 3, 2017 at 7:51 AM Frédéric Bastien
wrote:
> This is still experimental and we don't have time to work on it now.
>
> For multiple GPUs, you should do data parallelism. The
I think this idea would be something like
y = [1, 2, 3, 0]
y_current_avgpool = (1 + 2 + 3 + 0) / 4
y_new_avgpool = (1 + 2 + 3) / 3
I'm not sure there is a simple way to do this currently. You could do
sum pooling first, then compute the divisors by counting the number of
non-zero entries in each window.
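The suggestion above can be sketched in NumPy for 1-D, non-overlapping windows (`avgpool_ignore_zeros` is a hypothetical helper name, not a Theano function):

```python
import numpy as np

def avgpool_ignore_zeros(y, k):
    """Average pooling that divides each window's sum by the number
    of non-zero entries instead of the window size k."""
    y = np.asarray(y, dtype=float)
    n = len(y) // k
    windows = y[: n * k].reshape(n, k)
    sums = windows.sum(axis=1)               # sum pooling
    counts = (windows != 0).sum(axis=1)      # per-window non-zero divisors
    return sums / np.maximum(counts, 1)      # guard against all-zero windows

# For y = [1, 2, 3, 0] and k = 4 this gives (1 + 2 + 3) / 3 = 2.0,
# instead of the plain average (1 + 2 + 3 + 0) / 4 = 1.5.
```

In Theano the same idea would be expressed with a sum-pool op plus an elementwise division by the non-zero counts.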
Thank you Fred.
Yes I am using device=gpu0. I will switch to the new backend and test again.
Regarding float64, do you mean int64? If yes, I am puzzled by that too. In my code I
never explicitly cast to int64. Instead I use tensor.ivector() to index
matrices and cast them explicitly into int32. For
There has been a fix in Theano. Can you update and try again?
On Mon, Jul 24, 2017 at 7:56 PM, Michael Klachko wrote:
> I'm trying the new grouped convolutions feature in the latest Theano
> version, so I ran a simple convnet with CIFAR-10: 32x32 RGB input images
>
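For reference, grouped convolution splits the input channels into G groups and convolves each group only with its own set of filters. A naive NumPy sketch of the semantics (plain correlation, "valid" mode, stride 1 — this is an illustration, not Theano's actual implementation):

```python
import numpy as np

def grouped_conv2d(x, w, groups):
    """Naive grouped 2D convolution.
    x: (C_in, H, W), w: (C_out, C_in // groups, kH, kW)."""
    c_in, H, W = x.shape
    c_out, c_in_g, kH, kW = w.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert c_in_g == c_in // groups
    out = np.zeros((c_out, H - kH + 1, W - kW + 1))
    for o in range(c_out):
        g = o // (c_out // groups)            # group this filter belongs to
        xs = x[g * c_in_g:(g + 1) * c_in_g]   # only that group's input channels
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = (xs[:, i:i + kH, j:j + kW] * w[o]).sum()
    return out
```

With groups=1 this reduces to an ordinary convolution; with groups=C_in it becomes a depthwise convolution.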
You changed something in your installation. Try deleting your Theano
cache. If that doesn't fix it, try removing all your Python installations;
you probably have mixed Pythons in your environment.
On Wed, Jul 19, 2017 at 10:37 AM, SUNITHA wrote:
> Dear All,
>
> This is the error
We don't use py.test, but nosetests.
Fred
On Tue, Aug 8, 2017 at 12:12 PM, Sara Saeed wrote:
>
> I am new to Ubuntu and I tried to install Theano using Anaconda.
>
> After tracking down some other errors and solving them, I am stuck with this
> error, which I don't understand
I have variables 'a' and 'b' which are
'theano.sandbox.cuda.var.CudaNdarraySharedVariable'.
I am passing an array of shape ((1,128,300,300)) into 'a'
I am passing an array of shape ((1,1,300,300)) into 'b'
c = a*b. type(c) = 'theano.tensor.var.TensorVariable' of shape
(1, 128, 300, 300)
c =
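The elementwise product works because b's size-1 channel dimension broadcasts across a's 128 channels. The same rule in NumPy, with smaller stand-in shapes:

```python
import numpy as np

# Smaller stand-ins for the (1,128,300,300) and (1,1,300,300) arrays:
a = np.ones((1, 4, 6, 6), dtype=np.float32)
b = np.ones((1, 1, 6, 6), dtype=np.float32) * 2

c = a * b            # b's size-1 channel axis broadcasts across a's channels
assert c.shape == (1, 4, 6, 6)
```

The result is a symbolic TensorVariable (not a CudaNdarraySharedVariable) because multiplying two shared variables builds a new graph node; the computation can still run on the GPU once compiled.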
"Forward the precomputed output" means that Op1 already computed the final
output, therefore Op2 just has to behave as identity in the forward pass.
The intermediate value is already an output of Op1, as shown in the example
code; sorry if that wasn't clear.
Nicolas
On Tuesday, August 8, 2017
I wonder why the code below is invalid:
import numpy as np
import theano.tensor as T
x = T.dmatrix("x")
mx = x[..., None, :]
a = T.ones((1, 3))
T.grad(mx[..., 0].dot(a).sum(), a).eval({x: np.ones((5, 10)).astype(np.float32)})
The error below is raised.
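For reference, the indexing in the snippet has these shapes in NumPy (the Theano graph mirrors them; per the reply above, the failure is in the local_dimshuffle_subtensor optimization, not in the shapes themselves):

```python
import numpy as np

x = np.ones((5, 10))
mx = x[..., None, :]                # inserts a broadcastable axis: (5, 1, 10)
assert mx.shape == (5, 1, 10)
assert mx[..., 0].shape == (5, 1)   # the slice whose dot with `a` is differentiated
```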