> when it can just reuse that computation

That is what the graph optimizer does. Try running it with device=cpu and optimizer=fast_run.
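For anyone trying this, one way to set those flags from Python, assuming a standard Theano install (THEANO_FLAGS must be set before theano is imported, otherwise the device setting is ignored):

>>> import os
>>> os.environ['THEANO_FLAGS'] = 'device=cpu,optimizer=fast_run'
>>> import theano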
On Saturday, May 20, 2017 at 11:55:19 PM UTC+8, Alexander Botev wrote:
I have the following code:

>>> import theano
>>> import theano.tensor as T
>>> a = T.fmatrix()
>>> b = T.sqr(a)
>>> c = T.nnet.sigmoid(a)
>>> g = T.fmatrix()
>>> d = T.Lop(c, a, g)
>>> f = theano.function([a, g], d)
Using debugprint I get:
>>> theano.printing.debugprint(f)
Elemwise{mul} [id A] '' 5
|Elemwise{mul} [id B] '' 3
|
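For what it's worth, a quick numeric sketch of what d computes here (the test values are arbitrary): T.Lop(c, a, g) is the L-operator, i.e. g times the Jacobian of c with respect to a, which for the elementwise sigmoid reduces to g * sigmoid(a) * (1 - sigmoid(a)). Up to float32 tolerance the check below should print True:

>>> import numpy as np
>>> a_val = np.random.rand(2, 3).astype('float32')
>>> g_val = np.random.rand(2, 3).astype('float32')
>>> s = 1.0 / (1.0 + np.exp(-a_val))
>>> np.allclose(f(a_val, g_val), g_val * s * (1.0 - s))
True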
Hi,
On Tuesday, April 11, 2017 at 8:43:48 PM UTC+5:30, nouiz wrote:
>
> It would be great to know why they don't like that implementation.
>
While training an outdated GAN implementation, I used the relu from Theano in the generator, and at around epoch 500 the generator loss became 100% and
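For context, the implementation being discussed is Theano's built-in relu; if I recall correctly it is expressed through abs rather than a switch, roughly like the first sketch below (treat the exact internals as an assumption on my part; the switch-based variant is the alternative people usually suggest):

>>> import theano.tensor as T
>>> def relu_abs(x):
...     # abs-based formulation, elementwise equivalent to max(x, 0)
...     return 0.5 * (x + abs(x))
...
>>> def relu_switch(x):
...     # switch-based alternative
...     return T.switch(x > 0, x, T.zeros_like(x))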