> when it can just reuse that computation

That's what graph optimization does. Try running it with device=cpu and 
optimizer=fast_run.
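
For example (a minimal sketch; your_script.py is a placeholder), you can set 
the flags from the shell so they are picked up when theano is imported:

THEANO_FLAGS='device=cpu,optimizer=fast_run' python your_script.py

or, equivalently, request the full optimizer when compiling the function:

>>> f = theano.function([a, g], d, mode='FAST_RUN')
>>> theano.printing.debugprint(f)

Note that debugprint(f) on a compiled function prints the optimized graph, 
whereas debugprint(d) on the symbolic variable prints the raw graph before 
any rewrites. With fast_run the merge pass should deduplicate the two 
identical scalar_sigmoid nodes, leaving a single one in the printed graph.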

On Saturday, May 20, 2017 at 11:55:19 PM UTC+8, Alexander Botev wrote:
>
> I have the following code:
>
> >>> import theano
> >>> import theano.tensor as T
> >>> a = T.fmatrix()
> >>> b = T.sqr(a)
> >>> c = T.nnet.sigmoid(a)
> >>> g = T.fmatrix()
> >>> d = T.Lop(c, a, g)
> >>> f = theano.function([a, g], d)
>
> Using debugprint I get:
>
> >>> theano.printing.debugprint(f)
> Elemwise{mul} [id A] ''   5
>  |Elemwise{mul} [id B] ''   3
>  | |<TensorType(float32, matrix)> [id C]
>  | |Elemwise{scalar_sigmoid} [id D] ''   1
>  |   |<TensorType(float32, matrix)> [id E]
>  |Elemwise{sub} [id F] ''   4
>    |InplaceDimShuffle{x,x} [id G] ''   2
>    | |TensorConstant{1.0} [id H]
>    |Elemwise{scalar_sigmoid} [id I] ''   0
>      |<TensorType(float32, matrix)> [id E]
>
> My question is: why does it compute the sigmoid twice, when it can just 
> reuse that computation? And if it does reuse it, how can I see that in the 
> graph? I have not changed any of the optimisation settings.
