I have been trying to train an NN with little success and started wondering 
whether all instances of OpFromGraph share the same underlying shared 
variables. I.e., when doing gradient updates, are my gradients with respect 
to shared variables computed over all OpFromGraph nodes, or only locally 
within each OpFromGraph instance? 

I would welcome it if someone could elaborate on this, since the 
documentation on OpFromGraph is very sparse. 
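
To make the question concrete, here is roughly what I mean (untested sketch; 
the shapes and names are made up):

import numpy as np
import theano
import theano.tensor as T
from theano.compile.builders import OpFromGraph

floatX = theano.config.floatX

# A shared parameter that lives inside the wrapped sub-graph.
W = theano.shared(np.ones((3, 3), dtype=floatX), name='W')

x = T.vector('x')
# Wrap a small sub-graph that closes over the shared variable W.
layer = OpFromGraph([x], [T.dot(W, x)])

a = T.vector('a')
b = T.vector('b')
# The same op applied at two different points in the outer graph.
cost = T.sum(layer(a)) + T.sum(layer(b))

# Question: if I now do
#     gW = T.grad(cost, W)
# does gW accumulate contributions from both applications of `layer`,
# or is it computed only locally within each OpFromGraph instance?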

On Friday, 17 March 2017 10:29:16 UTC+2, Šarūnas S. wrote:
>
> I am building a reinforcement learner and I am wondering how to scale it. 
>
> At first I initialised a deep neural network in Keras and converted it to 
> a Theano computational graph that takes state variables as inputs and 
> outputs an action to take. 
> Then I wrote a simulator in Theano where, at decision points, I 
> theano.clone the DNN computational graph. Lastly, I do gradient descent on 
> the DNN parameters in order to get a "good" DNN AI. If I use a proper DNN 
> with many layers and parameters, compilation takes forever and iterations 
> are very slow. 
>
> Then I tried using OpFromGraph. It seems to reduce my compilation time 
> quite a bit. However, when I looked at the computational graph, it seemed 
> that OpFromGraph moves everything back to the CPU. 
>
> Given that the op is a DNN, which is very GPU-friendly, I wonder whether 
> there is a way to avoid that. 
>
> Please find my graph at
> https://drive.google.com/open?id=0BzjH-3p3dTNzWU8zS05wMU5STEk
>
>

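For reference, stripped down, the pattern from my earlier message looks 
roughly like this (untested sketch; the network and simulator are of course 
toy stand-ins for the real ones):

import numpy as np
import theano
import theano.tensor as T
from theano.compile.builders import OpFromGraph

floatX = theano.config.floatX

# Toy stand-in for the DNN policy: one tanh layer with shared parameters.
W = theano.shared(np.random.randn(4, 4).astype(floatX), name='W')
b = theano.shared(np.zeros(4, dtype=floatX), name='b')

state = T.vector('state')
policy = OpFromGraph([state], [T.tanh(T.dot(W, state) + b)])

# Toy "simulator": the wrapped network is applied at each decision point.
s0 = T.vector('s0')
s = s0
for _ in range(3):
    s = policy(s)  # one OpFromGraph application per decision point

cost = T.sum(s ** 2)  # stand-in for the real objective

# Gradient descent on the DNN parameters would then be done on `cost`,
# e.g. via T.grad(cost, [W, b]); this is where my question about shared
# variables inside OpFromGraph comes in.
f = theano.function([s0], cost)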