Re: [theano-users] Re: OpFromGraph on GPU

2017-03-24 Thread Frédéric Bastien
I created an issue to document some tradeoffs between compile time and run time. Can you try them? https://github.com/Theano/Theano/issues/5762 That would be much simpler to use than OpFromGraph, as it isn't ready when not used with inline=True. Keep us updated on your results.
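As a hedged illustration of the compile-time/run-time tradeoff mentioned above: Theano's `optimizer` flag is the standard knob for this. The specific flags below are real Theano options, but they are shown as an assumption, not necessarily the exact recommendations in the linked issue; `train_model.py` is a hypothetical script name.

```shell
# Faster compilation, slower execution:
THEANO_FLAGS='optimizer=fast_compile' python train_model.py

# Default behavior: slower compilation, faster execution:
THEANO_FLAGS='optimizer=fast_run' python train_model.py
```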

[theano-users] Re: OpFromGraph on GPU

2017-03-24 Thread Šarūnas S.
I have tried inline=True, but the compilation did not finish within an hour for a test case, so I doubt this is a viable option. Could you elaborate on how to construct a GPU-only graph? Could I make a normal graph, then compile it, where the optimizations would move it to the GPU, and then use that for

[theano-users] Re: OpFromGraph on GPU

2017-03-23 Thread Adam Becker
OpFromGraph is still under development. If you want to use it on the GPU, the safest way is to set inline=True in the constructor (requires 0.9rc1+). This will increase compilation time, though. Alternatively, you can try constructing a GPU-only graph by hand and building the OfG with that; I didn't test that

[theano-users] Re: OpFromGraph on GPU

2017-03-22 Thread Šarūnas S.
I have been trying to train an NN with little success and started wondering whether all instances of OpFromGraph share the underlying shared variables. I.e., when doing gradient updates, are my gradients w.r.t. shared variables computed over all OpFromGraph nodes, or is it done only locally within