Unfortunately, model parallelism using different GPUs in the same
process is still experimental and not a current focus of development.
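
For reference, the experimental path in 0.9 is the libgpuarray back-end's
multi-context support: you map named contexts to devices with the `contexts`
flag and place shared variables with `target=`. A minimal sketch along the
lines of the multi-GPU tutorial (the context names `dev0`/`dev1` and the
matrix sizes are placeholders):

    # Run with, e.g.:
    #   THEANO_FLAGS="contexts=dev0->cuda0;dev1->cuda1" python sketch.py
    import numpy
    import theano
    import theano.tensor

    # One shared weight matrix per GPU context.
    w0 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
                       target='dev0')
    w1 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'),
                       target='dev1')

    # Each dot product runs on the context that holds its operands.
    f = theano.function([], [theano.tensor.dot(w0, w0),
                             theano.tensor.dot(w1, w1)])
    f()

Since your shared encoder's output feeds both components, the graph also
needs explicit transfers between contexts (e.g. `x.transfer('dev1')`), and
those cross-context edges are part of what is still experimental.

One small thing to check: multiple tags in `optimizer_excluding` are
separated by a colon, i.e. `optimizer_excluding=constant_folding:elemwise`,
rather than by spaces.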


On Wed, May 10, 2017, T Qri wrote:
> Hi everyone,
> 
> In brief, I'm implementing a model with two seq2seq components that share an 
> encoder. Each of the two components is placed on a different GPU device, and 
> the shared encoder is placed on one of those two devices.
> 
> When config.optimizer is set to None, the error no longer occurs, but the 
> model then runs on CPUs and is quite slow.
> 
> The detailed Theano flags are in the `theano-flags` file, and the 
> corresponding error message is in the `error-message` file.
> 
> I tried to disable the constant_folding and elemwise optimizations with 
> `optimizer_excluding=constant_folding elemwise`, but it didn't work.
> 
> I'm using Theano 0.9.0 on Ubuntu 14.04. There are 4 GPUs available, and 2 of 
> them are used by my model.
> 
> Thanks for your help.




-- 
Pascal
