Pickling the functions should work fine, as long as they are unpickled
on the same kind of hardware (GPUs of the same generation, for instance)
and with the same version of Theano.
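For instance, a minimal sketch of the round trip (the toy graph and the
'trainer.pkl' file name are just for illustration):

    import pickle

    import numpy
    import theano
    import theano.tensor as T

    # Build and compile a small training-style function once.
    x = T.vector('x')
    w = theano.shared(numpy.ones(3, dtype=theano.config.floatX),
                      name='w')
    f = theano.function([x], (w * x).sum(), updates=[(w, w + x)])

    # Serialize the compiled function; the values of its shared
    # variables are pickled along with it.
    with open('trainer.pkl', 'wb') as out:
        pickle.dump(f, out, protocol=pickle.HIGHEST_PROTOCOL)

    # In another process (same hardware, same Theano version):
    with open('trainer.pkl', 'rb') as inp:
        f2 = pickle.load(inp)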

You can then use the function's .copy() method [1], passing its swap
argument, to swap the pickled shared variables for local ones.
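Continuing the sketch above, swap is a dict mapping the unpickled
shared variables to your own. I'm assuming Function.get_shared() is
available in your version to retrieve them; otherwise you can dig them
out of f2.maker.inputs:

    # Local shared variable owned by this process.
    w_local = theano.shared(numpy.zeros(3, dtype=theano.config.floatX),
                            name='w')

    # Assumption: get_shared() returns the shared variables the
    # function reads or updates; here the pickled 'w' is the only one.
    old_w = f2.get_shared()[0]

    # Build a new function around w_local; the already-optimized graph
    # is reused, so this should not trigger a fresh compilation.
    f_local = f2.copy(swap={old_w: w_local})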

That being said, once your compilation cache is populated (which should
be the case after the first launch), locking should not be an issue.

[1] 
http://deeplearning.net/software/theano/library/compile/function.html#theano.compile.function_module.Function.copy

On Wed, Dec 07, 2016, Michael Harradon wrote:
> Hello all,
> 
> I've found a number of threads on this, but most of them are rather old or 
> refer to functions without updates, so I was hoping to check for some new 
> advice.
> 
> I have a function that performs some training via updates, which I run in 
> 16 different processes, each driving its own GPU. Right now it takes about 
> 5-10 minutes to compile for each process, and compilation happens serially 
> due to locking in the cache directory, so the GPUs come online one by one. 
> Is pickling functions currently a recommended practice? If so, how would 
> one connect to the shared variables in the updates? Is it possible to 
> reassign the update variables in the function object to local shared 
> variables?
> 
> Sorry if this is an already answered question!
> 
> Thanks,
> Michael


-- 
Pascal
