Hi, 

I'm currently using Theano and Lasagne, and I keep running into the same 
problem: whenever someone else launches a job on the same GPU, my process 
gets kicked out with "error allocating X bytes of memory". At first, I 
thought it was because I was loading one minibatch at a time, and another 
process was grabbing the memory before I could fit the next minibatch on 
the GPU. So I'm now storing my dataset in a shared variable and 
referencing it with the "givens" keyword in my Theano function, along the 
lines of the sketch below. However, I'm still hitting the same allocation 
error. Is there any way to preallocate memory so that Theano won't have to 
allocate at every single training batch iteration? 
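
For reference, here is a minimal sketch of my current setup (the shapes, 
the names "data" and "train_fn", and the toy cost are placeholders 
standing in for my actual Lasagne model):

import numpy as np
import theano
import theano.tensor as T

batch_size = 100

# Placeholder data standing in for my real dataset; stored in a shared
# variable so the whole array is moved to the GPU once, up front.
data = theano.shared(
    np.random.randn(1000, 784).astype(theano.config.floatX),
    name='data',
)

index = T.lscalar('index')  # minibatch index
X = T.matrix('X')

# Toy cost standing in for the real network's loss.
cost = T.sum(X ** 2)

# "givens" substitutes a GPU-resident slice of the shared dataset for X,
# so no minibatch is transferred from host memory at call time.
train_fn = theano.function(
    [index],
    cost,
    givens={X: data[index * batch_size:(index + 1) * batch_size]},
)

for i in range(1000 // batch_size):
    train_fn(i)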


Thanks, 
Lucas

