Your process isn't being kicked out. With the old GPU back-end, by default,
when Theano frees GPU memory it returns it to the driver, so another process
can take it.

You can use the Theano flag lib.cnmem=N. If N is greater than 1, it is the
amount of memory in megabytes that Theano will preallocate and not return
to the driver. If N is between 0 and 1, it is the fraction of the total GPU
memory that will be reserved.
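
For example, to reserve 80% of the GPU's memory up front (the fraction,
device name, and script name here are just illustrative), you can set the
flag on the command line:

    THEANO_FLAGS='device=gpu,lib.cnmem=0.8' python train.py

or put it in your ~/.theanorc:

    [lib]
    cnmem = 0.8

Note that the flag is read when Theano is imported, so it has to be set
before your script imports theano.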

On Thu, May 18, 2017 at 8:32 PM Lucas Caccia <[email protected]> wrote:

> Hi,
>
> I'm currently using Theano and Lasagne, and I keep running into the same
> problem: whenever someone else launches a job on the same GPU, my process
> gets kicked out with "error allocating X bytes of memory". At first, I
> thought it was because I was loading one minibatch at a time, and another
> process was grabbing the memory before I could fit the next minibatch on
> the GPU. So I'm now storing my dataset in a shared variable and
> referencing it with the "givens" keyword in my theano function. However,
> I'm still facing the same allocation problem. Is there any way to
> preallocate memory so that Theano won't have to allocate it at every
> single training batch iteration?
>
>
> Thanks,
> Lucas
>
