Hello,

I'm trying to get OpenGL/CUDA interop working on multiple GPUs on a headless 
Linux system. It works great with a single GPU, but fails with multiple GPUs. 
I've narrowed it down: the problem appears as soon as I try to switch which 
GPU the single-GPU version runs on.

I have two X servers running, one on :0.0 using GPU 3 and one on :1.0 using GPU 
4.  If I set environment variables DISPLAY=:0.0 and CUDA_DEVICE=3 everything 
runs great.  If I close my app and run with DISPLAY=:1.0 and CUDA_DEVICE=4 I 
get the error:

...
context = make_default_context(lambda dev: cudagl.make_context(dev))
MemoryError: cuGLCtxCreate failed: out of memory
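
For reference, here's a stripped-down sketch of the sequence that triggers it 
(not my actual app; the GLUT window is just a stand-in for however the GL 
context gets created, and the values shown are the second-server case):

import os
os.environ['DISPLAY'] = ':1.0'    # second X server
os.environ['CUDA_DEVICE'] = '4'   # the GPU driving that server

# A GL context has to exist on $DISPLAY before the CUDA/GL context is made;
# GLUT is only a stand-in here for however the app creates it.
from OpenGL.GLUT import glutInit, glutInitDisplayMode, glutCreateWindow, GLUT_RGBA
glutInit()
glutInitDisplayMode(GLUT_RGBA)
glutCreateWindow(b'interop test')

# pycuda.gl.autoinit boils down to
#     context = make_default_context(lambda dev: cudagl.make_context(dev))
# and that cudagl.make_context call is where cuGLCtxCreate reports
# "out of memory" on the second GPU.
import pycuda.gl.autoinit  # noqa: F401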

Interestingly, if I restart the X server on :1.0, my app then runs fine. 
However, if I switch back to :0.0/GPU 3, I get the same problem, and 
restarting the first X server again lets me run on that GPU. Of course I'd 
like to be able to switch GPUs without restarting X servers. And when I move 
to running on multiple GPUs at the same time (one process per GPU, using the 
multiprocessing module; rough sketch below), the problem gets even worse.
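
The multi-GPU layout is roughly this (illustrative sketch, not my real code; 
run_app is a placeholder for the per-GPU work):

import multiprocessing
import os

def run_app():
    # placeholder: GL context setup, pycuda.gl.autoinit, then the actual
    # interop work for this GPU
    pass

def worker(display, cuda_device):
    # Bind the child to its own X server and GPU before any CUDA/GL import,
    # since the CUDA/GL context must be created inside this process. The
    # second child fails in cuGLCtxCreate the same way as above.
    os.environ['DISPLAY'] = display
    os.environ['CUDA_DEVICE'] = str(cuda_device)
    run_app()

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=worker, args=(d, g))
             for (d, g) in [(':0.0', 3), (':1.0', 4)]]
    for p in procs:
        p.start()
    for p in procs:
        p.join()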

Just using OpenGL works fine on multiple GPUs at the same time, but creating 
the CUDA/GL context fails with the cuGLCtxCreate error on the second GPU.

Thanks,

Eli