Received from Gavin Weiguang Ding on Mon, Oct 06, 2014 at 11:55:49PM EDT:
> Hi Lev,
>
> Thanks for the reply!
> My GPUs do support GPUDirect, and I've tested them using the "simpleP2P"
> example from the CUDA samples.
>
> I've been trying a little bit of that, but without success. I'm new to
> PyCUDA and multiprocessing, so excuse me if I ask dumb questions.
>
> If I understand it correctly, I need to call pycuda.driver.init() and
> make_context() inside each process.
>
> But to use pycuda.driver.memcpy_peer, I need to pass the context defined in
> one process to another. When I try to pass the context with a Pipe or
> Queue from multiprocessing, I get a pickling error.
> Is this the right way of doing it, assuming the pickling error can be solved?
Since CUDA contexts are private to the process that creates them, you can't
use a context set up in one process in another. In recent versions of CUDA,
you can instead use its IPC API to transfer a GPU memory handle from one
process to another. See https://gist.github.com/lebedov/6408165 for an
example of how to use the API (requires pyzmq).

-- 
Lev Givon
Bionet Group | Neurokernel Project
http://www.columbia.edu/~lev/
http://lebedov.github.io/
http://neurokernel.github.io/

_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
