On 24/12/14 20:10, Lev Givon wrote:
> Adding the call to MPI.Finalize() made the error go away even when using
> gpuarray.to_gpu(); adding the extra mca parameters didn't appear to have
> any effect.
>
> My understanding is that the call to MPI.Finalize() should be automatically
> registered to be executed when the processes exit; this makes me wonder
> whether my explicitly registering the pycuda method that cleans up the
> current context is causing problems. I'll see what the folks on the mpi4py
> list have to say.
The order is almost certainly important. If the MPI library allocates CUDA resources -- or expects to be able to call the CUDA API during MPI_Finalize() -- then the CUDA context must still be valid at that point. Therefore, you must ensure that MPI_Finalize() is called before PyCUDA begins its cleanup. In my experience it is better to manage these things manually and explicitly through a single atexit handler function.

Regards,
Freddie.
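To make the idea concrete, here is a minimal sketch of the single-handler pattern. The `mpi_finalize` and `context_teardown` functions below are hypothetical stand-ins for `MPI.Finalize()` (mpi4py) and the PyCUDA context `pop()`/`detach()` calls; in real code you would call those directly, and you would need to stop mpi4py and pycuda.autoinit from registering their own competing atexit handlers:

```python
import atexit

call_order = []

def mpi_finalize():
    # Stand-in for MPI.Finalize(); the MPI library may still
    # touch CUDA resources while finalizing, so the context
    # must still exist when this runs.
    call_order.append("mpi_finalize")

def context_teardown():
    # Stand-in for popping/detaching the PyCUDA context
    # (e.g. context.pop(); context.detach()).
    call_order.append("context_teardown")

def cleanup():
    # A single handler makes the ordering explicit and
    # deterministic: MPI first, CUDA context second.
    mpi_finalize()
    context_teardown()

atexit.register(cleanup)
```

Registering one `cleanup` function avoids relying on the LIFO order in which separate atexit handlers happen to have been registered by different libraries.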
_______________________________________________
PyCUDA mailing list
[email protected]
http://lists.tiker.net/listinfo/pycuda
