Irving Enrique Reyna Nolasco <irvingenrique.reynanola...@kaust.edu.sa>
writes:

>   I am a student in physics and pretty new to PyCUDA. Currently I am
>   interested in finite volume methods running on multiple GPUs in a
>   single node. I have not found relevant documentation on this topic,
>   specifically how to communicate between different contexts or how to
>   run the same kernel on different devices at the same time. Could you
>   suggest some literature/documentation about that?

I think the common approach is to have multiple (CPU) threads and have
each thread manage one GPU. Less common (but also possible, if
cumbersome) is to only use one thread and switch contexts. (FWIW,
(Py)OpenCL makes it much easier to talk to multiple devices from a
single thread.)
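To make the one-thread-per-GPU approach concrete, here is a minimal sketch
(not an official PyCUDA recipe): each CPU thread creates a context on its own
device, does its work, and releases the context before exiting. It assumes
PyCUDA and NumPy are installed and falls back to zero devices otherwise.

```python
# Sketch: one CPU thread per GPU, each thread owning its own CUDA context.
import threading

try:
    import pycuda.driver as drv
    drv.init()
    num_devices = drv.Device.count()
except Exception:
    num_devices = 0  # no PyCUDA / no CUDA devices available

results = [None] * max(num_devices, 1)

def worker(dev_id):
    # Each thread makes a context on its own device; contexts are
    # per-thread, so no explicit switching is needed.
    dev = drv.Device(dev_id)
    ctx = dev.make_context()
    try:
        import numpy as np
        import pycuda.gpuarray as gpuarray
        a = gpuarray.to_gpu(np.arange(4, dtype=np.float32))
        results[dev_id] = float((2 * a).get().sum())
    finally:
        ctx.pop()     # detach the context from this thread
        ctx.detach()  # release the context's resources

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(num_devices)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Data exchange between devices then goes through host memory (or peer-to-peer
copies, where supported), coordinated by the Python threads.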

Lastly, if you're thinking of scaling up, you could just have one MPI
rank per device.
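A hedged sketch of the MPI variant (assuming mpi4py and PyCUDA are installed
and the script is launched with something like `mpiexec -n <num_gpus> python
script.py`; the names here are illustrative, not a fixed convention):

```python
# Sketch: one MPI rank per GPU; each rank binds to one device at startup.
try:
    from mpi4py import MPI
    import pycuda.driver as drv

    drv.init()
    rank = MPI.COMM_WORLD.rank
    # Round-robin assignment in case more ranks than devices share a node.
    dev = drv.Device(rank % drv.Device.count())
    ctx = dev.make_context()
    try:
        pass  # per-rank kernels run here; halo exchange goes through MPI
    finally:
        ctx.pop()
except Exception:
    rank = 0  # fallback when mpi4py/PyCUDA/devices are unavailable
```

This scales past a single node for free, since inter-device communication is
just ordinary MPI messaging.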

Hope that helps,
Andreas

_______________________________________________
PyCUDA mailing list
PyCUDA@tiker.net
https://lists.tiker.net/listinfo/pycuda
