Hi Robert,

The easiest way is to put the GPUs into compute exclusive mode, which ensures that only one process can use each GPU at any given time. This can be done with the nvidia-smi tool.
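For example, something along the lines of (changing the mode typically requires root, and the exact syntax may vary with driver version):

nvidia-smi -c EXCLUSIVE_PROCESS

By default this applies to every GPU on the node; pass -i <id> to target a single device.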

Alternatively, you can set

[backend-cuda]
device-id = local-rank

which will assign the first MPI rank on each node to the first CUDA device, the second rank to the second device, and so on. This approach does not require the GPUs to be in compute exclusive mode.
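So on a node with four GPUs you would launch one rank per device, along the lines of (here mesh.pyfrm and config.ini are placeholders for your own mesh and configuration files):

mpirun -n 4 pyfr run -b cuda mesh.pyfrm config.ini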

Regards, Freddie.

On 28/02/2017 01:15, Robert Sawko wrote:
Dear All,

I just installed PyFR on our GPU cluster - what a breeze. I can run your
examples and tutorials with the CUDA backend and I can see them being
offloaded to the GPU. Metis also appears to cooperate.

My question is how to configure multiple GPUs on a single node. Is there
a way to make sure that, say, four processes will run on four different
GPUs exclusively?

Also, would it be possible to share a larger case as an example, to
actually flood the GPUs with work? I am more than willing to give PyFR a
try, but it will take a while for me to develop case-setup skills.

Many thanks,
Robert

