Hi Zhen,

On 15/04/2015 06:36, Freddie Witherden wrote:
> I am curious about how to use multiple CUDA GPU cards in a
> single node. Every time I use MPI with CUDA, only one GPU per
> node can be used.
> 
> I have looked for some information online, and it seems that this
> problem is related to the MPI implementation adopted (for me,
> Intel MPI), as well as to how the program itself is written.

By default the CUDA backend uses a 'round-robin' strategy to decide
which GPU to use.  The strategy attempts to create a CUDA context on
each CUDA-capable device in the system until one succeeds.  It is
intended to be used when the GPUs are in 'compute exclusive' mode.
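For reference, compute exclusive mode is normally enabled with
nvidia-smi; something along these lines should work, although the
exact flags can vary with driver version and you will need root (or
your cluster's prologue script) on each node:

    # Put every GPU in the node into compute exclusive process mode
    # so that only one process can hold a context on each device
    sudo nvidia-smi -c EXCLUSIVE_PROCESS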

Alternatively, you can set the device-id key in [backend-cuda] to be
'local-rank'.  Here PyFR will use the node-local MPI rank to determine
which CUDA device to use.
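As a sketch, the relevant part of the configuration file would look
like this (everything else in the file is unchanged):

    [backend-cuda]
    device-id = local-rank

PyFR should then be launched with one MPI rank per GPU on each node
(e.g. mpirun -n 8 for a node with eight GPUs), so that each
node-local rank maps onto a distinct device.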

Further information on these options can be found in the user guide.

Regards, Freddie.