Hi Zhen,

On 17/04/15 15:53, Zhen Zhang wrote:
> Thanks a lot, but you may misunderstand my idea. Assuming I have two
> CUDA GPUs on a single node, and I want to partition a mesh into two
> parts and solve two partition on two GPUs simultaneously.

This is exactly what

[backend-cuda]
device-id = local-rank

is for.  (Alternatively, use the compute-exclusive mode solution outlined
by Brian.)
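To illustrate the semantics (this is a conceptual sketch, not PyFR's actual
implementation): with `local-rank`, each MPI rank is numbered among the
ranks sharing its node, and that number is used as the CUDA device id, so
two ranks on one node land on devices 0 and 1.

```python
# Conceptual illustration of "local-rank" device assignment: each rank's
# device id is its index among the ranks running on the same host.
def local_ranks(hosts):
    seen = {}           # host name -> number of ranks seen so far
    out = []
    for h in hosts:
        out.append(seen.get(h, 0))
        seen[h] = out[-1] + 1
    return out

# Four ranks across two nodes with two GPUs each (hypothetical layout):
print(local_ranks(["node0", "node0", "node1", "node1"]))  # [0, 1, 0, 1]
```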

> I can set the `devid`, but it is a sole number, which targets at only
> one GPU. But for local-rank, MPI ( I used MVAPICH2) gives all two
> processes to a single card.

Can you attach the config file you're using?

As an aside, it is also worth noting that almost all config file options
support the expansion of environment variables.  So with MVAPICH2:

[backend-cuda]
device-id = ${MV2_COMM_WORLD_LOCAL_RANK}

is basically the same as local-rank.
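The expansion works along these lines (a minimal sketch using the standard
library, not necessarily how PyFR does it internally; the environment
variable value here is set by hand for illustration, whereas MVAPICH2 sets
it per process):

```python
import os
from configparser import ConfigParser

# MVAPICH2 exports this per process; we fake a value for illustration.
os.environ["MV2_COMM_WORLD_LOCAL_RANK"] = "1"

cfg = ConfigParser()
cfg.read_string("[backend-cuda]\ndevice-id = ${MV2_COMM_WORLD_LOCAL_RANK}\n")

# Expand ${VAR} references in the raw option value.
device_id = os.path.expandvars(cfg.get("backend-cuda", "device-id"))
print(device_id)  # -> 1
```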

Regards, Freddie.

-- 
You received this message because you are subscribed to the Google Groups "PyFR 
Mailing List" group.