Thank you, Brian and Vincent!

I tried Brian's solution (using nvidia-smi to set the compute mode) and it 
works.  I searched a lot about ranks and binding, but the solution turned out 
to be very simple (and maybe a little hacky, since it also eliminates the 
possibility of running multiple processes evenly across multiple cards).
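
For anyone who finds this thread later, what I ran was along these lines 
(just a sketch; the device indices 0 and 1 are assumptions for a two-GPU 
node, and the -c/--compute-mode switch needs root):

    $ sudo nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
    $ sudo nvidia-smi -i 1 -c EXCLUSIVE_PROCESS

With both cards in exclusive-process mode, once a rank has claimed a card no 
other process can create a context on it, so the ranks spread out one per 
GPU.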

And as a (maybe) off-topic question, taking this solution as an example: how 
is the communication between the processes done?  Will the data from one 
card be moved to main memory and then copied over to the other card, or can 
the cards talk to each other directly over the PCIe bus?

P.S. The direct communication path seems to be what NVIDIA calls GPUDirect. 
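
To frame the question, here is a minimal sketch (not from PyFR itself, and 
assuming a CUDA-aware MPI build such as MVAPICH2-GDR) of what "direct" 
communication means in practice: a device pointer is handed straight to MPI, 
and the library decides whether to stage through host memory or use a 
GPUDirect peer-to-peer/RDMA path.

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Assume each rank has already picked its own GPU,
           e.g. via cudaSetDevice(local_rank). */
        double *d_buf;
        cudaMalloc((void **)&d_buf, 1024 * sizeof(double));

        /* With a CUDA-aware MPI the device pointer goes straight into
           MPI_Send/MPI_Recv; the library chooses the transfer path
           (host staging, or GPUDirect P2P/RDMA where available). */
        if (rank == 0)
            MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

Whether the transfer actually bypasses host memory depends on the MPI build 
and on the PCIe topology between the two cards; otherwise it falls back to 
staging through the host.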


Thanks!

Zhen

On Friday, April 17, 2015 at 11:12:36 PM UTC+8, Freddie Witherden wrote:
>
> Hi Zhen, 
>
> On 17/04/15 15:53, Zhen Zhang wrote: 
> > Thanks a lot, but you may have misunderstood my idea. Assuming I have two 
> > CUDA GPUs on a single node, I want to partition a mesh into two parts 
> > and solve the two partitions on the two GPUs simultaneously. 
>
> This is exactly what you want: 
>
> [backend-cuda] 
> device-id = local-rank 
>
> for.  (Or use the compute-exclusive solution as outlined by Brian.) 
>
> > I can set the `devid`, but it is a single number, which targets only 
> > one GPU. With local-rank, however, MPI (I used MVAPICH2) assigns both 
> > processes to a single card. 
>
> Can you attach the config file you're using? 
>
> Also, as an aside, it is worth noting that almost all config file options 
> support expansion of environment variables.  So with 
> MVAPICH2: 
>
> [backend-cuda] 
> device-id = ${MV2_COMM_WORLD_LOCAL_RANK} 
>
> is basically the same as local-rank. 
>
> Regards, Freddie. 
>
>
