I have a distinct impression that I already asked this question, but I can't
find it in the archives. Apologies if I am repeating myself.

I am working on a cluster which has multiple GPUs per node, 4-6 to be precise.
I got ParaView to work with the EGL backend in client-server mode and it has
been a really good experience so far, but one always wants more... It seems to
me that I am using only one GPU even when I am running in parallel.

I am looking for some advice here. I am using this website as a guide,
and I am using OpenMPI for my MPI runs.

# 1. This works for me but is using a single GPU
mpirun -report-bindings -map-by core -bind-to core \
    -np 20 pvserver -sp=22221 --disable-xdisplay-test

# 2. This probably doesn't work, but I am not sure...
mpirun -report-bindings -map-by core -bind-to core \
    -np 10 pvserver -sp=22221 --disable-xdisplay-test --egl-device-index=0 : \
    -np 10 pvserver --disable-xdisplay-test --egl-device-index=1

General question: is it OK to drop the -display option with EGL?

With option 1, I am afraid that the processes are spread out across both
sockets, and I think there may be a slight communication overhead between
processes on one socket and a GPU attached to the other, but perhaps I am
being paranoid. How can I measure this?

With option 2: am I just abusing the syntax, or is this the right way to do
it? nvidia-smi tells me I am using two GPUs now. How can I make sure that the
processes talking to the nth GPU are bound to the right socket? Is a single
-sp option correct there?
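For what it's worth, one pattern that avoids the MPMD colon syntax entirely is
a small wrapper script that picks the EGL device from Open MPI's per-process
OMPI_COMM_WORLD_LOCAL_RANK variable (a sketch: the NGPUS value and the script
name are my inventions, and you would still pair it with a socket-aware
mapping such as -map-by socket to address the affinity question):

```shell
# Write a launcher that derives --egl-device-index from the local MPI rank.
# OMPI_COMM_WORLD_LOCAL_RANK is exported by Open MPI for each launched process.
cat > pvserver-egl.sh <<'EOF'
#!/bin/sh
NGPUS=2   # adjust to the number of GPUs per node (4-6 on this cluster)
DEV=$(( ${OMPI_COMM_WORLD_LOCAL_RANK:-0} % NGPUS ))
exec pvserver --egl-device-index=$DEV "$@"
EOF
chmod +x pvserver-egl.sh
```

Then a single-app launch covers all GPUs:

    mpirun -report-bindings -map-by core -bind-to core \
        -np 20 ./pvserver-egl.sh -sp=22221 --disable-xdisplay-test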

And perhaps a silly general question: how can I make a fair benchmark of this?

Best wishes,
Ghosts of the upper atmosphere