Unfortunately, paffinity doesn't know anything about assigning threads to cores. Under Linux, paffinity is applied at the process level, so when you set paffinity on a process, you bind all of that process's threads to the specified core(s). You cannot direct a particular thread to a particular core.

In this case, the two threads of each process are sharing the same core and thus contending for it. As you'd expect in that situation, one thread gets the vast majority of the CPU time while the other sits mostly idle.
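You can confirm this on a running job by looking at the affinity mask that paffinity has applied. A quick check, assuming taskset (from util-linux) is available on the compute nodes, with <pid> standing in for one of your MPI process ids:

  $ taskset -cp <pid>                            # core list the whole process is bound to
  $ grep Cpus_allowed /proc/<pid>/task/*/status  # affinity mask of each individual thread

If each process is bound to a single core, every thread of that process will show the same one-core mask.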

If you can upgrade to the 1.3 beta release, try using slot mapping to assign multiple cores to each process. This ensures that the threads of that process have exclusive access to those cores, but does not bind a particular thread to one core - the threads can "move around" across the specified set of cores. Your threads will then be able to run without interfering with each other.
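For example, with 2 processes per node and 2 OpenMP threads each, a rankfile along these lines should do it. This is only a sketch of the 1.3 rankfile/slot syntax - nodeA/nodeB and ./your_hybrid_app are placeholders, so check the mpirun man page of the release you actually install:

  $ cat rankfile
  rank 0=nodeA slot=0-1
  rank 1=nodeA slot=2-3
  rank 2=nodeB slot=0-1
  rank 3=nodeB slot=2-3

  $ export OMP_NUM_THREADS=2
  $ mpirun -np 4 -rf rankfile -x OMP_NUM_THREADS ./your_hybrid_app

Each rank then owns two cores; its two OpenMP threads can float between them, but they never land on the cores of the other rank on that node. (Under LSF the hostnames would have to come from the actual allocation, of course.)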

Ralph


On Nov 18, 2008, at 9:18 AM, Gabriele Fatigati wrote:

Dear OpenMPI developers,
I have a strange problem with a mixed MPI+OpenMP program on Open MPI
1.2.6. I'm using PJL TASK GEOMETRY with the LSF scheduler, placing 2 MPI
processes on every compute node and 2 OpenMP threads per process. With
paffinity and maffinity enabled, I've noticed that on every node 2
threads run at 100% while the other 2 do little or no work.

If I disable paffinity and maffinity, all 4 threads work well, with no
load imbalance.
I don't understand this: paffinity and maffinity should map every
thread to a specific core and improve cache behavior, yet I only get
good behavior without them!

Can I use paffinity and maffinity in a mixed MPI+OpenMP program, or do
they only apply to the MPI processes?

Thanks in advance.


--
Ing. Gabriele Fatigati

CINECA Systems & Technologies Department

Supercomputing  Group

Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy

www.cineca.it                    Tel:   +39 051 6171722

g.fatig...@cineca.it
