I don't think that they conflict with our paffinity module and setting. My understanding is that if you set a new affinity mask, it simply overwrites the previous setting. So in the worst case it voids the setting made by Open MPI, but I don't think it should cause 'problems'. Admittedly, I haven't tried the library and the function calls yet; I only learned about them relatively recently...
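For what it's worth, here is a rough, untested sketch of what I mean by "overwrites": whatever CPU mask Open MPI's paffinity installed for the process is simply replaced once the code calls into libnuma. The node number is just an example; link with -lnuma.

    /* Print the thread's CPU mask before and after a libnuma placement call. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>
    #include <numa.h>

    static void print_mask(const char *label)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        sched_getaffinity(0, sizeof(set), &set);   /* pid 0 = calling thread */
        printf("%s:", label);
        for (int c = 0; c < CPU_SETSIZE; c++)
            if (CPU_ISSET(c, &set))
                printf(" %d", c);
        printf("\n");
    }

    int main(void)
    {
        if (numa_available() < 0)
            return 1;                        /* no NUMA support on this box */

        print_mask("mask before numa_run_on_node");
        numa_run_on_node(0);                 /* restrict to the CPUs of node 0 */
        print_mask("mask after  numa_run_on_node");
        return 0;
    }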

Thanks
Edgar

Ralph Castain wrote:
Interesting - learn something new every day! :-)

How does this interact with OMPI's paffinity/maffinity assignments? With the rank/slot mapping and binding system?

Should users -not- set paffinity if they include these numa calls in their code?

Can we detect any potential conflict in OMPI and avoid setting paffinity_alone? Reason I ask: many systems set paffinity_alone in the default mca param file because they always assign dedicated nodes to users. While users can be told to be sure to turn it "off" when using these calls, it seems inevitable that they will forget - and complaints will appear.

Thanks
Ralph



On Nov 20, 2008, at 7:34 AM, Edgar Gabriel wrote:

If you look at recent versions of libnuma, there are two functions, numa_run_on_node() and numa_run_on_node_mask(), which allow thread-based assignment to CPUs...
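To illustrate, a rough and untested sketch of how each OpenMP thread could place itself on a NUMA node from inside the code; the node choice (thread id modulo the number of nodes) and the optional memory preference are only examples, not a recommendation:

    /* Compile with something like: gcc -fopenmp place.c -lnuma */
    #include <omp.h>
    #include <numa.h>

    void place_threads(void)
    {
        if (numa_available() < 0)
            return;                              /* no libnuma support */

        int nnodes = numa_max_node() + 1;        /* highest node id + 1 */

        #pragma omp parallel
        {
            int node = omp_get_thread_num() % nnodes;
            numa_run_on_node(node);        /* restrict this thread to that node's CPUs */
            numa_set_preferred(node);      /* optionally take memory from the same node */
        }
    }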

Thanks
Edgar

Gabriele Fatigati wrote:
Is there a way to assign one thread to one core? Also from code, not necessarily with an Open MPI option.
Thanks.
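(One possible route, independent of the libnuma calls discussed above: a rough, untested sketch of pinning each OpenMP thread to its own core directly from the code with the Linux per-thread affinity call. The core numbering is only an example and assumes one thread per core.)

    #define _GNU_SOURCE
    #include <omp.h>
    #include <sched.h>

    void pin_threads_to_cores(void)
    {
        #pragma omp parallel
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(omp_get_thread_num(), &set); /* core id = thread id (example) */
            /* pid 0 means "the calling thread" for sched_setaffinity. */
            sched_setaffinity(0, sizeof(set), &set);
        }
    }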
2008/11/19 Stephen Wornom <stephen.wor...@sophia.inria.fr>:
Gabriele Fatigati wrote:
OK, but in OMPI 1.3 how can I enable it?

This may not be relevant, but I could not get a hybrid MPI+OpenMP code to work correctly.
Would my problem be related to Gabriele's, and perhaps fixed in Open MPI 1.3?
Stephen
2008/11/18 Ralph Castain <r...@lanl.gov>:

I am afraid it is only available in 1.3 - we didn't backport it to the 1.2 series.


On Nov 18, 2008, at 10:06 AM, Gabriele Fatigati wrote:


Hi,
How can I set "slot mapping" as you told me? With TASK GEOMETRY? Or is it a new Open MPI 1.3 feature?

Thanks.

2008/11/18 Ralph Castain <r...@lanl.gov>:

Unfortunately, paffinity doesn't know anything about assigning threads to cores. This is actually a behavior of Linux, which only allows paffinity to be set at the process level. So, when you set paffinity on a process, you bind all threads of that process to the specified core(s). You cannot specify that a thread be given a specific core.

In this case, your two threads/process are sharing the same core and thus contending for it. As you'd expect in that situation, one thread gets the vast majority of the attention, while the other thread is mostly idle.

If you can upgrade to the beta 1.3 release, try using the slot mapping to assign multiple cores to each process. This will ensure that the threads for that process have exclusive access to those cores, but will not bind a particular thread to one core - the threads can "move around" across the specified set of cores. Your threads will then be able to run without interfering with each other.
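A rough sketch of what such a slot mapping might look like with a 1.3 rank file; the host names and core numbers are placeholders, so please check the 1.3 mpirun man page for the exact syntax:

    rank 0=nodeA slot=0,1
    rank 1=nodeA slot=2,3
    rank 2=nodeB slot=0,1
    rank 3=nodeB slot=2,3

and then launch with something like: mpirun -np 4 -rf myrankfile ./hybrid_app. Each rank then has two cores to itself, and its OpenMP threads stay within that pair.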

Ralph


On Nov 18, 2008, at 9:18 AM, Gabriele Fatigati wrote:


Dear Open MPI developers,
I have a strange problem with a mixed MPI+OpenMP program on Open MPI 1.2.6. I'm using PJL TASK GEOMETRY in the LSF scheduler, placing 2 MPI processes on every compute node with 2 OpenMP threads per process. With paffinity and maffinity enabled, I've noticed that on every node 2 threads work at 100% while the other 2 threads do little or no work.
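(A hypothetical sketch of such an LSF setup, not taken from the original job script; the task ids and counts are placeholders only:)

    export LSB_PJL_TASK_GEOMETRY="{(0,1)(2,3)}"   # two MPI tasks per node
    export OMP_NUM_THREADS=2                      # two OpenMP threads per task
    mpirun -np 4 ./hybrid_app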

If I disable paffinity and maffinity, all 4 threads work well, without load imbalance.
I don't understand this: paffinity and maffinity should map every thread to a specific core, improving cache behavior, yet I only get good behavior without those settings!

Can I use paffinity and maffinity in a mixed MPI+OpenMP program? Or do they work only at the MPI process level?

Thanks in advance.


--
Ing. Gabriele Fatigati

CINECA Systems & Tecnologies Department

Supercomputing  Group

Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy

www.cineca.it                    Tel:   +39 051 6171722

g.fatig...@cineca.it




--
Ing. Gabriele Fatigati

CINECA Systems & Tecnologies Department

Supercomputing  Group

Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy

www.cineca.it                    Tel:   +39 051 6171722

g.fatig...@cineca.it








--
stephen.wor...@sophia.inria.fr
2004 route des lucioles - BP93
Sophia Antipolis
06902 CEDEX

Tel: 04 92 38 50 54
Fax: 04 97 15 53 51





--
Edgar Gabriel
Assistant Professor
Parallel Software Technologies Lab      http://pstl.cs.uh.edu
Department of Computer Science          University of Houston
Philip G. Hoffman Hall, Room 524        Houston, TX-77204, USA
Tel: +1 (713) 743-3857                  Fax: +1 (713) 743-3335
