Hi,

Sorry for not answering sooner.

In Open MPI 1.3 we added a paffinity mapping module that lets you map ranks to
specific sockets and cores via a rankfile.

The syntax is quite simple and flexible:

rank N=hostA slot=socket:core_range

rank M=hostB slot=cpu

See the following example:

#mpirun -rf rankfile_name ./app

#cat rankfile_name

rank 0=host1 slot=0

rank 1=host2 slot=0:*

rank 2=host3 slot=1:0,1

rank 3=host3 slot=1:2-3

rank 4=host1 slot=1:0,0:2

Explanation:

Let's assume we have dual-socket, quad-core machines named host1, host2, and
host3.

Using the rankfile above, rank 0 runs on CPU #0 of host1 (run cat /proc/cpuinfo
to see which processor is CPU #0).

Rank 1 will run on host2, on all cores of socket #0.

Rank 2 will run on host3, socket #1, cores 0 and 1.

Rank 3 will run on host3, socket #1, cores 2 through 3.

Rank 4 will run on host1, on socket #1 core #0 and socket #0 core #2.

So, when using threads you should probably use slot=0:* (or any other socket
you specify); that way all of the process's threads will run on all cores of
that socket.

Alternatively, you can use a comma-separated list of exact socket:core pairs,
like rank 4 in the example above.
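For example, a hypothetical rankfile for a hybrid run on the same dual-socket,
quad-core hosts (the file name, host names and application are placeholders,
and it assumes 2 OpenMP threads per rank):

#cat rankfile_hybrid
rank 0=host1 slot=0:*
rank 1=host1 slot=1:*
rank 2=host2 slot=0:*
rank 3=host2 slot=1:*

#mpirun -np 4 -rf rankfile_hybrid -x OMP_NUM_THREADS=2 ./hybrid_app

Each rank then owns one socket, and its OpenMP threads are free to move across
that socket's cores without contending with the other rank's cores.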

You can also add -mca paffinity_base_verbose 10 to the mpirun command line to
see the mapping that took place in the job.
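For instance (the rankfile and application names are just the placeholders used
above):

#mpirun -rf rankfile_name -mca paffinity_base_verbose 10 ./app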

Best Regards.

Lenny.


On 11/20/08, Ralph Castain <r...@lanl.gov> wrote:
>
> At the very least, you would have to call these functions -after- MPI_Init
> so they could override what OMPI did.
>
>
> On Nov 20, 2008, at 8:03 AM, Gabriele Fatigati wrote:
>
>> And in a hybrid MPI+OpenMP program?
>> Are these considerations still valid?
>>
>> 2008/11/20 Edgar Gabriel <gabr...@cs.uh.edu>:
>>
>>> I don't think that they conflict with our paffinity module and settings.
>>> My understanding is that if you set a new affinity mask, it simply
>>> overwrites the previous setting. So in the worst case it voids the setting
>>> made by Open MPI, but I don't think that it should cause 'problems'.
>>> Admittedly, I haven't tried the library and the function calls yet; I just
>>> learned about them relatively recently...
>>>
>>> Thanks
>>> Edgar
>>>
>>> Ralph Castain wrote:
>>>
>>>>
>>>> Interesting - learn something new every day! :-)
>>>>
>>>> How does this interact with OMPI's paffinity/maffinity assignments? With
>>>> the rank/slot mapping and binding system?
>>>>
>>>> Should users -not- set paffinity if they include these numa calls in
>>>> their code?
>>>>
>>>> Can we detect any potential conflict in OMPI and avoid setting
>>>> paffinity_alone? Reason I ask: many systems set paffinity_alone in the
>>>> default mca param file because they always assign dedicated nodes to
>>>> users. While users can be told to be sure to turn it "off" when using
>>>> these calls, it seems inevitable that they will forget - and complaints
>>>> will appear.
>>>>
>>>> Thanks
>>>> Ralph
>>>>
>>>>
>>>>
>>>> On Nov 20, 2008, at 7:34 AM, Edgar Gabriel wrote:
>>>>
>>>>> If you look at recent versions of libnuma, there are two functions called
>>>>> numa_run_on_node() and numa_run_on_node_mask(), which allow thread-based
>>>>> assignments to CPUs....
>>>>>
>>>>> Thanks
>>>>> Edgar
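For anyone who wants to try those libnuma calls, here is a minimal sketch only,
not code from this thread: it assumes a recent libnuma (numa.h, link with
-lnuma) and OpenMP, and the round-robin thread-to-node policy is purely an
example. As Ralph notes above, the calls go after MPI_Init so they override
whatever affinity Open MPI has set:

/* Illustrative sketch: place each OpenMP thread of an MPI rank on a NUMA node.
 * Assumptions: recent libnuma (numa.h, -lnuma) and OpenMP; the thread-to-node
 * policy below is only an example. */
#include <mpi.h>
#include <numa.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* bind after MPI_Init so this overrides
                                 any affinity set by Open MPI */

    if (numa_available() < 0) {
        fprintf(stderr, "libnuma is not available on this system\n");
    } else {
        #pragma omp parallel
        {
            /* Example policy: spread threads round-robin over NUMA nodes. */
            int node = omp_get_thread_num() % (numa_max_node() + 1);
            if (numa_run_on_node(node) != 0)   /* binds the calling thread */
                perror("numa_run_on_node");
        }
    }

    /* ... the real MPI+OpenMP work would go here ... */

    MPI_Finalize();
    return 0;
}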
>>>>>
>>>>> Gabriele Fatigati wrote:
>>>>>
>>>>>>
>>>>>> Is there a way to assign one thread to one core? Also from code, not
>>>>>> necessarily via an Open MPI option.
>>>>>> Thanks.
>>>>>> 2008/11/19 Stephen Wornom <stephen.wor...@sophia.inria.fr>:
>>>>>>
>>>>>>>
>>>>>>> Gabriele Fatigati wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> Ok, but how can I enable it in Open MPI 1.3?
>>>>>>>>
>>>>>>> This may not be relevant, but I could not get a hybrid MPI+OpenMP code
>>>>>>> to work correctly. Would my problem be related to Gabriele's, and
>>>>>>> perhaps fixed in Open MPI 1.3?
>>>>>>> Stephen
>>>>>>>
>>>>>>>>
>>>>>>>> 2008/11/18 Ralph Castain <r...@lanl.gov>:
>>>>>>>>
>>>>>>>>> I am afraid it is only available in 1.3 - we didn't backport it to
>>>>>>>>> the 1.2 series.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Nov 18, 2008, at 10:06 AM, Gabriele Fatigati wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>> how can I set "slot mapping" as you told me? With TASK GEOMETRY? Or
>>>>>>>>>> is it a new Open MPI 1.3 feature?
>>>>>>>>>>
>>>>>>>>>> Thanks.
>>>>>>>>>>
>>>>>>>>>> 2008/11/18 Ralph Castain <r...@lanl.gov>:
>>>>>>>>>>
>>>>>>>>>>> Unfortunately, paffinity doesn't know anything about assigning
>>>>>>>>>>> threads to cores. This is actually a behavior of Linux, which only
>>>>>>>>>>> allows paffinity to be set at the process level. So, when you set
>>>>>>>>>>> paffinity on a process, you bind all threads of that process to the
>>>>>>>>>>> specified core(s). You cannot specify that a thread be given a
>>>>>>>>>>> specific core.
>>>>>>>>>>>
>>>>>>>>>>> In this case, your two threads/process are sharing the same core
>>>>>>>>>>> and thus contending for it. As you'd expect in that situation, one
>>>>>>>>>>> thread gets the vast majority of the attention, while the other
>>>>>>>>>>> thread is mostly idle.
>>>>>>>>>>>
>>>>>>>>>>> If you can upgrade to the beta 1.3 release, try using the slot
>>>>>>>>>>> mapping to assign multiple cores to each process. This will ensure
>>>>>>>>>>> that the threads for that process have exclusive access to those
>>>>>>>>>>> cores, but will not bind a particular thread to one core - the
>>>>>>>>>>> threads can "move around" across the specified set of cores. Your
>>>>>>>>>>> threads will then be able to run without interfering with each
>>>>>>>>>>> other.
>>>>>>>>>>>
>>>>>>>>>>> Ralph
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Nov 18, 2008, at 9:18 AM, Gabriele Fatigati wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> Dear Open MPI developers,
>>>>>>>>>>>> I have a strange problem with a mixed MPI+OpenMP program on Open
>>>>>>>>>>>> MPI 1.2.6. I'm using PJL TASK GEOMETRY in the LSF scheduler,
>>>>>>>>>>>> setting 2 MPI processes per compute node and 2 OpenMP threads per
>>>>>>>>>>>> process. With paffinity and maffinity enabled, I've noticed that
>>>>>>>>>>>> on every node 2 threads work at 100% while the other 2 threads do
>>>>>>>>>>>> little or no work.
>>>>>>>>>>>>
>>>>>>>>>>>> If I disable paffinity and maffinity, all 4 threads work well,
>>>>>>>>>>>> without load imbalance. I don't understand this issue: paffinity
>>>>>>>>>>>> and maffinity should map every thread to a specific core,
>>>>>>>>>>>> optimizing cache usage, but I only get good behavior without
>>>>>>>>>>>> those settings!
>>>>>>>>>>>>
>>>>>>>>>>>> Can I use paffinity and maffinity in a mixed MPI+OpenMP program,
>>>>>>>>>>>> or do they work only for MPI processes?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks in advance.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Ing. Gabriele Fatigati
>>>>>>>>>>>>
>>>>>>>>>>>> CINECA Systems & Tecnologies Department
>>>>>>>>>>>>
>>>>>>>>>>>> Supercomputing  Group
>>>>>>>>>>>>
>>>>>>>>>>>> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>>>>>>>>>>>>
>>>>>>>>>>>> www.cineca.it                    Tel:   +39 051 6171722
>>>>>>>>>>>>
>>>>>>>>>>>> g.fatig...@cineca.it
>>>>>>>>>>>  --
>>>>>>>>>> Ing. Gabriele Fatigati
>>>>>>>>>>
>>>>>>>>>> CINECA Systems & Tecnologies Department
>>>>>>>>>>
>>>>>>>>>> Supercomputing  Group
>>>>>>>>>>
>>>>>>>>>> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>>>>>>>>>>
>>>>>>>>>> www.cineca.it                    Tel:   +39 051 6171722
>>>>>>>>>>
>>>>>>>>>> g.fatig...@cineca.it
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>> --
>>>>>>> stephen.wor...@sophia.inria.fr
>>>>>>> 2004 route des lucioles - BP93
>>>>>>> Sophia Antipolis
>>>>>>> 06902 CEDEX
>>>>>>>
>>>>>>> Tel: 04 92 38 50 54
>>>>>>> Fax: 04 97 15 53 51
>>>>>>>
>>>>>>>
>>>>
>>>
>>> --
>>> Edgar Gabriel
>>> Assistant Professor
>>> Parallel Software Technologies Lab      http://pstl.cs.uh.edu
>>> Department of Computer Science          University of Houston
>>> Philip G. Hoffman Hall, Room 524        Houston, TX-77204, USA
>>> Tel: +1 (713) 743-3857                  Fax: +1 (713) 743-3335
>>>
>>>
>>>
>>
>>
>> --
>> Ing. Gabriele Fatigati
>>
>> CINECA Systems & Tecnologies Department
>>
>> Supercomputing  Group
>>
>> Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
>>
>> www.cineca.it                    Tel:   +39 051 6171722
>>
>> g.fatig...@cineca.it
