Re: [OMPI users] Run on dual-socket system

2022-11-26 Thread Gilles Gouaillardet via users
Arham,

It should be balanced: the default mapping assigns processes to NUMA
packages round robin.

You can run
mpirun --report-bindings -n 28 true
to have Open MPI report the bindings,

or

mpirun --tag-output -n 28 grep Cpus_allowed_list /proc/self/status

to have each task report which physical CPUs it is bound to.
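To make sense of that second command's output, the Linux cpu-list string (e.g. "0-13" or "0,2,4-6") can be expanded into explicit CPU ids and compared against the machine's socket layout. A minimal sketch; the socket-to-core numbering in the comment is an assumption, check `lscpu` for the real layout:

```python
def expand_cpu_list(cpu_list: str) -> list:
    """Expand a Linux cpu-list string like "0-3,8" into [0, 1, 2, 3, 8]."""
    cpus = []
    for part in cpu_list.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

if __name__ == "__main__":
    # Hypothetical layout: cores 0-13 on package 0, 14-27 on package 1.
    # A rank bound like this spans both sockets:
    print(expand_cpu_list("0-6,14-20"))
    # -> [0, 1, 2, 3, 4, 5, 6, 14, 15, 16, 17, 18, 19, 20]
```

If the reported bindings turn out unbalanced, something like `mpirun --map-by ppr:14:package --bind-to core -n 28 ./code` should force 14 ranks per socket (recent Open MPI syntax; older releases spell the resource `socket` rather than `package`).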


Cheers,

Gilles

On Sat, Nov 26, 2022 at 5:38 PM Arham Amouei via users <
users@lists.open-mpi.org> wrote:

> Hi
>
> If I run a code with
>
> mpirun -n 28 ./code
>
> Is it guaranteed that Open MPI and/or the OS give an equal number of processes
> to each socket? Or do I have to use some mpirun options?
>
> Running the code with the command given above, one socket gets much hotter
> than the other (80°C vs 60°C). I'm sure that the code itself divides the
> work equally among the processes.
>
> The system is Dell Precision 7910. Two Xeon E5-2680 v4 and two 16GB 2400
> RAM modules are installed. There are a total number of 28 physical cores.
> The total number of logical cores is 56. The OS is Ubuntu 22.04.
>
> Thanks in advance
> Arham
>
>
>
>

