You just need to tell mpirun that you want your procs to be bound to cores, not 
to sockets (which is the default).

Add "--bind-to core" to your mpirun cmd line


On Oct 10, 2021, at 11:17 PM, Chang Liu via users <users@lists.open-mpi.org> wrote:

Yes, they are. This is an interactive job from:

salloc -N 1 --ntasks-per-node=64 --cpus-per-task=2 --gpus-per-node=4 --gpu-mps 
--time=24:00:00

Chang

On 10/11/21 2:09 AM, Åke Sandgren via users wrote:
On 10/10/21 5:38 PM, Chang Liu via users wrote:
OMPI v4.1.1-85-ga39a051fd8

% srun bash -c "cat /proc/self/status|grep Cpus_allowed_list"
Cpus_allowed_list:      58-59
Cpus_allowed_list:      106-107
Cpus_allowed_list:      110-111
Cpus_allowed_list:      114-115
Cpus_allowed_list:      16-17
Cpus_allowed_list:      36-37
Cpus_allowed_list:      54-55
...

% mpirun bash -c "cat /proc/self/status|grep Cpus_allowed_list"
Cpus_allowed_list:      0-127
Cpus_allowed_list:      0-127
Cpus_allowed_list:      0-127
Cpus_allowed_list:      0-127
Cpus_allowed_list:      0-127
Cpus_allowed_list:      0-127
Cpus_allowed_list:      0-127
...
Was that run in the same batch job? If not, the data is useless.

-- 
Chang Liu
Staff Research Physicist
+1 609 243 3438
c...@pppl.gov
Princeton Plasma Physics Laboratory
100 Stellarator Rd, Princeton NJ 08540, USA
