Christopher,

I do not think Open MPI explicitly asks SLURM which cores have been assigned on each node. So if you are planning to run multiple jobs on the same node, your best bet is probably to have SLURM use cpusets.
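
For instance, a minimal sketch of a cgroup-based setup (the file names
and options below assume a reasonably recent Slurm; please verify them
against your site's configuration):

    # slurm.conf: let slurmd confine tasks with cgroups
    TaskPlugin=task/cgroup

    # cgroup.conf: restrict each job to its allocated cores
    ConstrainCores=yes

With ConstrainCores=yes, every process of a job, including anything
mpirun launches, runs inside a cpuset limited to the cores Slurm
allocated, so two jobs sharing a node cannot touch each other's cores.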

Cheers,

Gilles

On Sat, Feb 24, 2024 at 7:25 AM Christopher Daley via users <
users@lists.open-mpi.org> wrote:

> Dear Support,
>
> I'm seeking clarification about the expected behavior of mpirun in Slurm
> jobs.
>
> Our setup uses Slurm for resource allocation and Open MPI's mpirun to
> launch MPI applications. We have found that when two Slurm jobs are
> allocated different cores on the same compute node, the MPI ranks in
> Slurm job 1 map to the same cores as the MPI ranks in Slurm job 2. It
> appears that mpirun is not considering the details of the Slurm
> allocation. We get the expected behavior when srun is employed as the
> MPI launcher instead of mpirun, i.e., the MPI ranks in Slurm job 1 use
> different cores than the MPI ranks in Slurm job 2.
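>
> For reference, the bindings can be checked like this (illustrative
> commands; ./app is a placeholder for the real application):
>
>     # inside each Slurm job on the shared node
>     mpirun -np 4 --report-bindings ./app
>
>     # same check with srun as the launcher
>     srun --cpu-bind=cores -n 4 ./app
>
> With mpirun, both jobs report the same core set; with srun, each job
> reports only the cores of its own allocation.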
>
> We have observed this with Open MPI 4.1.6 and Open MPI 5.0.2. Should we
> expect that the mpirun in each job will only use the exact cores that
> were allocated by Slurm?
>
> Thanks,
> Chris
>
