Re: [OMPI users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-19 Thread Charles A Taylor
> Or one could tell OMPI to do what you really want it to do using map-by and
> bind-to options, perhaps putting them in the default MCA param file.

Nod. Agreed, but far too complicated for 98% of our users.

> Or you could enable cgroups in slurm so that OMPI sees the binding envelope -
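For reference, a minimal sketch of what those two approaches might look like, with illustrative values rather than anything taken from this thread (the binary name ./my_mpi_app is a placeholder):

  # Site-wide OMPI defaults in the default MCA param file
  # ($prefix/etc/openmpi-mca-params.conf)
  rmaps_base_mapping_policy = core
  hwloc_base_binding_policy = core

  # Per-job equivalent on the command line
  mpirun --map-by core --bind-to core ./my_mpi_app

  # Slurm-side cgroup containment, so OMPI sees the binding envelope
  TaskPlugin=task/cgroup    # in slurm.conf
  ConstrainCores=yes        # in cgroup.conf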

Re: [OMPI users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-19 Thread r...@open-mpi.org
> On Dec 19, 2017, at 8:46 AM, Charles A Taylor wrote:
>
> Hi All,
>
> I’m glad to see this come up. We’ve used OpenMPI for a long time and
> switched to SLURM (from torque+moab) about 2.5 years ago. At the time, I had
> a lot of questions about running MPI jobs under

Re: [OMPI users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-19 Thread Charles A Taylor
Hi All, I’m glad to see this come up. We’ve used OpenMPI for a long time and switched to SLURM (from torque+moab) about 2.5 years ago. At the time, I had a lot of questions about running MPI jobs under SLURM and good information seemed to be scarce - especially regarding “srun”. I’ll just
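A quick way to see what a given installation actually supports before picking a launch method (commands shown for illustration; the output depends on how Slurm and OpenMPI were built):

  srun --mpi=list            # list the PMI plugins this Slurm was built with
  ompi_info | grep -i pmix   # check whether this OpenMPI build includes PMIx support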

Re: [OMPI users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-19 Thread Prentice Bisbal
Ralph, Thank you very much for your response. I'll pass this along to my users. Sounds like we might need to do some testing of our own. We're still using Slurm 15.08, but planning to upgrade to 17.11 soon, so it sounds like we'll get some performance benefits from doing so. Prentice On

Re: [OMPI users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-18 Thread r...@open-mpi.org
We have had reports of applications running faster when executing under OMPI’s mpiexec versus when started by srun. Reasons aren’t entirely clear, but are likely related to differences in mapping/binding options (OMPI provides a very large range compared to srun) and optimization flags provided
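As a rough illustration of that difference in expressiveness (example options only; the spelling of srun's binding flag varies by Slurm version, and ./my_mpi_app is a placeholder):

  # A small sample of OMPI's mapping/binding controls
  mpirun --map-by ppr:4:socket --bind-to core --report-bindings ./my_mpi_app

  # Roughly comparable srun controls
  srun --ntasks-per-socket=4 --cpu_bind=cores ./my_mpi_app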

[OMPI users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

2017-12-18 Thread Prentice Bisbal
Greetings OpenMPI users and devs! We use OpenMPI with Slurm as our scheduler, and a user has asked me this: should they use mpiexec/mpirun or srun to start their MPI jobs through Slurm? My inclination is to use mpiexec, since that is the only method that's (somewhat) defined in the MPI
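For context, the two alternatives look roughly like this inside a batch script. This is a sketch with placeholder resources and a hypothetical binary ./my_mpi_app, not a recommendation from the thread:

  #!/bin/bash
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=16

  # Option 1: OMPI's own launcher reads the Slurm allocation and handles mapping/binding
  mpiexec ./my_mpi_app

  # Option 2: direct launch through Slurm (requires matching PMI/PMIx support)
  srun --mpi=pmix ./my_mpi_app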