[OMPI users] OpenFabrics warning

2018-11-12 Thread Andrei Berceanu
Hi all,

Running a CUDA+MPI application on a node with 2 K80 GPUs, I get the following warnings:

    WARNING: There is at least non-excluded one OpenFabrics device found,
    but there are no active ports detected (or Open MPI was [...]

Re: [OMPI users] OpenFabrics warning

2018-11-12 Thread Andrei Berceanu
The node has an IB card, but it is a stand-alone node, disconnected from the rest of the cluster. I am using OMPI to communicate internally between the GPUs of this node (and not between nodes). So how can I disable the IB?

Re: [OMPI users] OpenFabrics warning

2018-11-12 Thread Michael Di Domenico
On Mon, Nov 12, 2018 at 8:08 AM Andrei Berceanu wrote:
> Running a CUDA+MPI application on a node with 2 K80 GPUs, I get the following warnings:
>
> WARNING: There is at least non-excluded one OpenFabrics device [...]

Re: [OMPI users] OMPI 3.1.x, PMIx, SLURM, and mpiexec/mpirun

2018-11-12 Thread Ralph H Castain
mpirun should definitely still work in parallel with srun - they aren’t mutually exclusive.

OMPI 3.1.2 contains PMIx v2.1.3. The problem here is that you built Slurm against PMIx v2.0.2, which is not cross-version capable. You can see the cross-version situation here: [...]
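For reference, a minimal sketch of how to check which PMIx each side was built with on a given system (the commands are standard Open MPI and Slurm tools, but the exact output depends on the local build):

    # Show the PMIx component Open MPI was built with
    ompi_info | grep -i pmix

    # List the MPI/PMIx plugin types this Slurm installation supports
    srun --mpi=list

If the two sides report incompatible PMIx versions (as with v2.0.2 vs v2.1.3 here), direct srun launch will fail even though mpirun continues to work.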

Re: [OMPI users] OpenFabrics warning

2018-11-12 Thread Andrei Berceanu
Problem solved, thank you!

Best,
Andrei

On Mon, Nov 12, 2018 at 6:33 PM Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
> Andrei,
>
> you can
>
>     mpirun --mca btl ^openib ...
>
> in order to "disable" infiniband
>
> Cheers,
> Gilles
> On Mon, Nov 12, 2018 at 9:52 AM Andrei [...]

Re: [OMPI users] OpenFabrics warning

2018-11-12 Thread Gilles Gouaillardet
Andrei,

you can

    mpirun --mca btl ^openib ...

in order to "disable" infiniband.

Cheers,
Gilles

On Mon, Nov 12, 2018 at 9:52 AM Andrei Berceanu wrote:
> The node has an IB card, but it is a stand-alone node, disconnected from the rest of the cluster.
> I am using OMPI to communicate [...]
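For reference, the same openib exclusion can be expressed through any of Open MPI's standard MCA parameter mechanisms; a short sketch (the application name is illustrative):

    # On the command line, exclude the openib BTL
    mpirun --mca btl ^openib -np 2 ./my_cuda_app

    # Or via the environment, for all subsequent mpirun invocations
    export OMPI_MCA_btl=^openib

    # Or persistently, in $HOME/.openmpi/mca-params.conf
    btl = ^openib

The ^ prefix tells Open MPI to use every BTL except the listed ones, so shared-memory and self transports remain available for on-node GPU-to-GPU communication.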

Re: [OMPI users] OMPI 3.1.x, PMIx, SLURM, and mpiexec/mpirun

2018-11-12 Thread Bennet Fauber
Thanks, Ralph. I did try to build OMPI against PMIx 2.0.2, using the configure option --with-pmix=/opt/pmix/2.0.2, but it sounds like the better route would be to upgrade to PMIx 2.1. Thanks, and I'll give it a try!

-- bennet

On Mon, Nov 12, 2018 at 12:42 PM Ralph H Castain wrote: [...]
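A minimal sketch of that route, building Open MPI 3.1.x against an external PMIx 2.1.x (the install prefixes here are hypothetical; an external PMIx build also requires pointing Open MPI at the same libevent that PMIx was built against):

    # Configure Open MPI against an external PMIx 2.1.x install
    ./configure --prefix=/opt/openmpi/3.1.2 \
                --with-pmix=/opt/pmix/2.1.x \
                --with-libevent=/usr \
                --with-slurm
    make -j8 && make install

With Slurm rebuilt against the same PMIx series, srun-launched jobs and mpirun-launched jobs should then agree on the PMIx wire protocol.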