Thanks, Ralph,
I did try to build OMPI against PMIx 2.0.2 -- using the configure
option --with-pmix=/opt/pmix/2.0.2 -- but it sounds like the better route
would be to upgrade to PMIx 2.1.
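Concretely, I expect the rebuild to look something like the following (the
install prefixes are just placeholders for my local layout, and I may need
additional flags depending on how PMIx itself was built):

./configure --prefix=/opt/openmpi/3.1.2 --with-pmix=/opt/pmix/2.1
make -j8 all install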
Thanks, and I'll give it a try!
-- bennet
On Mon, Nov 12, 2018 at 12:42 PM Ralph H Castain wrote:
mpirun should definitely still work in parallel with srun - they aren’t
mutually exclusive. OMPI 3.1.2 contains PMIx v2.1.3.
The problem here is that you built Slurm against PMIx v2.0.2, which is not
cross-version capable. You can see the cross-version situation here:
https://pmix.org/support/f
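If you want to confirm what each side was actually built against, something
along these lines usually works (exact output format varies a bit by version):

ompi_info --parsable | grep -i pmix    # PMIx support Open MPI was built with
srun --mpi=list                        # PMI/PMIx plugins Slurm was built with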
Problem solved, thank you!
Best,
Andrei
On Mon, Nov 12, 2018 at 6:33 PM Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
Andrei,
you can
mpirun --mca btl ^openib ...
in order to "disable" InfiniBand.
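If you prefer, the same thing can be set via the environment instead of the
command line, for example (the application name here is only a placeholder):

export OMPI_MCA_btl=^openib
mpirun -np 2 ./your_cuda_app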
Cheers,
Gilles
On Mon, Nov 12, 2018 at 9:52 AM Andrei Berceanu wrote:
The node has an IB card, but it is a stand-alone node, disconnected from
the rest of the cluster.
I am using OMPI to communicate internally between the GPUs of this node
(and not between nodes).
So how can I disable the IB?
On Mon, Nov 12, 2018 at 8:08 AM Andrei Berceanu wrote:
Hi all,
Running a CUDA+MPI application on a node with 2 K80 GPUs, I get the
following warnings:
--
WARNING: There is at least non-excluded one OpenFabrics device found,
but there are no active ports detected (or Open MPI was