Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-02-17 Thread Gilles Gouaillardet via users
Angel,

InfiniBand detection likely fails before checking expanded verbs.
Please compress and post the full configure output.
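
For example, something like this captures and compresses it (a generic
sketch; since Spack drives the build for you, the same output should also
be available in Spack's build logs):

```
./configure <your options here> 2>&1 | tee configure.log
gzip configure.log
```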


Cheers,

Gilles

On Fri, Feb 18, 2022 at 12:02 AM Angel de Vicente via users <
users@lists.open-mpi.org> wrote:

> Hi,
>
> I'm trying to compile the latest OpenMPI version with InfiniBand support
> on our local cluster, but didn't get very far (since I'm installing this
> via Spack, I also asked in their support group).
>
> I'm doing the installation via Spack, which issues the following
> configure step (note the options given for --with-knem, --with-hcoll and
> --with-mxm):
>
> ,
> | configure'
> |
> '--prefix=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/openmpi-4.1.1-jsvbusyjgthr2d6oyny5klt62gm6ma2u'
> | '--enable-shared' '--disable-silent-rules' '--disable-builtin-atomics'
> | '--enable-static' '--without-pmi'
> |
> '--with-zlib=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/zlib-1.2.11-hrstx5ffrg4f4k3xc2anyxed3mmgdcoz'
> | '--enable-mpi1-compatibility' '--with-knem=/opt/knem-1.1.2.90mlnx2'
> | '--with-hcoll=/opt/mellanox/hcoll' '--without-psm' '--without-ofi'
> | '--without-cma' '--without-ucx' '--without-fca'
> | '--with-mxm=/opt/mellanox/mxm' '--without-verbs' '--without-xpmem'
> | '--without-psm2' '--without-alps' '--without-lsf' '--without-sge'
> | '--without-slurm' '--without-tm' '--without-loadleveler'
> | '--disable-memchecker'
> |
> '--with-libevent=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/libevent-2.1.12-yd5l4tjmnigv6dqlv5afpn4zc6ekdchc'
> |
> '--with-hwloc=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/hwloc-2.6.0-bfnt4g3givflydpe5d2iglyupgbzxbfn'
> | '--disable-java' '--disable-mpi-java' '--without-cuda'
> | '--enable-wrapper-rpath' '--disable-wrapper-runpath' '--disable-mpi-cxx'
> | '--disable-cxx-exceptions'
> |
> '--with-wrapper-ldflags=-Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib/gcc/x86_64-pc-linux-gnu/9.3.0
> |
> -Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib64'
> `
>
> Later on in the configuration phase I see:
>
> ,
> | --- MCA component btl:openib (m4 configuration macro)
> | checking for MCA component btl:openib compile mode... static
> | checking whether expanded verbs are available... yes
> | checking whether IBV_EXP_ATOMIC_HCA_REPLY_BE is declared... yes
> | checking whether IBV_EXP_QP_CREATE_ATOMIC_BE_REPLY is declared... yes
> | checking whether ibv_exp_create_qp is declared... yes
> | checking whether ibv_exp_query_device is declared... yes
> | checking whether IBV_EXP_QP_INIT_ATTR_ATOMICS_ARG is declared... yes
> | checking for struct ibv_exp_device_attr.ext_atom... yes
> | checking for struct ibv_exp_device_attr.exp_atomic_cap... yes
> | checking if MCA component btl:openib can compile... no
> `
>
> This is the first time I have tried to compile OpenMPI this way, and I
> get a bit confused about what each bit is doing, but it looks like
> configure goes through the motions of setting up btl:openib, and then
> for some reason it cannot compile it.
>
> Any suggestions/pointers?
>
> Many thanks,
> --
> Ángel de Vicente
>
> Tel.: +34 922 605 747
> Web.: http://research.iac.es/proyecto/polmag/
>


Re: [OMPI users] Check equality of a value in all MPI ranks

2022-02-17 Thread Niranda Perera via users
Thanks Joseph! I think that's a nifty trick! :-)
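
For the archives, here is the same trick translated to mpi4py (a sketch on
my part, untested; note that it needs the buffer-based, capital-A
Allreduce on a NumPy array -- the pickle-based lowercase allreduce would
reduce the pair as a whole Python object, comparing it lexicographically
rather than element-wise, which breaks the trick):

```python
from mpi4py import MPI
import numpy as np

def is_same(x, comm=MPI.COMM_WORLD):
    # Pack {-x, x}; an element-wise MIN leaves {-max(x), min(x)}.
    p = np.array([-x, x], dtype='i')
    comm.Allreduce(MPI.IN_PLACE, p, op=MPI.MIN)
    # All ranks hold the same x iff max(x) == min(x).
    return bool(p[0] == -p[1])
```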

On Thu, Feb 17, 2022 at 4:57 PM Joseph Schuchart via users <
users@lists.open-mpi.org> wrote:

> Hi Niranda,
>
> A pattern I have seen in several places is to allreduce the pair p =
> {-x,x} with MPI_MIN or MPI_MAX. If in the resulting pair p[0] == -p[1],
> then everyone has the same value. If not, at least one rank had a
> different value. Example:
>
> ```
> #include <mpi.h>
> #include <stdbool.h>
>
> bool is_same(int x) {
>    int p[2];
>    p[0] = -x;
>    p[1] = x;
>    /* element-wise MIN leaves p = { -max(x), min(x) } on every rank */
>    MPI_Allreduce(MPI_IN_PLACE, p, 2, MPI_INT, MPI_MIN, MPI_COMM_WORLD);
>    /* all ranks hold the same x iff max(x) == min(x) */
>    return (p[0] == -p[1]);
> }
> ```
>
> HTH,
> Joseph
>
> On 2/17/22 16:40, Niranda Perera via users wrote:
> > Hi all,
> >
> > Say I have some int `x`. I want to check if all MPI ranks get the same
> > value for `x`. What's a good way to achieve this using MPI collectives?
> >
> > The simplest approach I could think of is to broadcast rank 0's `x`,
> > do the comparison, and allreduce-LAND the comparison result. This
> > requires two collective operations.
> > ```python
> > ...
> > x = ... # each rank may produce different values for x
> > x_bcast = comm.bcast(x, root=0)
> > all_equal = comm.allreduce(x==x_bcast, op=MPI.LAND)
> > if not all_equal:
> >raise Exception()
> > ...
> > ```
> > Is there a better way to do this?
> >
> >
> > --
> > Niranda Perera
> > https://niranda.dev/
> > @n1r44 
> >
>
>

-- 
Niranda Perera
https://niranda.dev/
@n1r44 


Re: [OMPI users] Check equality of a value in all MPI ranks

2022-02-17 Thread Joseph Schuchart via users

Hi Niranda,

A pattern I have seen in several places is to allreduce the pair p = 
{-x,x} with MPI_MIN or MPI_MAX. If in the resulting pair p[0] == -p[1], 
then everyone has the same value. If not, at least one rank had a 
different value. Example:


```
#include <mpi.h>
#include <stdbool.h>

bool is_same(int x) {
  int p[2];
  p[0] = -x;
  p[1] = x;
  /* element-wise MIN leaves p = { -max(x), min(x) } on every rank */
  MPI_Allreduce(MPI_IN_PLACE, p, 2, MPI_INT, MPI_MIN, MPI_COMM_WORLD);
  /* all ranks hold the same x iff max(x) == min(x) */
  return (p[0] == -p[1]);
}
```
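
A nice property is that it needs just one collective. One caveat, though:
if x can be INT_MIN, computing -x overflows (undefined behavior in C), so
for fully general input you would negate into a wider type first (e.g.
int64_t with MPI_INT64_T).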

HTH,
Joseph

On 2/17/22 16:40, Niranda Perera via users wrote:

Hi all,

Say I have some int `x`. I want to check if all MPI ranks get the same 
value for `x`. What's a good way to achieve this using MPI collectives?


The simplest approach I could think of is to broadcast rank 0's `x`, do
the comparison, and allreduce-LAND the comparison result. This requires
two collective operations.

```python
...
x = ... # each rank may produce different values for x
x_bcast = comm.bcast(x, root=0)
all_equal = comm.allreduce(x==x_bcast, op=MPI.LAND)
if not all_equal:
   raise Exception()
...
```
Is there a better way to do this?


--
Niranda Perera
https://niranda.dev/
@n1r44 





[OMPI users] Check equality of a value in all MPI ranks

2022-02-17 Thread Niranda Perera via users
Hi all,

Say I have some int `x`. I want to check if all MPI ranks get the same
value for `x`. What's a good way to achieve this using MPI collectives?

The simplest approach I could think of is to broadcast rank 0's `x`, do the
comparison, and allreduce-LAND the comparison result. This requires two
collective operations.
```python
...
x = ... # each rank may produce different values for x
x_bcast = comm.bcast(x, root=0)
all_equal = comm.allreduce(x==x_bcast, op=MPI.LAND)
if not all_equal:
   raise Exception()
...
```
Is there a better way to do this?


-- 
Niranda Perera
https://niranda.dev/
@n1r44 


[OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-02-17 Thread Angel de Vicente via users
Hi,

I'm trying to compile the latest OpenMPI version with InfiniBand support
on our local cluster, but didn't get very far (since I'm installing this
via Spack, I also asked in their support group).

I'm doing the installation via Spack, which issues the following
configure step (note the options given for --with-knem, --with-hcoll and
--with-mxm):

,
| configure'
| 
'--prefix=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/openmpi-4.1.1-jsvbusyjgthr2d6oyny5klt62gm6ma2u'
| '--enable-shared' '--disable-silent-rules' '--disable-builtin-atomics'
| '--enable-static' '--without-pmi'
| 
'--with-zlib=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/zlib-1.2.11-hrstx5ffrg4f4k3xc2anyxed3mmgdcoz'
| '--enable-mpi1-compatibility' '--with-knem=/opt/knem-1.1.2.90mlnx2'
| '--with-hcoll=/opt/mellanox/hcoll' '--without-psm' '--without-ofi'
| '--without-cma' '--without-ucx' '--without-fca'
| '--with-mxm=/opt/mellanox/mxm' '--without-verbs' '--without-xpmem'
| '--without-psm2' '--without-alps' '--without-lsf' '--without-sge'
| '--without-slurm' '--without-tm' '--without-loadleveler'
| '--disable-memchecker'
| 
'--with-libevent=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/libevent-2.1.12-yd5l4tjmnigv6dqlv5afpn4zc6ekdchc'
| 
'--with-hwloc=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/hwloc-2.6.0-bfnt4g3givflydpe5d2iglyupgbzxbfn'
| '--disable-java' '--disable-mpi-java' '--without-cuda'
| '--enable-wrapper-rpath' '--disable-wrapper-runpath' '--disable-mpi-cxx'
| '--disable-cxx-exceptions'
| 
'--with-wrapper-ldflags=-Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib/gcc/x86_64-pc-linux-gnu/9.3.0
| 
-Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib64'
`

Later on in the configuration phase I see:

,
| --- MCA component btl:openib (m4 configuration macro)
| checking for MCA component btl:openib compile mode... static
| checking whether expanded verbs are available... yes
| checking whether IBV_EXP_ATOMIC_HCA_REPLY_BE is declared... yes
| checking whether IBV_EXP_QP_CREATE_ATOMIC_BE_REPLY is declared... yes
| checking whether ibv_exp_create_qp is declared... yes
| checking whether ibv_exp_query_device is declared... yes
| checking whether IBV_EXP_QP_INIT_ATTR_ATOMICS_ARG is declared... yes
| checking for struct ibv_exp_device_attr.ext_atom... yes
| checking for struct ibv_exp_device_attr.exp_atomic_cap... yes
| checking if MCA component btl:openib can compile... no
`

This is the first time I have tried to compile OpenMPI this way, and I
get a bit confused about what each bit is doing, but it looks like
configure goes through the motions of setting up btl:openib, and then for
some reason it cannot compile it.

Any suggestions/pointers?

Many thanks,
-- 
Ángel de Vicente

Tel.: +34 922 605 747
Web.: http://research.iac.es/proyecto/polmag/


Re: [OMPI users] Verbose logging options to track IB communication issues

2022-02-17 Thread John Hearns via users
I would start at a lower level.  Clear your error counters, then run some
traffic over the fabric, maybe using an IMB or OSU benchmark.
Then look to see if any ports are very noisy - that usually indicates a
cable needing a reseat or replacement.
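
With the standard infiniband-diags tools that would be something along
these lines (a sketch; run via whatever management tooling you have):

```
perfquery -R     # reset the local port's error counters (on each node)
# ...generate traffic across the fabric (see the benchmark runs below)...
ibqueryerrors    # report ports whose error counters are now nonzero
```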

Now test at the node level. Run IMB or OSU bandwidth or latency tests
between pairs of nodes. Are any nodes particularly slow?
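
For example, with the OSU micro-benchmarks (host names are placeholders):

```
mpirun -np 2 --host nodeA,nodeB osu_latency
mpirun -np 2 --host nodeA,nodeB osu_bw
```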

Now run tests between groups of nodes which share a leaf switch.

Finally, if this really is a problem triggered by a particular
application, start by bisecting your network. Run the application on half
the nodes, then the other half. My hunch is that you will find faulty
cables.
I can of course be wrong, and it may be something that only this
application triggers.
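
On the verbose logging question itself: I believe the framework-level MCA
verbosity knobs are the usual starting point, e.g. (untested):

```
mpirun --mca btl_base_verbose 100 ./your_app
```

Also note that with the default btl_openib_ib_timeout of 20, the ACK
timeout quoted in the error message works out to 4.096 us * 2^20, i.e.
roughly 4.3 seconds per retry.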






On Wed, 16 Feb 2022 at 19:28, Shan-ho Tsai via users <
users@lists.open-mpi.org> wrote:

>
> Greetings,
>
> We are troubleshooting an IB network fabric issue that is causing some of
> our MPI applications to fail with errors like this:
>
> --
> The InfiniBand retry count between two MPI processes has been
> exceeded.  "Retry count" is defined in the InfiniBand spec 1.2 (section
> 12.7.38):
>
> The total number of times that the sender wishes the receiver to
> retry timeout, packet sequence, etc. errors before posting a
> completion error.
>
> This error typically means that there is something awry within the
> InfiniBand fabric itself.  You should note the hosts on which this
> error has occurred; it has been observed that rebooting or removing a
> particular host from the job can sometimes resolve this issue.
>
> Two MCA parameters can be used to control Open MPI's behavior with
> respect to the retry count:
>
> * btl_openib_ib_retry_count - The number of times the sender will
>   attempt to retry (defaulted to 7, the maximum value).
> * btl_openib_ib_timeout - The local ACK timeout parameter (defaulted
>   to 20).  The actual timeout value used is calculated as:
>
>  4.096 microseconds * (2^btl_openib_ib_timeout)
>
>   See the InfiniBand spec 1.2 (section 12.7.34) for more details.
>
> Below is some information about the host that raised the error and the
> peer to which it was connected:
>
>   Local host:   a3-6
>   Local device: mlx5_0
>   Peer host:a3-14
>
> You may need to consult with your system administrator to get this
> problem fixed.
> --
>
>
> I would like to enable verbose logging for the MPI application to see if
> that could help us pinpoint the IB communication issue (or the nodes with
> the issue).
>
> I see many verbose logging options reported by "ompi_info -a | grep
> verbose", but I am not sure which one(s) could be helpful here. Would any
> of them be useful, or are there other ways to enable verbose logging to
> help track down the issue?
>
> Thank you so much in advance.
>
> Best regards,
>
> 
> Shan-Ho Tsai
> University of Georgia, Athens GA
>
>
>