Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-03-11 Thread Angel de Vicente via users
Hello,


Joshua Ladd  writes:

> These are very, very old versions of UCX and HCOLL installed in your
> environment. Also, MXM was deprecated years ago in favor of UCX. What
> version of MOFED is installed (run ofed_info -s)? What HCA generation
> is present (run ibstat).

MOFED is: MLNX_OFED_LINUX-4.1-1.0.2.0

As for the HCA generation, we don't seem to have the ibstat command
installed; is there any other way to get this info? But I *think* they
are ConnectX-3.
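
I'll try to confirm the HCA model with something like the following,
assuming at least one of these tools is available on the compute nodes:

,
| ibv_devinfo | grep -E 'hca_id|board_id'   # part of libibverbs-utils, if installed
| lspci | grep -i mellanox                  # HCA model from the PCI listing
`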


> > Stupid answer from me. If latency/bandwidth numbers are bad then check
> > that you are really running over the interface that you think you
> > should be. You could be falling back to running over Ethernet.

apparently the problem with my first attempt was that I was installing a
very bare version of UCX. I re-did the installation with the following
configuration:

,
| '--prefix=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/ucx-1.11.2-67aihiwsolnad6aqt2ei6j6iaptqgecf'
| '--enable-mt' '--enable-cma' '--disable-params-check' '--with-avx'
| '--enable-optimizations' '--disable-assertions' '--disable-logging'
| '--with-pic' '--with-rc' '--with-ud' '--with-dc' '--without-mlx5-dv'
| '--with-ib-hw-tm' '--with-dm' '--with-cm' '--without-rocm'
| '--without-java' '--without-cuda' '--without-gdrcopy' '--with-knem'
| '--without-xpmem'
`


and now the numbers are very good, most of the time better than the
"native" OpenMPI provided in the cluster.


So now I wanted to try another combination, using the Intel compiler
instead of the GNU one. Apparently everything compiled OK, and when I
run the OSU Microbenchmarks the point-to-point benchmarks give me no
problems, but I get segmentation faults:

,
| load intel/2018.2 Set Intel compilers (LICENSE NEEDED! Please, contact support if you have any issue with license)
| /scratch/slurm/job1182830/slurm_script: line 59: unalias: despacktivate: not found
| [s01r2b22:26669] MCW rank 0 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
| [s01r2b23:20286] MCW rank 1 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
| [s01r2b22:26681:0] Caught signal 11 (Segmentation fault)
| [s01r2b23:20292:0] Caught signal 11 (Segmentation fault)
|  backtrace 
|  2 0x001c mxm_handle_error()  /var/tmp/OFED_topdir/BUILD/mxm-3.6.3102/src/mxm/util/debug/debug.c:641
|  3 0x0010055c mxm_error_signal_handler()  /var/tmp/OFED_topdir/BUILD/mxm-3.6.3102/src/mxm/util/debug/debug.c:616
|  4 0x00034950 killpg()  ??:0
|  5 0x000a7d41 PMPI_Comm_rank()  ??:0
|  6 0x00402e56 main()  ??:0
|  7 0x000206e5 __libc_start_main()  ??:0
|  8 0x00402ca9 _start()  /home/abuild/rpmbuild/BUILD/glibc-2.22/csu/../sysdeps/x86_64/start.S:118
| ===
|  backtrace 
|  2 0x001c mxm_handle_error()  /var/tmp/OFED_topdir/BUILD/mxm-3.6.3102/src/mxm/util/debug/debug.c:641
|  3 0x0010055c mxm_error_signal_handler()  /var/tmp/OFED_topdir/BUILD/mxm-3.6.3102/src/mxm/util/debug/debug.c:616
|  4 0x00034950 killpg()  ??:0
|  5 0x000a7d41 PMPI_Comm_rank()  ??:0
|  6 0x00402e56 main()  ??:0
|  7 0x000206e5 __libc_start_main()  ??:0
|  8 0x00402ca9 _start()  /home/abuild/rpmbuild/BUILD/glibc-2.22/csu/../sysdeps/x86_64/start.S:118
| ===
`


Any idea how I could try to debug/solve this?
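
For what it's worth, the first things I plan to check are whether the
Intel-compiled binaries really pick up my own OpenMPI/UCX at runtime
(rather than the cluster ones), and whether forcing the UCX PML, so that
the old MXM path is never taken, makes any difference:

,
| which mpicc mpirun
| ldd ./osu_latency | grep -Ei 'mpi|ucx|mxm'     # which libraries are resolved at run time
| mpirun -n 2 --mca pml ucx osu_latency          # force the UCX PML
`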

Thanks,
-- 
Ángel de Vicente

Tel.: +34 922 605 747
Web.: http://research.iac.es/proyecto/polmag/


Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-03-01 Thread Joshua Ladd via users
These are very, very old versions of UCX and HCOLL installed in your
environment. Also, MXM was deprecated years ago in favor of UCX. What
version of MOFED is installed (run ofed_info -s)? What HCA generation is
present (run ibstat)?

Josh

On Tue, Mar 1, 2022 at 6:42 AM Angel de Vicente via users <
users@lists.open-mpi.org> wrote:

> Hello,
>
> John Hearns via users  writes:
>
> > Stupid answer from me. If latency/bandwidth numbers are bad then check
> > that you are really running over the interface that you think you
> > should be. You could be falling back to running over Ethernet.
>
> I'm quite out of my depth here, so all answers are helpful, as I might have
> skipped something very obvious.
>
> In order to try and avoid the possibility of falling back to running
> over Ethernet, I submitted the job with:
>
> mpirun -n 2 --mca btl ^tcp osu_latency
>
> which gives me the following error:
>
> ,
> | At least one pair of MPI processes are unable to reach each other for
> | MPI communications.  This means that no Open MPI device has indicated
> | that it can be used to communicate between these processes.  This is
> | an error; Open MPI requires that all MPI processes be able to reach
> | each other.  This error can sometimes be the result of forgetting to
> | specify the "self" BTL.
> |
> |   Process 1 ([[37380,1],1]) is on host: s01r1b20
> |   Process 2 ([[37380,1],0]) is on host: s01r1b19
> |   BTLs attempted: self
> |
> | Your MPI job is now going to abort; sorry.
> `
>
> This is certainly not happening when I use the "native" OpenMPI,
> etc. provided in the cluster. I have not knowingly specified anywhere
> not to support "self", so I have no clue what might be going on, as I
> assumed that "self" was always built for OpenMPI.
>
> Any hints on what (and where) I should look for?
>
> Many thanks,
> --
> Ángel de Vicente
>
> Tel.: +34 922 605 747
> Web.: http://research.iac.es/proyecto/polmag/
>
>


Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-03-01 Thread Angel de Vicente via users
Hello,

John Hearns via users  writes:

> Stupid answer from me. If latency/bandwidth numbers are bad then check
> that you are really running over the interface that you think you
> should be. You could be falling back to running over Ethernet.

I'm quite out of my depth here, so all answers are helpful, as I might have
skipped something very obvious.

In order to try and avoid the possibility of falling back to running
over Ethernet, I submitted the job with:

mpirun -n 2 --mca btl ^tcp osu_latency

which gives me the following error:

,
| At least one pair of MPI processes are unable to reach each other for
| MPI communications.  This means that no Open MPI device has indicated
| that it can be used to communicate between these processes.  This is
| an error; Open MPI requires that all MPI processes be able to reach
| each other.  This error can sometimes be the result of forgetting to
| specify the "self" BTL.
| 
|   Process 1 ([[37380,1],1]) is on host: s01r1b20
|   Process 2 ([[37380,1],0]) is on host: s01r1b19
|   BTLs attempted: self
| 
| Your MPI job is now going to abort; sorry.
`

This is certainly not happening when I use the "native" OpenMPI,
etc. provided in the cluster. I have not knowingly disabled "self"
anywhere, so I have no clue what might be going on, as I assumed that
the "self" BTL was always built with OpenMPI.

Any hints on what I should look for (and where)?
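
In case it helps, this is what I was planning to look at next, assuming
the usual MCA verbosity parameters apply to this build:

,
| ompi_info | grep btl                    # which BTL components were actually built
| mpirun -n 2 --mca btl_base_verbose 100 osu_latency 2>&1 | grep -i btl
`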

Many thanks,
-- 
Ángel de Vicente

Tel.: +34 922 605 747
Web.: http://research.iac.es/proyecto/polmag/


Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-03-01 Thread John Hearns via users
Stupid answer from me. If latency/bandwidth numbers are bad then check that
you are really running over the interface that you think you should be. You
could be falling back to running over Ethernet.
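
A crude way to check, if you have shell access to the compute nodes, is
to look at the InfiniBand port counters before and after a run (the path
below assumes the standard sysfs layout):

,
| cat /sys/class/infiniband/*/ports/*/counters/port_xmit_data
| # if this does not grow while the benchmark runs, the traffic is not going over IB
`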

On Mon, 28 Feb 2022 at 20:10, Angel de Vicente via users <
users@lists.open-mpi.org> wrote:

> Hello,
>
> "Jeff Squyres (jsquyres)"  writes:
>
> > I'd recommend against using Open MPI v3.1.0 -- it's quite old.  If you
> > have to use Open MPI v3.1.x, I'd at least suggest using v3.1.6, which
> > has all the rolled-up bug fixes on the v3.1.x series.
> >
> > That being said, Open MPI v4.1.2 is the most current.  Open MPI v4.1.2
> does
> > restrict which versions of UCX it uses because there are bugs in the
> older
> > versions of UCX.  I am not intimately familiar with UCX -- you'll need
> to ask
> > Nvidia for support there -- but I was under the impression that it's
> just a
> > user-level library, and you could certainly install your own copy of UCX
> to use
> > with your compilation of Open MPI.  I.e., you're not restricted to
> whatever UCX
> > is installed in the cluster system-default locations.
>
> I did follow your advice, so I compiled my own version of UCX (1.11.2)
> and OpenMPI v4.1.1, but for some reason the latency / bandwidth numbers
> are really bad compared to the previous ones, so something is wrong, but
> not sure how to debug it.
>
> > I don't know why you're getting MXM-specific error messages; those don't
> appear
> > to be coming from Open MPI (especially since you configured Open MPI with
> > --without-mxm).  If you can upgrade to Open MPI v4.1.2 and the latest
> UCX, see
> > if you are still getting those MXM error messages.
>
> In this latest attempt, yes, the MXM error messages are still there.
>
> Cheers,
> --
> Ángel de Vicente
>
> Tel.: +34 922 605 747
> Web.: http://research.iac.es/proyecto/polmag/
>
>


Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-02-28 Thread Angel de Vicente via users
Hello,

"Jeff Squyres (jsquyres)"  writes:

> I'd recommend against using Open MPI v3.1.0 -- it's quite old.  If you
> have to use Open MPI v3.1.x, I'd at least suggest using v3.1.6, which
> has all the rolled-up bug fixes on the v3.1.x series.
>
> That being said, Open MPI v4.1.2 is the most current.  Open MPI v4.1.2 does
> restrict which versions of UCX it uses because there are bugs in the older
> versions of UCX.  I am not intimately familiar with UCX -- you'll need to ask
> Nvidia for support there -- but I was under the impression that it's just a
> user-level library, and you could certainly install your own copy of UCX to 
> use
> with your compilation of Open MPI.  I.e., you're not restricted to whatever 
> UCX
> is installed in the cluster system-default locations.

I did follow your advice and compiled my own version of UCX (1.11.2)
and OpenMPI v4.1.1, but for some reason the latency/bandwidth numbers
are really bad compared to the previous ones, so something is wrong, but
I'm not sure how to debug it.
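
One thing I plan to try, assuming the usual UCX environment variables
apply to 1.11.2, is restricting UCX to the IB and shared-memory
transports so that any silent fallback to TCP shows up as an error:

,
| export UCX_TLS=rc,sm,self      # fail rather than silently fall back to tcp
| export UCX_LOG_LEVEL=info      # UCX reports the transports it selects
| mpirun -n 2 osu_latency
`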

> I don't know why you're getting MXM-specific error messages; those don't 
> appear
> to be coming from Open MPI (especially since you configured Open MPI with
> --without-mxm).  If you can upgrade to Open MPI v4.1.2 and the latest UCX, see
> if you are still getting those MXM error messages.

In this latest attempt, yes, the MXM error messages are still there.

Cheers,
-- 
Ángel de Vicente

Tel.: +34 922 605 747
Web.: http://research.iac.es/proyecto/polmag/


Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-02-23 Thread Jeff Squyres (jsquyres) via users
I'd recommend against using Open MPI v3.1.0 -- it's quite old.  If you have to 
use Open MPI v3.1.x, I'd at least suggest using v3.1.6, which has all the 
rolled-up bug fixes on the v3.1.x series.

That being said, Open MPI v4.1.2 is the most current.  Open MPI v4.1.2 does 
restrict which versions of UCX it uses because there are bugs in the older 
versions of UCX.  I am not intimately familiar with UCX -- you'll need to ask 
Nvidia for support there -- but I was under the impression that it's just a 
user-level library, and you could certainly install your own copy of UCX to use 
with your compilation of Open MPI.  I.e., you're not restricted to whatever UCX 
is installed in the cluster system-default locations.
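
Roughly speaking, that would look something like the following (version
numbers and install paths here are just placeholders):

,
| # build a private UCX
| cd ucx-1.12.0 && ./contrib/configure-release --prefix=$HOME/sw/ucx && make -j install
| # build Open MPI against it
| cd ../openmpi-4.1.2 && ./configure --prefix=$HOME/sw/ompi --with-ucx=$HOME/sw/ucx && make -j install
`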

I don't know why you're getting MXM-specific error messages; those don't appear 
to be coming from Open MPI (especially since you configured Open MPI with 
--without-mxm).  If you can upgrade to Open MPI v4.1.2 and the latest UCX, see 
if you are still getting those MXM error messages.

--
Jeff Squyres
jsquy...@cisco.com


From: users  on behalf of Angel de Vicente 
via users 
Sent: Friday, February 18, 2022 5:46 PM
To: Gilles Gouaillardet via users
Cc: Angel de Vicente
Subject: Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

Hello,

Gilles Gouaillardet via users  writes:

> Infiniband detection likely fails before checking expanded verbs.

> thanks for this. In the end, after playing a bit with different options,
I managed to install OpenMPI 3.1.0 OK in our cluster using UCX (I wanted
4.1.1, but that would not compile cleanly with the old version of UCX
that is installed in the cluster). The configure command line (as
reported by ompi_info) was:

,
|   Configure command line: 
'--prefix=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/openmpi-3.1.0-g5a7szwxcsgmyibqvwwavfkz5b4i2ym7'
|   '--enable-shared' '--disable-silent-rules'
|   '--disable-builtin-atomics' '--with-pmi=/usr'
|   
'--with-zlib=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/zlib-1.2.11-hrstx5ffrg4f4k3xc2anyxed3mmgdcoz'
|   '--without-knem' '--with-hcoll=/opt/mellanox/hcoll'
|   '--without-psm' '--without-ofi' '--without-cma'
|   '--with-ucx=/opt/ucx' '--without-fca'
|   '--without-mxm' '--without-verbs' '--without-xpmem'
|   '--without-psm2' '--without-alps' '--without-lsf'
|   '--without-sge' '--with-slurm' '--without-tm'
|   '--without-loadleveler' '--disable-memchecker'
|   
'--with-hwloc=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/hwloc-1.11.13-kpjkidab37wn25h2oyh3eva43ycjb6c5'
|   '--disable-java' '--disable-mpi-java'
|   '--without-cuda' '--enable-wrapper-rpath'
|   '--disable-wrapper-runpath' '--disable-mpi-cxx'
|   '--disable-cxx-exceptions'
|   
'--with-wrapper-ldflags=-Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib/gcc/x86_\
| 64-pc-linux-gnu/9.3.0
|   
-Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib64'
`


The versions that I'm using are:

gcc:   9.3.0
mxm:   3.6.3102  (though I configure OpenMPI --without-mxm)
hcoll: 3.8.1649
knem:  1.1.2.90mlnx2 (though I configure OpenMPI --without-knem)
ucx:   1.2.2947
slurm: 18.08.7


It looks like everything executes fine, but I have a couple of warnings,
and I'm not sure how much I should worry and what I could do about them:

1) Conflicting CPU frequencies detected:

[1645221586.038838] [s01r3b78:11041:0] sys.c:744  MXM  WARN  Conflicting CPU frequencies detected, using: 3151.41
[1645221585.740595] [s01r3b79:11484:0] sys.c:744  MXM  WARN  Conflicting CPU frequencies detected, using: 2998.76

2) Won't use knem. In a previous try, I was specifying --with-knem, but
I was getting this warning about not being able to open /dev/knem. I
guess our cluster is not properly configured w.r.t knem, so I built
OpenMPI again --without-knem, but I still get this message?

[1645221587.091122] [s01r3b74:9054 :0] shm.c:65   MXM  WARN  Could not open the KNEM device file at /dev/knem : No such file or directory. Won't use knem.
[1645221587.104807] [s01r3b76:8610 :0] shm.c:65   MXM  WARN  Could not open the KNEM device file at /dev/knem : No such file or directory. Won't use knem.


Any help/pointers appreciated. Many thanks,
--
Ángel de Vicente

Tel.: +34 922 605 747
Web.: http://research.iac.es/proye

Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-02-18 Thread Angel de Vicente via users
Hello,

Gilles Gouaillardet via users  writes:

> Infiniband detection likely fails before checking expanded verbs.

thanks for this. In the end, after playing a bit with different options,
I managed to install OpenMPI 3.1.0 OK in our cluster using UCX (I wanted
4.1.1, but that would not compile cleanly with the old version of UCX
that is installed in the cluster). The configure command line (as
reported by ompi_info) was:

,
|   Configure command line: '--prefix=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/openmpi-3.1.0-g5a7szwxcsgmyibqvwwavfkz5b4i2ym7'
|   '--enable-shared' '--disable-silent-rules'
|   '--disable-builtin-atomics' '--with-pmi=/usr'
|   '--with-zlib=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/zlib-1.2.11-hrstx5ffrg4f4k3xc2anyxed3mmgdcoz'
|   '--without-knem' '--with-hcoll=/opt/mellanox/hcoll'
|   '--without-psm' '--without-ofi' '--without-cma'
|   '--with-ucx=/opt/ucx' '--without-fca'
|   '--without-mxm' '--without-verbs' '--without-xpmem'
|   '--without-psm2' '--without-alps' '--without-lsf'
|   '--without-sge' '--with-slurm' '--without-tm'
|   '--without-loadleveler' '--disable-memchecker'
|   '--with-hwloc=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/hwloc-1.11.13-kpjkidab37wn25h2oyh3eva43ycjb6c5'
|   '--disable-java' '--disable-mpi-java'
|   '--without-cuda' '--enable-wrapper-rpath'
|   '--disable-wrapper-runpath' '--disable-mpi-cxx'
|   '--disable-cxx-exceptions'
|   '--with-wrapper-ldflags=-Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib/gcc/x86_64-pc-linux-gnu/9.3.0
|   -Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib64'
`


The versions that I'm using are:

gcc:   9.3.0
mxm:   3.6.3102  (though I configure OpenMPI --without-mxm)
hcoll: 3.8.1649
knem:  1.1.2.90mlnx2 (though I configure OpenMPI --without-knem)
ucx:   1.2.2947
slurm: 18.08.7


It looks like everything executes fine, but I have a couple of warnings,
and I'm not sure how much I should worry and what I could do about them:

1) Conflicting CPU frequencies detected:

[1645221586.038838] [s01r3b78:11041:0] sys.c:744  MXM  WARN  Conflicting CPU frequencies detected, using: 3151.41
[1645221585.740595] [s01r3b79:11484:0] sys.c:744  MXM  WARN  Conflicting CPU frequencies detected, using: 2998.76
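
My guess is that this just reflects CPU frequency scaling on the nodes;
something like the following on a compute node should show whether the
cores really do report different frequencies (assuming the standard
cpufreq sysfs layout):

,
| cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq | sort -n | uniq -c
`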

2) Won't use knem. In a previous try, I was specifying --with-knem, but
I was getting this warning about not being able to open /dev/knem. I
guess our cluster is not properly configured w.r.t knem, so I built
OpenMPI again --without-knem, but I still get this message?

[1645221587.091122] [s01r3b74:9054 :0] shm.c:65   MXM  WARN  Could not open the KNEM device file at /dev/knem : No such file or directory. Won't use knem.
[1645221587.104807] [s01r3b76:8610 :0] shm.c:65   MXM  WARN  Could not open the KNEM device file at /dev/knem : No such file or directory. Won't use knem.
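
To confirm my guess that knem is simply not available on the compute
nodes, a quick check I can run there is:

,
| lsmod | grep knem     # is the kernel module loaded?
| ls -l /dev/knem       # does the device node exist, and with what permissions?
`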


Any help/pointers appreciated. Many thanks,
-- 
Ángel de Vicente

Tel.: +34 922 605 747
Web.: http://research.iac.es/proyecto/polmag/


Re: [OMPI users] Trouble compiling OpenMPI with Infiniband support

2022-02-17 Thread Gilles Gouaillardet via users
Angel,

Infiniband detection likely fails before checking expanded verbs.
Please compress and post the full configure output.
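
For instance (the file name is just an example):

,
| ./configure ... 2>&1 | tee configure-output.txt
| gzip configure-output.txt
`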


Cheers,

Gilles

On Fri, Feb 18, 2022 at 12:02 AM Angel de Vicente via users <
users@lists.open-mpi.org> wrote:

> Hi,
>
> I'm trying to compile the latest OpenMPI version with Infiniband support
> in our local cluster, but didn't get very far (since I'm installing this
> via Spack, I also asked in their support group).
>
> I'm doing the installation via Spack, which is issuing the following
> configure step (see the options given for --with-knem, --with-hcoll and
> --with-mxm):
>
> ,
> | configure'
> | '--prefix=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/openmpi-4.1.1-jsvbusyjgthr2d6oyny5klt62gm6ma2u'
> | '--enable-shared' '--disable-silent-rules' '--disable-builtin-atomics'
> | '--enable-static' '--without-pmi'
> | '--with-zlib=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/zlib-1.2.11-hrstx5ffrg4f4k3xc2anyxed3mmgdcoz'
> | '--enable-mpi1-compatibility' '--with-knem=/opt/knem-1.1.2.90mlnx2'
> | '--with-hcoll=/opt/mellanox/hcoll' '--without-psm' '--without-ofi'
> | '--without-cma' '--without-ucx' '--without-fca'
> | '--with-mxm=/opt/mellanox/mxm' '--without-verbs' '--without-xpmem'
> | '--without-psm2' '--without-alps' '--without-lsf' '--without-sge'
> | '--without-slurm' '--without-tm' '--without-loadleveler'
> | '--disable-memchecker'
> | '--with-libevent=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/libevent-2.1.12-yd5l4tjmnigv6dqlv5afpn4zc6ekdchc'
> | '--with-hwloc=/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-9.3.0/hwloc-2.6.0-bfnt4g3givflydpe5d2iglyupgbzxbfn'
> | '--disable-java' '--disable-mpi-java' '--without-cuda'
> | '--enable-wrapper-rpath' '--disable-wrapper-runpath' '--disable-mpi-cxx'
> | '--disable-cxx-exceptions'
> | '--with-wrapper-ldflags=-Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib/gcc/x86_64-pc-linux-gnu/9.3.0
> | -Wl,-rpath,/storage/projects/can30/angelv/spack/opt/spack/linux-sles12-sandybridge/gcc-7.2.0/gcc-9.3.0-ghr2jekwusoa4zip36xsa3okgp3bylqm/lib64'
> `
>
> Later on in the configuration phase I see:
>
> ,
> | --- MCA component btl:openib (m4 configuration macro)
> | checking for MCA component btl:openib compile mode... static
> | checking whether expanded verbs are available... yes
> | checking whether IBV_EXP_ATOMIC_HCA_REPLY_BE is declared... yes
> | checking whether IBV_EXP_QP_CREATE_ATOMIC_BE_REPLY is declared... yes
> | checking whether ibv_exp_create_qp is declared... yes
> | checking whether ibv_exp_query_device is declared... yes
> | checking whether IBV_EXP_QP_INIT_ATTR_ATOMICS_ARG is declared... yes
> | checking for struct ibv_exp_device_attr.ext_atom... yes
> | checking for struct ibv_exp_device_attr.exp_atomic_cap... yes
> | checking if MCA component btl:openib can compile... no
> `
>
> This is the first time I have tried to compile OpenMPI this way, and I
> get a bit confused about what each bit is doing, but it looks like
> configure goes through the motions of building btl:openib, and then for
> some reason decides it cannot compile it.
> Any suggestions/pointers?
>
> Many thanks,
> --
> Ángel de Vicente
>
> Tel.: +34 922 605 747
> Web.: http://research.iac.es/proyecto/polmag/
>
>