Re: [OMPI users] psec warning when launching with srun

2023-05-20 Thread christof.koehler--- via users
Hello Gilles,

thank you very much for the prompt patch. 

I can confirm that configure now prefers the external PMIx, and that the
munge warnings and PMIx errors we observed are gone. An MPI hello world
runs successfully with both srun --mpi=pmix and srun --mpi=pmi2.
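
For reference, the test was a completely standard MPI hello world, built
and launched roughly like this (file names and task counts are arbitrary):

    $ mpicc hello_mpi.c -o hello_mpi
    $ srun --mpi=pmix -n 4 ./hello_mpi
    $ srun --mpi=pmi2 -n 4 ./hello_mpi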

I also noticed that configure complained loudly about the missing
external libevent (i.e., the libevent-devel package), but did not
complain at all that an external hwloc (hwloc-devel) was missing as well.
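
In case it helps anyone else reading along: my understanding of the 4.1.x
configure options is that both external copies can be requested
explicitly, so that configure aborts loudly if either -devel package is
absent. Untested in this exact combination:

    ./configure --with-libevent=external --with-hwloc=external ...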

Best Regards

Christof




Re: [OMPI users] psec warning when launching with srun

2023-05-20 Thread Gilles Gouaillardet via users
Christof,

Open MPI switching to the internal PMIx is a bug I addressed in
https://github.com/open-mpi/ompi/pull/11704

Feel free to manually download and apply the patch; you will then need
recent autotools in order to run
./autogen.pl --force
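
For example, something along these lines should work (the .patch URL is
GitHub's usual convention for fetching a pull request as a patch; adjust
the paths to your tree):

    curl -LO https://github.com/open-mpi/ompi/pull/11704.patch
    cd openmpi-4.1.5
    patch -p1 < ../11704.patch
    ./autogen.pl --force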

Another option is to manually edit the generated configure file.

Look for the following snippet:

    # Final check - if they didn't point us explicitly at an external version
    # but we found one anyway, use the internal version if it is higher
    if test "$opal_external_pmix_version" != "internal" && (test -z "$with_pmix" || test "$with_pmix" = "yes")
    then :
      if test "$opal_external_pmix_version" != "3x"


and replace the last line with

      if test $opal_external_pmix_version_major -lt 3

With the original string comparison, an external PMIx that does not
identify itself as "3x" (for example, a 4.x release) is treated as
unusable and configure silently falls back to the internal copy; the
numeric test accepts any external PMIx with major version 3 or newer.


Cheers,

Gilles


Re: [OMPI users] psec warning when launching with srun

2023-05-20 Thread christof.koehler--- via users
Hello Z. Matthias Krawutschke,

On Fri, May 19, 2023 at 09:08:08PM +0200, Zhéxué M. Krawutschke wrote:
> Hello Christoph,
> what exactly is your problem with OpenMPI and Slurm?
> Do you compile the products yourself? Which Linux distribution and version
> are you using?
> 
> If you compile the software yourself, could you please tell me what the 
> "configure" command looks like and which MUNGE version is in use? From the 
> distribution or compiled by yourself?
> 
> I would be very happy to take on this topic and help you. You can also reach 
> me at +49 176 67270992.
> Best regards from Berlin

please refer to my first mail in this thread (especially its end), which
is available here:
https://www.mail-archive.com/users@lists.open-mpi.org/msg35141.html

I believe it contains the relevant information you are requesting. The
second mail, to which you are replying, was just additional information.
My apologies if this led to confusion.

Please let me know if any relevant information is missing from my first
email. At the bottom of this email I include the ompi_info output as a
further addendum.

To summarize: I would like to understand where the munge warning and PMIx
error described in the first email (and in the GitHub issue linked there)
come from. The explanation given in the GitHub issue does not appear to
be correct, since all munge libraries are available everywhere. At the
moment it looks to me as if Open MPI's configure erroneously decides to
build and use the internal PMIx instead of the (presumably) newer
externally available PMIx, leading to the launcher problems with srun.
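
For what it is worth, this is how I check which PMIx component a build
ended up with; my understanding is that 4.1.x builds the pmix3x component
for the internal copy and an ext3x component for an external PMIx (please
correct me if that is wrong):

    ompi_info --parsable | grep -i pmix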


Best Regards

Christof

                 Package: Open MPI root@admin.service Distribution
                Open MPI: 4.1.5
  Open MPI repo revision: v4.1.5
   Open MPI release date: Feb 23, 2023
                Open RTE: 4.1.5
  Open RTE repo revision: v4.1.5
   Open RTE release date: Feb 23, 2023
                    OPAL: 4.1.5
      OPAL repo revision: v4.1.5
       OPAL release date: Feb 23, 2023
                 MPI API: 3.1.0
            Ident string: 4.1.5
                  Prefix: /cluster/mpi/openmpi/4.1.5/gcc-11.3.1
 Configured architecture: x86_64-pc-linux-gnu
          Configure host: admin.service
           Configured by: root
           Configured on: Wed May 17 18:45:42 UTC 2023
          Configure host: admin.service
  Configure command line: '--enable-mpi1-compatibility'
                          '--enable-orterun-prefix-by-default'
                          '--with-ofi=/cluster/libraries/libfabric/1.18.0/'
                          '--with-slurm' '--with-pmix'
                          '--with-pmix-libdir=/usr/lib64' '--with-pmi'
                          '--with-pmi-libdir=/usr/lib64'
                          '--prefix=/cluster/mpi/openmpi/4.1.5/gcc-11.3.1'
                Built by: root
                Built on: Wed May 17 06:48:36 PM UTC 2023
              Built host: admin.service
              C bindings: yes
            C++ bindings: no
             Fort mpif.h: yes (all)
            Fort use mpi: yes (full: ignore TKR)
       Fort use mpi size: deprecated-ompi-info-value
        Fort use mpi_f08: yes
 Fort mpi_f08 compliance: The mpi_f08 module is available, but due to
                          limitations in the gfortran compiler and/or Open
                          MPI, does not support the following: array
                          subsections, direct passthru (where possible) to
                          underlying Open MPI's C functionality
  Fort mpi_f08 subarrays: no
           Java bindings: no
  Wrapper compiler rpath: runpath
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
  C compiler family name: GNU
      C compiler version: 11.3.1
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
           Fort compiler: gfortran
       Fort compiler abs: /usr/bin/gfortran
         Fort ignore TKR: yes (!GCC$ ATTRIBUTES NO_ARG_CHECK ::)
   Fort 08 assumed shape: yes
      Fort optional args: yes
          Fort INTERFACE: yes
    Fort ISO_FORTRAN_ENV: yes
       Fort STORAGE_SIZE: yes
      Fort BIND(C) (all): yes
      Fort ISO_C_BINDING: yes
 Fort SUBROUTINE BIND(C): yes
       Fort TYPE,BIND(C): yes
 Fort T,BIND(C,name="a"): yes
            Fort PRIVATE: yes
          Fort PROTECTED: yes
           Fort ABSTRACT: yes
       Fort ASYNCHRONOUS: yes
          Fort PROCEDURE: yes
         Fort USE...ONLY: yes
           Fort C_FUNLOC: yes
 Fort f08 using wrappers: yes
         Fort MPI_SIZEOF: yes
             C profiling: yes
           C++ profiling: no
   Fort mpif.h profiling: yes
  Fort use mpi profiling: yes
   Fort use mpi_f08 prof: yes
          C++ exceptions: no
          Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support:
                          yes, OMPI progress: no, ORTE progress: yes,
                          Event lib: yes)
           Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: yes
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
              dl support: yes
   Heterogeneous support: no
 mpirun default --prefix: yes
       MPI_WTIME support: native
     Symbol vis. support: yes
   Host topology support: yes
            IPv6 support: no
      MPI1 compatibility: yes