Re: [OMPI packagers] Shift focus to external libevent, hwloc, pmix

2018-07-11 Thread Geoffrey Paulsen
I am HOPING it won't, but we have not done the analysis yet to determine if it's safe to NOT rev the .so version.  Best to assume it WILL rev.  We should know more before the end of the month.
---
Geoffrey Paulsen
Software Engineer, IBM Spectrum MPI
Phone: 720-349-2832
Email: gpaul...@us.ibm.com
www.ibm.com
 
 
- Original message -
From: "Jeff Squyres (jsquyres)"
To: Packagers for the Open MPI software
Cc: Geoff Paulsen, Howard Pritchard
Subject: Re: [OMPI packagers] Shift focus to external libevent, hwloc, pmix
Date: Wed, Jul 11, 2018 10:59 AM

On Jul 11, 2018, at 3:56 AM, Alastair McKinstry wrote:
> For the v4.0 release, is it expected that the major library version
> will change again?  i.e. from so.40 currently on most libraries to so.50?

I *believe* we have some incompatibilities coming, but I don't remember for sure.

Geoff / Howard: can you comment?

--
Jeff Squyres
jsquy...@cisco.com
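(For packagers who want to check the answer empirically once a build is available, the SONAME recorded in the installed library shows whether the .so version was revved. A rough sketch; the library path is purely illustrative and should point at your actual install prefix:

    # Print the SONAME of the installed MPI library (path is an example).
    objdump -p /usr/lib64/libmpi.so | grep SONAME
    # or equivalently:
    readelf -d /usr/lib64/libmpi.so | grep SONAME

Comparing the SONAME between the old and new builds is enough to tell whether downstream packages need a rebuild.)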
 

[OMPI packagers] Announcing Open MPI v4.0.0rc2

2018-09-22 Thread Geoffrey Paulsen
The second release candidate for the Open MPI v4.0.0 release has been built and will be available tonight at:
https://www.open-mpi.org/software/ompi/v4.0/
 
Major differences from v4.0.0rc1 include: 
- Removed support for SCIF.
- Enable use of CUDA allocated buffers for OMPIO.
- Fix a problem with ORTE not reporting error messages if an application
  terminated normally but exited with non-zero error code.  Thanks to
  Emre Brookes for reporting.
All major differences from v3.1.x include:

 
- OSHMEM updated to the OpenSHMEM 1.4 API.
- Do not build Open SHMEM layer when there are no SPMLs available.
  Currently, this means the Open SHMEM layer will only build if
  a MXM or UCX library is found.
- A UCX BTL was added for enhanced MPI RMA support using UCX.
- With this release, the OpenIB BTL now only supports iWarp and RoCE
  by default.
- Updated internal HWLOC to 2.0.1.
- Updated internal PMIx to 3.0.1.
- Change the priority for selecting external versus internal HWLOC
  and PMIx packages to build.  Starting with this release, configure
  by default selects available external HWLOC and PMIx packages over
  the internal ones (see the configure sketch after this list).
- Updated internal ROMIO to 3.2.1.
- Removed support for the MXM MTL.
- Removed support for SCIF.
- Improved CUDA support when using UCX.
- Enable use of CUDA allocated buffers for OMPIO.
- Improved support for two phase MPI I/O operations when using OMPIO.
- Added support for Software-based Performance Counters, see
  https://github.com/davideberius/ompi/wiki/How-to-Use-Software-Based-Performance-Counters-(SPCs)-in-Open-MPI
- Various improvements to MPI RMA performance when using RDMA
  capable interconnects.
- Update memkind component to use the memkind 1.6 public API.
- Fix problems with use of newer map-by mpirun options.  Thanks to
  Tony Reina for reporting.
- Fix rank-by algorithms to properly rank by object and span.
- Allow for running as root if two environment variables are set.
  Requested by Axel Huebl.
- Fix a problem with building the Java bindings when using Java 10.
  Thanks to Bryce Glover for reporting.
- Fix a problem with ORTE not reporting error messages if an application
  terminated normally but exited with non-zero error code.  Thanks to
  Emre Brookes for reporting.
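For packagers following the external-dependency shift discussed earlier on this list, a minimal configure sketch against system-provided packages. The prefixes below are illustrative and assume distro hwloc, PMIx, and libevent development packages are already installed; adjust paths to your layout:

    # Hypothetical build using external dependencies instead of the
    # bundled copies (paths are examples only).
    ./configure --prefix=/opt/openmpi-4.0.0 \
        --with-hwloc=/usr \
        --with-pmix=/usr \
        --with-libevent=/usr
    make -j8 all
    make install

Checking config.log (or the configure summary) confirms which copies were actually selected.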


[OMPI packagers] Announcing Open MPI v4.0.3

2020-03-04 Thread Geoffrey Paulsen
The Open MPI community is pleased to announce the release of Open MPI v4.0.3.
Now available for download from the Open MPI Website
(https://www.open-mpi.org/software/ompi/v4.0/).

Changes to v4.0.3 since v4.0.2 include:
4.0.3 -- March, 2020
--------------------
- Update embedded PMIx to 3.1.5
- Add support for Mellanox ConnectX-6.
- Fix an issue in OpenMPI IO when using shared file pointers.
  Thanks to Romain Hild for reporting.
- Fix a problem with Open MPI using a previously installed
  Fortran mpi module during compilation.  Thanks to Marcin
  Mielniczuk for reporting.
- Fix a problem with Fortran compiler wrappers ignoring use of
  disable-wrapper-runpath configure option.  Thanks to David
  Shrader for reporting.
- Fixed an issue with trying to use mpirun on systems where neither
  ssh nor rsh is installed.
- Address some problems found when using XPMEM for intra-node message
  transport.
- Improve dimensions returned by MPI_Dims_create for certain
  cases.  Thanks to @aw32 for reporting.
- Fix an issue when sending messages larger than 4GB. Thanks to
  Philip Salzmann for reporting this issue.
- Add ability to specify alternative module file path using
  Open MPI's RPM spec file.  Thanks to @jschwartz-cray for reporting.
- Clarify use of --with-hwloc configuration option in the README.
  Thanks to Marcin Mielniczuk for raising this documentation issue.
- Fix an issue with shmem_atomic_set.  Thanks to Sameh Sharkawi for reporting.
- Fix a problem with MPI_Neighbor_alltoall(v,w) for cartesian communicators
  with cyclic boundary conditions.  Thanks to Ralph Rabenseifner and
  Tony Skjellum for reporting.
- Fix an issue using Open MPIO on 32 bit systems.  Thanks to
  Orion Poplawski for reporting.
- Fix an issue with NetCDF test deadlocking when using the vulcan
  Open MPIO component.  Thanks to Orion Poplawski for reporting.
- Fix an issue with the mpi_yield_when_idle parameter being ignored
  when set in the Open MPI MCA parameter configuration file (see the
  parameter-file sketch after this list).
  Thanks to @iassiour for reporting.
- Address an issue with Open MPIO when writing/reading more than 2GB
  in an operation.  Thanks to Richard Warren for reporting.
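As a companion to the mpi_yield_when_idle item above, a minimal sketch of setting that parameter in the per-user MCA parameter file. The value shown is only an example; the file path is the standard per-user location:

    # $HOME/.openmpi/mca-params.conf -- per-user MCA parameter file
    # Yield the processor when a process is idle instead of busy-polling.
    mpi_yield_when_idle = 1

With the 4.0.3 fix, settings made this way are honored the same as values passed via "--mca" on the mpirun command line.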


[OMPI packagers] Open MPI 4.0.5 released

2020-08-26 Thread Geoffrey Paulsen
The Open MPI community is pleased to announce the release of Open MPI
v4.0.5.
Now available for download from the Open MPI Website -
https://www.open-mpi.org/software/ompi/v4.0/
  4.0.5 -- August, 2020
  ---------------------
  - Fix a problem with MPI RMA compare and swap operations.  Thanks
    to Wojciech Chlapek for reporting.
  - Disable binding of MPI processes to system resources by Open MPI
    if an application is launched using SLURM's srun command (see the
    binding check after this list).
  - Disable building of the Fortran mpi_f08 module when configuring
    Open MPI with default 8 byte Fortran integer size.  Thanks to
    @ahcien for reporting.
  - Fix a problem with mpirun when the --map-by option is used.
    Thanks to Wenbin Lyu for reporting.
  - Fix some issues with MPI one-sided operations uncovered using Global
    Arrays regression test-suite.  Thanks to @bjpalmer for reporting.
  - Fix a problem with make check when using the PGI compiler.  Thanks to
    Carl Ponder for reporting.
  - Fix a problem with MPI_FILE_READ_AT_ALL that could lead to application
    hangs under certain circumstances.  Thanks to Scot Breitenfeld for
    reporting.
  - Fix a problem building C++ applications with newer versions of GCC.
    Thanks to Constantine Khrulev for reporting.
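For the srun-related binding change above, one quick way to sanity-check binding behavior is mpirun's --report-bindings option. A rough sketch; "hostname" is just a stand-in for any executable:

    # Show where Open MPI binds each process when launched via mpirun;
    # when launched via srun, 4.0.5 now leaves binding to SLURM instead.
    mpirun --report-bindings -np 4 hostname

Comparing the mpirun output against an srun launch of the same job makes the new division of responsibility visible.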


[OMPI packagers] Open MPI v5.0.0rc1 ready for testing.

2021-10-07 Thread Geoffrey Paulsen
The first release candidate for the Open MPI v5.0.0 release is posted at: https://www.open-mpi.org/software/ompi/v5.0/
 
Major changes include:
v5.0.0rc1 -- September, 2021
----------------------------
- ORTE, the underlying OMPI launcher, has been removed and replaced
  with PRTE.
- Reworked how Open MPI integrates with 3rd party packages.
  The decision was made to stop building 3rd-party packages
  such as Libevent, HWLOC, PMIx, and PRRTE as MCA components
  and instead 1) start relying on external libraries whenever
  possible and 2) Open MPI builds the 3rd party libraries (if needed)
  as independent libraries, rather than linked into libopen-pal.
- Update to use PMIx v4.1.1rc2.
- Update to use PRRTE v2.0.1rc2.
- Change the default component build behavior to prefer building
  components as part of libmpi.so instead of individual DSOs.
- Remove pml/yalla, mxm, mtl/psm, and ikrit components.
- Remove all vestiges of the C/R support.
- Various ROMIO v3.4.1 updates.
- Use Pandoc to generate manpages.
- 32 bit atomics are now only supported via C11 compliant compilers.
- Do not build Open SHMEM layer when there are no SPMLs available.
  Currently, this means the Open SHMEM layer will only build if
  the UCX library is found.
- Fix rank-by algorithms to properly rank by object and span.
- Updated the "-mca pml" option to only accept one pml, not a list
  (see the launch sketch after this list).
- vprotocol/pessimist: Updated to support MPI_THREAD_MULTIPLE.
- btl/tcp: Updated to use reachability and graph solving for global
  interface matching.  This has been shown to improve MPI_Init()
  performance under btl/tcp.
- fs/ime: Fixed compilation errors due to missing header inclusion.
  Thanks to Sylvain Didelot for finding and fixing this issue.
- Fixed bug where MPI_Init_thread can give wrong error messages by
  delaying error reporting until all infrastructure is running.
- Atomics support removed: S390/s390x, Sparc v9, ARMv4 and ARMv5 CMA
  support.
- autogen.pl now supports a "-j" option to run multi-threaded.
  Users can also use environment variable "AUTOMAKE_JOBS".
- PMI support has been removed for Open MPI apps.
- Legacy btl/sm has been removed, and replaced with btl/vader, which
  was renamed to "btl/sm".
- Update btl/sm to not use CMA in user namespaces.
- C++ bindings have been removed.
- The "--am" and "--amca" options have been deprecated.
- opal/mca/threads framework added.  Currently supports
  argobots, qthreads, and pthreads.  See the --with-threads=x option
  in configure.
- Various README.md fixes - thanks to: Yixin Zhang, Samuel Cho,
  rlangefe, Alex Ross, Sophia Fang, mitchelltopaloglu, Evstrife,
  and Hao Tong for their contributions.
- osc/pt2pt: Removed.  Users can use osc/rdma + btl/tcp
  for OSC support using TCP, or other providers.
- Open MPI now links -levent_core instead of -levent.
- MPI-4: Added ERRORS_ABORT infrastructure.
- common/cuda docs: Various fixes.  Thanks to Simon Byrne for finding
  and fixing.
- osc/ucx: Add support for acc_single_intrinsic.
- Fixed buildrpm.sh "-r" option used for RPM options specification.
  Thanks to John K. McIver III for reporting and fixing.
- configure: Added support for setting the wrapper C compiler.
  Adds new option "--with-wrapper-cc=".
- mpi_f08: Fixed Fortran-8-byte-INTEGER vs. C-4-byte-int issue.
  Thanks to @ahaichen for reporting the bug.
- MPI-4: Added support for 'initial error handler'.
- opal/thread/tsd: Added thread-specific-data (tsd) api.
- MPI-4: Added error handling for 'unbound' errors to MPI_COMM_SELF.
- Add missing MPI_Status conversion subroutines:
  MPI_Status_c2f08(), MPI_Status_f082c(), MPI_Status_f082f(),
  MPI_Status_f2f08() and the PMPI_* related subroutines.
- patcher: Removed the Linux component.
- opal/util: Fixed typo in error string.  Thanks to NARIBAYASHI Akira
  for finding and fixing the bug.
- fortran/use-mpi-f08: Generate PMPI bindings from the MPI bindings.
- Converted man pages to markdown.  Thanks to Fangcong Yin for their
  contribution to this effort.
- Fixed ompi_proc_world error string and some comments in pml/ob1.
  Thanks to Julien EMMANUEL for finding and fixing these issues.
- oshmem/tools/oshmem_info: Fixed Fortran keyword issue when
  compiling param.c.  Thanks to Pak Lui for finding and fixing the bug.
- autogen.pl: Patched libtool.m4 for OSX Big Sur.  Thanks to
  @fxcoudert for reporting the issue.
- Upgraded to HWLOC v2.4.0.
- Removed config/opal_check_pmi.m4.  Thanks to Zach Osman for the
  contribution.
- opal/atomics: Added load-linked, store-conditional atomics for
  AArch64.
- Fixed envvar names to OMPI_MCA_orte_precondition_transports.
  Thanks to Marisa Roman for the contribution.
- fcoll/two_phase: Removed the component.  All scenarios it was
  used for have been replaced.
- btl/uct: Bumped UCX allowed version to v1.9.x.
- ULFM Fault Tolerance has been added.  See README.FT.ULFM.md.
- Fixed a crash during CUDA initialization.  Thanks to Yaz Saito for
  finding and fixing the bug.
- Added CUDA support to the OFI MTL.
- ompio: Added atomicity support.
- Singleton comm spawn support has been fixed.
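Related to the "-mca pml" change above, a quick launch sketch for anyone testing rc1. The PML name, process count, and application name are placeholders:

    # v5.0.0 accepts exactly one PML here (e.g. ob1 or ucx), not a
    # comma-separated list as older releases tolerated.
    mpirun --mca pml ucx -np 4 ./your_mpi_app

Launches that previously passed a PML list will need to pick a single component when moving to v5.0.0.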