The first release candidate for the Open MPI v5.0.0 release is posted at:
Major changes include:

v5.0.0rc1 -- September, 2021
- ORTE, the underlying OMPI launcher, has been removed and replaced
  with PRRTE.
- Reworked how Open MPI integrates with 3rd-party packages.
  The decision was made to stop building 3rd-party packages
  such as Libevent, HWLOC, PMIx, and PRRTE as MCA components,
  and instead 1) rely on external installations of those libraries
  whenever possible, and 2) build the 3rd-party libraries (when
  needed) as independent libraries, rather than linking them into
  libopen-pal.
- Update to use PMIx v4.1.1rc2
- Update to use PRRTE v2.0.1rc2
- Changed the default component build behavior to prefer building
  components as part of the core libraries instead of as
  individual DSOs.
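For users who relied on the previous behavior, the build default can typically be overridden at configure time. A minimal sketch, assuming the usual Open MPI configure flags (check `./configure --help` for the exact options in this release):

```shell
# Build components as individual DSOs (the previous default):
./configure --enable-mca-dso

# Build components statically into the libraries (the new default):
./configure --enable-mca-static
```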
- Remove pml/yalla, mxm, mtl/psm, and ikrit components.
- Removed all vestiges of checkpoint/restart (C/R) support.
- Various ROMIO v3.4.1 updates.
- Use Pandoc to generate manpages
- 32-bit atomics are now supported only via C11-compliant compilers.
- Do not build the OpenSHMEM layer when there are no SPMLs available.
  Currently, this means the OpenSHMEM layer will only build if
  the UCX library is found.
- Fix rank-by algorithms to properly rank by object and span.
- Updated the "-mca pml" option to only accept one pml, not a list.
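As a sketch of the new behavior, exactly one PML must now be named; a comma-separated preference list is no longer accepted (`./my_app` is a placeholder application name):

```shell
# Valid: select exactly one PML
mpirun --mca pml ucx -np 4 ./my_app

# No longer accepted: a list of PMLs
# mpirun --mca pml ucx,ob1 -np 4 ./my_app
```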
- vprotocol/pessimist: Updated to support MPI_THREAD_MULTIPLE.
- btl/tcp: Updated to use reachability and graph solving for global
  interface matching. This has been shown to improve MPI_Init()
  performance under btl/tcp.
- fs/ime: Fixed compilation errors due to a missing header inclusion.
  Thanks to Sylvain Didelot <> for finding
  and fixing this issue.
- Fixed bug where MPI_Init_thread can give wrong error messages by
  delaying error reporting until all infrastructure is running.
- Atomics support removed: S390/s390x, SPARC v9, ARMv4, and ARMv5.
- autogen.pl now supports a "-j" option to run multi-threaded.
  Users can also use the environment variable "AUTOMAKE_JOBS".
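For developers building from a git clone, the parallel option can be used as follows (a sketch; `autogen.pl` is assumed from the AUTOMAKE_JOBS context):

```shell
# Run autogen with 8 parallel jobs:
./autogen.pl -j 8

# Equivalently, via the environment variable:
AUTOMAKE_JOBS=8 ./autogen.pl
```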
- PMI support has been removed for Open MPI apps.
- Legacy btl/sm has been removed, and replaced with btl/vader, which
  was renamed to "btl/sm".
- Update btl/sm to not use CMA in user namespaces.
- C++ bindings have been removed.
- The "--am" and "--amca" options have been deprecated.
- opal/mca/threads framework added. Currently supports
  argobots, qthreads, and pthreads. See the --with-threads=x option
  in configure.
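A minimal configure sketch for selecting a threading backend (the `--with-threads` option name is taken from the entry above; the Argobots install prefix and its flag are assumptions to illustrate the idea):

```shell
# Default pthreads backend:
./configure --with-threads=pthreads

# Assuming Argobots is installed under $ABT_PREFIX:
./configure --with-threads=argobots --with-argobots=$ABT_PREFIX
```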
- Various fixes - thanks to:
  Yixin Zhang <>,
  Samuel Cho <>,
  rlangefe <>,
  Alex Ross <>,
  Sophia Fang <>,
  mitchelltopaloglu <>,
  Evstrife <>, and
  Hao Tong <> for their
  contributions.
- osc/pt2pt: Removed. Users can use osc/rdma + btl/tcp
  for OSC support using TCP, or other providers.
- Open MPI now links -levent_core instead of -levent.
- MPI-4: Added ERRORS_ABORT infrastructure.
- common/cuda docs: Various fixes. Thanks to
  Simon Byrne <> for finding and fixing.
- osc/ucx: Add support for acc_single_intrinsic.
- Fixed the "-r" option used to specify RPM options.
  Thanks to John K. McIver III <> for
  reporting and fixing.
- configure: Added support for setting the wrapper C compiler.
  Adds the new option "--with-wrapper-cc=".
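A sketch of using the new option to record a different compiler in the mpicc wrapper than the one used to build Open MPI itself (compiler names are illustrative):

```shell
# Build Open MPI with gcc, but have mpicc invoke clang:
./configure CC=gcc --with-wrapper-cc=clang

# Verify afterwards on an installed Open MPI:
mpicc --showme:command
```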
- mpi_f08: Fixed Fortran-8-byte-INTEGER vs. C-4-byte-int issue.
  Thanks to @ahaichen for reporting the bug.
- MPI-4: Added support for 'initial error handler'.
- opal/thread/tsd: Added thread-specific-data (tsd) api.
- MPI-4: Added error handling for 'unbound' errors to MPI_COMM_SELF.
- Add missing MPI_Status conversion subroutines:
  MPI_Status_c2f08(), MPI_Status_f082c(), MPI_Status_f082f(),
  MPI_Status_f2f08() and the PMPI_* related subroutines.
- patcher: Removed the Linux component.
- opal/util: Fixed typo in error string. Thanks to
  NARIBAYASHI Akira <> for finding
  and fixing the bug.
- fortran/use-mpi-f08: Generate PMPI bindings from the MPI bindings.
- Converted man pages to markdown.
  Thanks to Fangcong Yin <> for their contribution
  to this effort.
- Fixed ompi_proc_world error string and some comments in pml/ob1.
  Thanks to Julien EMMANUEL <> for
  finding and fixing these issues.
- oshmem/tools/oshmem_info: Fixed Fortran keyword issue when
  compiling param.c. Thanks to Pak Lui <> for
  finding and fixing the bug.
- Patched libtool.m4 for OSX Big Sur. Thanks to
  @fxcoudert for reporting the issue.
- Upgraded to HWLOC v2.4.0.
- Removed config/opal_check_pmi.m4.
  Thanks to Zach Osman <> for the contribution.
- opal/atomics: Added load-linked, store-conditional atomics.
- Fixed environment variable names to OMPI_MCA_orte_precondition_transports.
  Thanks to Marisa Roman <>
  for the contribution.
- fcoll/two_phase: Removed the component. All scenarios it was
  used for have been replaced.
- btl/uct: Bumped UCX allowed version to v1.9.x.
- ULFM Fault Tolerance support has been added.
- Fixed a crash during CUDA initialization.
  Thanks to Yaz Saito <> for finding
  and fixing the bug.
- Added CUDA support to the OFI MTL.
- ompio: Added atomicity support.
- Singleton comm spawn support has been fixed.
- Autoconf v2.7 support has been updated.
- fortran: Added check for ISO_FORTRAN_ENV:REAL16. Thanks to
  Jeff Hammond <> for reporting this issue.
- Changed the MCA component build style default to static.
- PowerPC atomics: Force usage of opal/ppc assembly.
- Removed C++ compiler requirement to build Open MPI.
- Fixed .la files leaking into wrapper compilers.
- Fixed bug where the cache line size was not set soon enough
  during initialization.
- coll/ucc and scoll/ucc components were added.
- coll/ucc: Added support for allgather and reduce collective
  operations.
- Fixed bug where not all excluded components would be
  ignored.
- Various datatype bugfixes and performance improvements
- Various pack/unpack bugfixes and performance improvements
- Fix mmap infinite recurse in memory patcher
- Fix C to Fortran error code conversions.
- osc/ucx: Fix data corruption with non-contiguous accumulates
- Update coll/tuned selection rules
- Fix non-blocking collective ops
- btl/portals4: Fix flow control
- Various oshmem:ucx bugfixes and performance improvements
- common/ofi: Disable new monitor API until libfabric 1.14.0
- Fix AVX detection with icc
- mpirun option "--mca ompi_display_comm mpi_init/mpi_finalize" has been added.
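A usage sketch of the new display option (`./my_app` is a placeholder application name):

```shell
# Print communicator information during MPI_Init:
mpirun --mca ompi_display_comm mpi_init -np 2 ./my_app

# ...or during MPI_Finalize:
mpirun --mca ompi_display_comm mpi_finalize -np 2 ./my_app
```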
We hope to release v5.0.0 at the beginning of November, so any amount of testing is appreciated.
