The Open MPI community is pleased to announce the Open MPI v4.1.3 release.  
This release contains a number of bug fixes.

Open MPI v4.1.3 can be downloaded from the Open MPI website:

    https://www.open-mpi.org/software/ompi/v4.1/

Changes to v4.1.3 compared to v4.1.2:

- Fixed a seg fault in the smcuda BTL.  Thanks to Moritz Kreutzer and
  @Stadik for reporting the issue.
- Added support for ELEMENTAL to the MPI handle comparison functions
  in the mpi_f08 module.  Thanks to Salvatore Filippone for raising
  the issue.
- Minor datatype performance improvements in the CUDA-based code paths.
- Fix MPI_ALLTOALLV when used with MPI_IN_PLACE.
- Fix MPI_BOTTOM handling for non-blocking collectives.  Thanks to
  Lisandro Dalcin for reporting the problem.
- Enable OPAL memory hooks by default for UCX.
- Many compiler warning fixes, particularly for newer compiler
  versions.
- Fix intercommunicator overflow with large payload collectives.  Also
  fixed MPI_REDUCE_SCATTER_BLOCK for similar issues with large payload
  collectives.
- Back-port ROMIO 3.3 fix to use stat64() instead of stat() on GPFS.
- Fixed several non-blocking MPI collectives to not round fractions
  based on float precision.
- Fix compile failure for --enable-heterogeneous.  Also updated the
  README to clarify that --enable-heterogeneous is functional, but
  still not recommended for most environments.
- Minor fixes to OMPIO, including:
  - Fixing the open behavior of shared memory shared file pointers.
    Thanks to Axel Huebl for reporting the issue.
  - Fixes to clean up lockfiles when closing files.  Thanks to Eric
    Chamberland for reporting the issue.
- Update LSF configure failure output to be more clear (e.g., on RHEL).
- Update if_[in|ex]clude behavior in btl_tcp and oob_tcp to select
  *all* interfaces that fall within the specified subnet range.
