Open MPI v4.0.3rc4 has been posted to https://www.open-mpi.org/software/ompi/v4.0/.

Please test this on your systems, as it's likely to become v4.0.3.

4.0.3 -- March, 2020
--------------------
- Update embedded PMIx to 3.1.5.
- Add support for Mellanox ConnectX-6.
- Fix an issue in Open MPI I/O (OMPIO) when using shared file pointers
(example below). Thanks to Romain Hild for reporting.
- Fix a problem with Open MPI using a previously installed
Fortran mpi module during compilation. Thanks to Marcin
Mielniczuk for reporting.
- Fix a problem with Fortran compiler wrappers ignoring use of the
--disable-wrapper-runpath configure option (example below). Thanks
to David Shrader for reporting.
- Fix an issue with trying to use mpirun on systems where neither
ssh nor rsh is installed.
- Address some problems found when using XPMEM for intra-node message
transport.
- Improve dimensions returned by MPI_Dims_create for certain
cases (example below). Thanks to @aw32 for reporting.
- Fix an issue when sending messages larger than 4GB (example below).
Thanks to Philip Salzmann for reporting this issue.
- Add ability to specify alternative module file path using
Open MPI's RPM spec file. Thanks to @jschwartz-cray for reporting.
- Clarify use of the --with-hwloc configuration option in the README
(example below). Thanks to Marcin Mielniczuk for raising this
documentation issue.
- Fix an issue with shmem_atomic_set (example below). Thanks to Sameh
Sharkawi for reporting.
- Fix a problem with MPI_Neighbor_alltoall(v,w) for Cartesian communicators
with cyclic boundary conditions (example below). Thanks to Ralph
Rabenseifner and Tony Skjellum for reporting.
- Fix an issue using OMPIO on 32-bit systems. Thanks to
Orion Poplawski for reporting.
- Fix an issue with a NetCDF test deadlocking when using the vulcan
OMPIO component. Thanks to Orion Poplawski for reporting.
- Fix an issue with the mpi_yield_when_idle parameter being ignored
when set in the Open MPI MCA parameter configuration file (example
below). Thanks to @iassiour for reporting.
- Address an issue with OMPIO when writing/reading more than 2GB
in an operation. Thanks to Richard Warren for reporting.
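
For the shared file pointer fix above, a minimal sketch of the affected
usage pattern; the file name and payload are arbitrary:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* All ranks share one file pointer; each write advances it. */
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_write_shared(fh, &rank, 1, MPI_INT, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }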
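
The wrapper-runpath fix above applies when Open MPI is configured along
these lines (the install prefix here is arbitrary); the Fortran wrapper
compilers now honor the option just as the C wrappers do:

    shell$ ./configure --disable-wrapper-runpath --prefix=/opt/openmpi-4.0.3
    shell$ make all install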
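
A short sketch of MPI_Dims_create, whose returned factorizations were
improved above; the process count of 12 is arbitrary:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int dims[2] = {0, 0};   /* 0 = let MPI choose this dimension */

        MPI_Init(&argc, &argv);
        /* Ask for a balanced 2-D factorization of 12 processes. */
        MPI_Dims_create(12, 2, dims);
        printf("12 ranks -> %d x %d grid\n", dims[0], dims[1]);
        MPI_Finalize();
        return 0;
    }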
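
Since MPI counts are C ints, payloads over 4GB are typically expressed
as a small count of a larger derived datatype; a sketch of the pattern
covered by the 4GB fix above (chunk size and rank choices are arbitrary):

    #include <mpi.h>
    #include <stdlib.h>

    /* Needs at least 2 ranks.  5120 x 1 MiB = 5 GiB, past the 4GB mark,
       while the MPI_Send count stays a small int. */
    int main(int argc, char **argv)
    {
        const int mib     = 1024 * 1024;
        const int nchunks = 5120;
        MPI_Datatype chunk;
        int rank;
        char *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        buf = malloc((size_t)mib * nchunks);

        MPI_Type_contiguous(mib, MPI_BYTE, &chunk);   /* 1 MiB element */
        MPI_Type_commit(&chunk);

        if (rank == 0)
            MPI_Send(buf, nchunks, chunk, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, nchunks, chunk, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Type_free(&chunk);
        free(buf);
        MPI_Finalize();
        return 0;
    }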
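
For the --with-hwloc clarification above, a couple of typical
invocations; the external install path is hypothetical:

    shell$ ./configure --with-hwloc=internal      # build the bundled hwloc
    shell$ ./configure --with-hwloc=/opt/hwloc    # use an external install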
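
A minimal OpenSHMEM sketch of shmem_atomic_set, the routine fixed above;
the typed variant for long is used and the target PE is arbitrary:

    #include <shmem.h>
    #include <stdio.h>

    long flag = 0;   /* global, hence symmetric: exists on every PE */

    int main(void)
    {
        shmem_init();
        int me = shmem_my_pe();

        /* PE 0 atomically sets the flag on PE 1. */
        if (me == 0 && shmem_n_pes() > 1)
            shmem_long_atomic_set(&flag, 42, 1);

        shmem_barrier_all();
        if (me == 1)
            printf("PE 1 flag = %ld\n", flag);

        shmem_finalize();
        return 0;
    }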
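
The MPI_Neighbor_alltoall(v,w) fix above concerns Cartesian
communicators created with cyclic (periodic) boundaries, as in this
1-D ring sketch:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm ring;
        int nprocs, rank;
        int dims[1], periods[1] = {1};   /* 1 = cyclic boundary */
        int sendbuf[2], recvbuf[2];      /* one slot per neighbor */

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        dims[0] = nprocs;

        /* 1-D periodic grid: every rank has a left and right neighbor. */
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &ring);
        MPI_Comm_rank(ring, &rank);
        sendbuf[0] = sendbuf[1] = rank;

        /* recvbuf ends up holding the left and right neighbors' ranks. */
        MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT,
                              recvbuf, 1, MPI_INT, ring);

        MPI_Comm_free(&ring);
        MPI_Finalize();
        return 0;
    }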
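
The mpi_yield_when_idle fix above applies when the parameter is set in
an MCA parameter file rather than on the mpirun command line, e.g. in
$HOME/.openmpi/mca-params.conf:

    # Yield the processor while waiting for messages instead of spinning.
    mpi_yield_when_idle = 1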