Open MPI packagers: FYI.
Begin forwarded message:

From: "Jeff Squyres (jsquyres) via announce" <annou...@lists.open-mpi.org>
Subject: [Open MPI Announce] Open MPI v4.1.0 Released
Date: December 18, 2020 at 6:03:50 PM EST
To: Open MPI Announcements <annou...@lists.open-mpi.org>
Cc: "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
Reply-To: <us...@lists.open-mpi.org>

The Open MPI community is pleased to announce the start of the Open MPI 4.1 release series with the release of Open MPI 4.1.0. The 4.1 release series builds on the 4.0 release series and includes enhancements to the OFI and UCX communication channels, as well as collectives performance improvements.

The Open MPI 4.1 release series can be downloaded from the Open MPI website:

    https://www.open-mpi.org/software/ompi/v4.1/

Changes in 4.1.0 compared to 4.0.x:

- collectives: Add HAN and ADAPT adaptive collectives components. Both components are off by default and can be enabled by specifying "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...". We intend to enable both by default in Open MPI 5.0.
- OMPIO is now the default for MPI-IO on all filesystems, including Lustre (prior to this, ROMIO was the default for Lustre). Many thanks to Mark Dixon for identifying MPI I/O issues and providing access to Lustre systems for testing.
- Updates for macOS Big Sur. Thanks to FX Coudert for reporting this issue and pointing to a solution.
- Minor MPI one-sided RDMA performance improvements.
- Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
- Add AVX support for MPI collectives.
- Updates to mpirun(1) about "slots" and PE=x values.
- Fix buffer allocation for large environment variables. Thanks to @zrss for reporting the issue.
- Upgrade the embedded OpenPMIx to v3.2.2.
- Take more steps towards creating fully reproducible builds (see https://reproducible-builds.org/). Thanks to Bernhard M. Wiedemann for bringing this to our attention.
- Fix issue with extra-long values in MCA files. Thanks to GitHub user @zrss for bringing the issue to our attention.
- UCX: Fix zero-sized datatype transfers.
- Fix --cpu-list for non-uniform modes.
- Fix issue in a PMIx callback caused by a missing memory barrier on Arm platforms.
- OFI MTL: Various bug fixes.
- Fix issue where MPI_TYPE_CREATE_RESIZED would create a datatype with an unexpected extent on oddly-aligned datatypes.
- collectives: Adjust default tuning thresholds for many collective algorithms.
- runtime: Fix situation where the rank-by argument does not work.
- Portals4: Clean up error-handling corner cases.
- runtime: Remove the --enable-install-libpmix option, which has not worked since it was added.
- opal: Disable the memory patcher component on macOS.
- UCX: Allow UCX 1.8 to be used with the uct BTL.
- UCX: Replace usage of UCX's deprecated NB API with NBX.
- OMPIO: Add support for the IME file system.
- OFI/libfabric: Add support for multiple NICs.
- OFI/libfabric: Add support for Scalable Endpoints.
- OFI/libfabric: Add a BTL for one-sided support.
- OFI/libfabric: Multiple small bug fixes.
- libnbc: Add numerous performance-improving algorithms.

--
Jeff Squyres
jsquy...@cisco.com

_______________________________________________
announce mailing list
annou...@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/announce

--
Jeff Squyres
jsquy...@cisco.com
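For packagers who want to try the new adaptive collectives described above, a minimal sketch of enabling them at run time (the MCA parameter names and values are taken from the announcement; the application name and process count are placeholders):

```shell
# Enable the new ADAPT and HAN adaptive collectives components,
# which are off by default in Open MPI 4.1.0.
# "./my_app" and "-np 4" are placeholders for a real MPI job.
mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 -np 4 ./my_app

# Inspect the available parameters of the HAN component in an
# installed build (ompi_info ships with Open MPI):
ompi_info --param coll han
```

These are ordinary MCA parameters, so they can also be set in an MCA parameter file or via environment variables rather than on the mpirun command line.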
_______________________________________________
ompi-packagers mailing list
email@example.com
https://rfd.newmexicoconsortium.org/mailman/listinfo/ompi-packagers