[Open MPI Announce] Open MPI v3.1.0 Released

2018-05-07 Thread Barrett, Brian via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 
3.1.0. 

v3.1.0 is the start of a new release series for Open MPI.  New features include
a monitoring framework to track data movement in MPI operations, support for
MPI communicator assertions, and direct support for one-sided operations over
UCX.  The embedded PMIx runtime has been updated to 2.1.1.  There have been
numerous other bug fixes and performance improvements.  Version 3.1.0 can be
downloaded from the main Open MPI web site:

  https://www.open-mpi.org/software/ompi/v3.1/

NEWS:

- Various OpenSHMEM bug fixes.
- Properly handle array_of_commands argument to Fortran version of
  MPI_COMM_SPAWN_MULTIPLE.
- Fix bug with MODE_SEQUENTIAL and the sharedfp MPI-IO component.
- Use "javac -h" instead of "javah" when building the Java bindings
  with a recent version of Java.
- Fix mis-handling of the job step ID under SLURM that could cause
  problems with PathScale/OmniPath NICs.
- Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
  still not a supported platform, but it is no longer automatically
  disabled.  See
  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
  for more information.
- The output-filename option for mpirun is now converted to an
  absolute path before being passed to other nodes.
- Add monitoring component for PML, OSC, and COLL to track data
  movement of MPI applications.  See
  ompi/mca/common/monitoring/HowTo_pml_monitoring.tex for more
  information about the monitoring framework.
- Add support for communicator assertions: mpi_assert_no_any_tag,
  mpi_assert_no_any_source, mpi_assert_exact_length, and
  mpi_assert_allow_overtaking.
- Update PMIx to version 2.1.1.
- Update hwloc to 1.11.7.
- Many one-sided behavior fixes.
- Improved performance for Reduce and Allreduce using Rabenseifner's algorithm.
- Revamped mpirun --help output to make it a bit more manageable.
- Portals4 MTL improvements: Fix race condition in rendezvous protocol and
  retry logic.
- UCX OSC: initial implementation.
- UCX PML improvements: add multi-threading support.
- Yalla PML improvements: Fix error with irregular contiguous datatypes.
- Openib BTL: disable XRC support by default.
- TCP BTL: Add checks to detect and ignore connections from processes
  that aren't MPI (such as IDS probes) and to verify that source and
  destination are using the same version of Open MPI; fix an issue
  with very large message transfers.
- ompi_info parsable output now escapes double quotes in values, and
  also quotes values that contain colons.  Thanks to Lev Givon for the
  suggestion.
- CUDA-aware support can now handle GPUs within a node that do not
  support CUDA IPC.  Earlier versions would error and abort.
- Add an MCA parameter, ras_base_launch_orted_on_hn, to allow launching
  MPI processes on the same node where mpirun is executing using a
  separate orte daemon, rather than within the mpirun process.  Setting
  it to true may be useful with SLURM, as it improves interoperability
  with SLURM's signal propagation tools.  It defaults to false, except
  on Cray XC systems.
- Remove LoadLeveler RAS support.
- Remove IB XRC support from the OpenIB BTL due to lack of maintenance.
- Add functionality for IBM s390 platforms.  Note that regular
  regression testing does not occur on the s390 and it is not
  considered a supported platform.
- Remove support for big endian PowerPC.
- Remove support for XL compilers older than v13.1.
- Remove support for atomic operations using MacOS atomics library.

___
announce mailing list
announce@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/announce


[Open MPI Announce] Open MPI 3.0.2 Released

2018-06-05 Thread Barrett, Brian via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 3.0.2.

Version 3.0.2 can be downloaded from the Open MPI web site: 
https://www.open-mpi.org/software/ompi/v3.0/

This is a bug fix release for the Open MPI 3.0.x release stream. Items fixed in 
this release include the following:

- Disable osc/pt2pt when using MPI_THREAD_MULTIPLE due to numerous
  race conditions in the component.
- Fix dummy variable names for the mpi and mpi_f08 Fortran bindings to
  match the MPI standard.  This may break applications which use
  name-based parameters in Fortran which used our internal names
  rather than those documented in the MPI standard.
- Fixed MPI_SIZEOF in the "mpi" Fortran module for the NAG compiler.
- Fix RMA function signatures for use-mpi-f08 bindings to have the
  asynchronous property on all buffers.
- Fix Fortran MPI_COMM_SPAWN_MULTIPLE to properly follow the count
  length argument when parsing the array_of_commands variable.
- Revamp Java detection to properly handle new Java versions which do
  not provide a javah wrapper.
- Improved configure logic for finding the UCX library.
- Add support for HDR InfiniBand link speeds.
- Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
  still not a supported platform, but it is no longer automatically
  disabled.  See
  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
  for more information.

Thanks,

Your Open MPI release team 


[Open MPI Announce] Open MPI v3.1.2 released

2018-08-24 Thread Barrett, Brian via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 3.1.2.

Open MPI v3.1.2 is a critical bug fix release, fixing a subtle race condition 
in the vader shared memory communication channel.  Open MPI v3.1.2 can be 
downloaded from the Open MPI web site:

  https://www.open-mpi.org/software/ompi/v3.1/

Items fixed in Open MPI 3.1.2 include the following:

- A subtle race condition bug was discovered in the "vader" BTL
  (shared memory communications) that, in rare instances, can cause
  MPI processes to crash or incorrectly classify (or effectively drop)
  an MPI message sent via shared memory.  If you are using the "ob1"
  PML with "vader" for shared memory communication (note that vader is
  the default for shared memory communication with ob1), you need to
  upgrade to v3.1.2 or later to fix this issue.  You may also upgrade
  to the following versions to fix this issue:
  - Open MPI v2.1.5 (expected end of August, 2018) or later in the
    v2.1.x series
  - Open MPI v3.0.3 (expected end of September, 2018) or later in the
    v3.0.x series
- Assorted Portals 4.0 bug fixes.
- Fix for possible data corruption in MPI_BSEND.
- Move shared memory file for vader btl into /dev/shm on Linux.
- Fix for MPI_ISCATTER/MPI_ISCATTERV Fortran interfaces with MPI_IN_PLACE.
- Upgrade PMIx to v2.1.3.
- Numerous one-sided bug fixes.
- Fix for race condition in uGNI BTL.
- Improve handling of large number of interfaces with TCP BTL.
- Numerous UCX bug fixes.

Thanks,

Your Open MPI release team


[Open MPI Announce] Open MPI 3.0.1 Released

2018-03-29 Thread Barrett, Brian via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 3.0.1.

Version 3.0.1 can be downloaded from the Open MPI web site: 
https://www.open-mpi.org/software/ompi/v3.0/

This is a bug fix release for the Open MPI 3.0.x release stream. Items fixed in 
this release include the following:

- Fix ability to attach parallel debuggers to MPI processes.
- Fix a number of issues in MPI I/O found by the HDF5 test suite.
- Fix (extremely) large message transfers with shared memory.
- Fix out of sequence bug in multi-NIC configurations.
- Fix stdin redirection bug that could result in lost input.
- Disable the LSF launcher if CSM is detected.
- Plug a memory leak in MPI_Mem_free().  Thanks to Philip Blakely for reporting.
- Fix the tree spawn operation when the number of nodes is larger than
  the radix.  Thanks to Carlos Eduardo de Andrade for reporting.
- Fix Fortran 2008 macro in MPI extensions.  Thanks to Nathan T. Weeks for
  reporting.
- Add UCX to list of interfaces that OpenSHMEM will use by default.
- Add --{enable|disable}-show-load-errors-by-default to control
  default behavior of the load errors option.
- OFI MTL improvements: handle empty completion queues properly, fix
  an incorrect error message around fi_getinfo(), use the provider's
  default progress option, and add support for reading multiple CQ
  events in ofi_progress.
- PSM2 MTL improvements: Allow use of GPU buffers, thread fixes.
- Numerous corrections to memchecker behavior.
- Add an MCA parameter, ras_base_launch_orted_on_hn, to allow launching
  MPI processes on the same node where mpirun is executing using a
  separate orte daemon, rather than within the mpirun process.  Setting
  it to true may be useful with SLURM, as it improves interoperability
  with SLURM's signal propagation tools.  It defaults to false, except
  on Cray XC systems.
- Fix a problem reported on the mailing list separately by Kevin
  McGrattan and Stephen Guzik about consistency issues on NFS file
  systems when using OMPIO.  This fix also introduces a new MCA
  parameter, fs_ufs_lock_algorithm, which allows users to control the
  locking algorithm used by OMPIO for read/write operations.  By
  default, OMPIO does not perform locking on local UNIX file systems,
  locks the entire file per operation on NFS file systems, and performs
  selective byte-range locking on other distributed file systems.
- Add an MCA parameter, pmix_server_usock_connections, to allow mpirun to
  support applications statically built against the Open MPI v2.x release,
  or installed in a container along with the Open MPI v2.x libraries. It is
  set to false by default.

Thanks,

Your Open MPI release team 



[Open MPI Announce] Open MPI 3.0.3 Released

2018-10-29 Thread Barrett, Brian via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 3.0.3.

Version 3.0.3 can be downloaded from the Open MPI web site: 
https://www.open-mpi.org/software/ompi/v3.0

This is a bug fix release for the Open MPI 3.0.x release stream. Items fixed in 
this release include the following:

- Fix race condition in MPI_THREAD_MULTIPLE support of non-blocking
  send/receive path.
- Fix error handling in SIGCHLD forwarding.
- Add support for CHARACTER and LOGICAL Fortran datatypes for MPI_SIZEOF.
- Fix compile error when using OpenJDK 11 to compile the Java bindings.
- Fix crash when using a hostfile with a 'user@host' line.
- Numerous Fortran '08 interface fixes.
- TCP BTL error message fixes.
- OFI MTL now will use any provider other than shm, sockets, tcp, udp, or
  rstream, rather than only supporting gni, psm, and psm2.
- Disable async receive of CUDA buffers by default, fixing a hang
  on large transfers.
- Support the BCM57XXX and BCM58XXX Broadcom adapters.
- Fix minmax datatype support in ROMIO.
- Bug fixes in vader shared memory transport.
- Support very large buffers with MPI_TYPE_VECTOR.
- Fix hang when launching with mpirun on Cray systems.
- Bug fixes in OFI MTL.
- Assorted Portals 4.0 bug fixes.
- Fix for possible data corruption in MPI_BSEND.
- Move shared memory file for vader btl into /dev/shm on Linux.
- Fix for MPI_ISCATTER/MPI_ISCATTERV Fortran interfaces with MPI_IN_PLACE.
- Upgrade PMIx to v2.1.4.
- Fix for Power9 built-in atomics.
- Numerous one-sided bug fixes.
- Fix for race condition in uGNI BTL.
- Improve handling of large number of interfaces with TCP BTL.
- Numerous UCX bug fixes.
- Add support for QLogic and Broadcom Cumulus RoCE HCAs to Open IB BTL.
- Add patcher support for aarch64.
- Fix hang on Power and ARM when Open MPI was built with low compiler
  optimization settings.

Thanks,

Your Open MPI release team


[Open MPI Announce] Open MPI 4.1.2 released

2021-11-24 Thread Barrett, Brian via announce
The Open MPI community is pleased to announce the Open MPI v4.1.2 release.  
This release contains a number of bug fixes.

Open MPI v4.1.2 can be downloaded from the Open MPI website:

  https://www.open-mpi.org/software/ompi/v4.1/

Changes to v4.1.2 compared to v4.1.1:

- ROMIO portability fix for OpenBSD.
- Fix handling of MPI_IN_PLACE with MPI_ALLTOALLW and improve performance
  of MPI_ALLTOALL and MPI_ALLTOALLV for MPI_IN_PLACE.
- Fix one-sided issue with empty groups in Post-Start-Wait-Complete
  synchronization mode.
- Fix Fortran status returns in certain use cases involving
  Generalized Requests.
- ROMIO datatype bug fixes.
- Fix oshmem_shmem_finalize() when main() returns non-zero value.
- Fix wrong affinity under LSF with the membind option.
- Fix count==0 cases in MPI_REDUCE and MPI_IREDUCE.
- Fix ssh launching on Bourne-flavored shells when the user has "set
  -u" set in their shell startup files.
- Correctly process 0 slots with the mpirun --host option.
- Ensure to unlink and rebind socket when the Open MPI session
  directory already exists.
- Fix a segv in mpirun --disable-dissable-map.
- Fix a potential hang in the memory hook handling.
- Slight performance improvement in MPI_WAITALL when running in
  MPI_THREAD_MULTIPLE.
- Fix hcoll datatype mapping and rooted operation behavior.
- Correct some operations modifying MPI_Status.MPI_ERROR when it is
  disallowed by the MPI standard.
- UCX updates:
  - Fix datatype reference count issues.
  - Detach dynamic window memory when freeing a window.
  - Fix memory leak in datatype handling.
- Fix various atomic operations issues.
- mpirun: try to set the curses winsize to the pty of the spawned
  task.  Thanks to Stack Overflow user @Seriously for reporting the
  issue.
- PMIx updates:
  - Fix compatibility with external PMIx v4.x installations.
  - Fix handling of PMIx v3.x compiler/linker flags.  Thanks to Erik
    Schnetter for reporting the issue.
  - Skip SLURM-provided PMIx detection when appropriate.  Thanks to
    Alexander Grund for reporting the issue.
- Fix handling by C++ compilers when they #include the STL "version"
  header file, which ends up including Open MPI's text VERSION file
  (which is not C code).  Thanks to @srpgilles for reporting the
  issue.
- Fix MPI_Op support for MPI_LONG.
- Make the MPI C++ bindings library (libmpi_cxx) explicitly depend on
  the OPAL internal library (libopen-pal).  Thanks to Ye Luo for
  reporting the issue.
- Fix configure handling of "--with-libevent=/usr".
- Fix memory leak when opening Lustre files.  Thanks to Bert Wesarg
  for submitting the fix.
- Fix MPI_SENDRECV_REPLACE to correctly process datatype errors.
  Thanks to Lisandro Dalcin for reporting the issue.
- Fix MPI_SENDRECV_REPLACE to correctly handle large data.  Thanks to
  Jakub Benda for reporting this issue and suggesting a fix.
- Add workaround for TCP "dropped connection" errors to drastically
  reduce the possibility of this happening.
- OMPIO updates:
  - Fix handling when AMODE is not set.  Thanks to Rainer Keller for
    reporting the issue and supplying the fix.
  - Fix FBTL "posix" component linking issue.  Thanks to Honggang Li
    for reporting the issue.
  - Fixed segv with MPI_FILE_GET_BYTE_OFFSET on 0-sized file view.
    Thanks to GitHub user @shanedsnyder for submitting the issue.
- OFI updates:
  - Multi-plane / multi-NIC selection cleanups.
  - Add support for exporting Open MPI memory monitors into
    Libfabric.
  - Ensure that Cisco usNIC devices are never selected by the OFI
    MTL.
  - Fix buffer overflow in OFI networking setup.  Thanks to Alexander
    Grund for reporting the issue and supplying the fix.
- Fix SSEND on tag matching networks.
- Fix error handling in several MPI collectives.
- Fix the ordering of MPI_COMM_SPLIT_TYPE.  Thanks to Wolfgang
  Bangerth for raising the issue.
- No longer install the orted-mpir library (it's an internal / Libtool
  convenience library).  Thanks to Andrew Hesford for the fix.
- PSM2 updates:
  - Allow advanced users to disable PSM2 version checking.
  - Fix to allow non-default installation locations of psm2.h.



[Open MPI Announce] Open MPI 4.1.3 released

2022-03-31 Thread Barrett, Brian via announce
The Open MPI community is pleased to announce the Open MPI v4.1.3 release.  
This release contains a number of bug fixes.

Open MPI v4.1.3 can be downloaded from the Open MPI website:

  https://www.open-mpi.org/software/ompi/v4.1/

Changes to v4.1.3 compared to v4.1.2:

- Fixed a seg fault in the smcuda BTL.  Thanks to Moritz Kreutzer and
  @Stadik for reporting the issue.
- Added support for ELEMENTAL to the MPI handle comparison functions
  in the mpi_f08 module.  Thanks to Salvatore Filippone for raising
  the issue.
- Minor datatype performance improvements in the CUDA-based code paths.
- Fix MPI_ALLTOALLV when used with MPI_IN_PLACE.
- Fix MPI_BOTTOM handling for non-blocking collectives.  Thanks to
  Lisandro Dalcin for reporting the problem.
- Enable OPAL memory hooks by default for UCX.
- Many compiler warnings fixes, particularly for newer versions of
  GCC.
- Fix intercommunicator overflow with large payload collectives.  Also
  fixed MPI_REDUCE_SCATTER_BLOCK for similar issues with large payload
  collectives.
- Back-port ROMIO 3.3 fix to use stat64() instead of stat() on GPFS.
- Fixed several non-blocking MPI collectives to not round fractions
  based on float precision.
- Fix compile failure for --enable-heterogeneous.  Also updated the
  README to clarify that --enable-heterogeneous is functional, but
  still not recommended for most environments.
- Minor fixes to OMPIO, including:
  - Fixing the open behavior of shared memory shared file pointers.
    Thanks to Axel Huebl for reporting the issue.
  - Fixes to clean up lockfiles when closing files.  Thanks to Eric
    Chamberland for reporting the issue.
- Update LSF configure failure output to be more clear (e.g., on RHEL
  platforms).
- Update if_[in|ex]clude behavior in btl_tcp and oob_tcp to select
  *all* interfaces that fall within the specified subnet range.



[Open MPI Announce] Open MPI 4.1.6 released

2023-09-30 Thread Barrett, Brian via announce
The Open MPI community is pleased to announce the Open MPI v4.1.6 release.

This release is a bug fix release.

Open MPI v4.1.6 can be downloaded from the Open MPI website:

  https://www.open-mpi.org/software/ompi/v4.1/

Changes to v4.1.6 compared to v4.1.5:

- Fix configure issue with XCode 15.
- Update embedded PMIx to 3.2.5.  PMIx 3.2.5 addresses CVE-2023-41915.
  Note that prior versions of Open MPI (and their associated PMIx
  implementations) are not impacted by this CVE, because Open MPI
  never uses escalated privileges on behalf of an unprivileged user.
  We are backporting this change both because it is low risk and to
  avoid alarms from CVE scanners.
- Fix issue with buffered sends and MTL-based interfaces (Libfabric,
  PSM, Portals).
- Add missing MPI_F_STATUS_SIZE to mpi.h.  Thanks to @jprotze for
  reporting the issue.
- Update Fortran mpi module configure check to be more correct.
  Thanks to Sergey Kosukhin for identifying the issue and supplying
  the fix.
- Update to properly handle PMIx v>=4.2.3.  Thanks to Bruno Chareyre,
  GitHub user @sukanka, and Christof Koehler for raising the
  compatibility issues and helping test the fixes.
- Fix minor issues and add some minor performance optimizations with
  OFI support.
- Support the "striping_factor" and "striping_unit" MPI_Info names
  recommended by the MPI standard for parallel IO.
- Fixed some minor issues with UCX support.
- Minor optimization for 0-byte MPI_Alltoallw (i.e., make it a no-op).


[Open MPI Announce] Open MPI 4.1.4 released

2022-05-26 Thread Barrett, Brian via announce
The Open MPI community is pleased to announce the Open MPI v4.1.4 release.   
This release contains a number of bug fixes, as well as the UCC collectives 
component to accelerate collectives on systems with the UCC library installed.

Open MPI v4.1.4 can be downloaded from the Open MPI website:

  https://www.open-mpi.org/software/ompi/v4.1/

Changes to v4.1.4 compared to v4.1.3:

- Fix possible length integer overflow in numerous non-blocking
  collective operations.
- Fix segmentation fault in UCX if MPI Tool interface is finalized
  before MPI_Init is called.
- Remove /usr/bin/python dependency in configure.
- Fix OMPIO issue with long double etypes.
- Update treematch topology component to fix numerous correctness issues.
- Fix memory leak in UCX MCA parameter registration.
- Fix long operation closing file descriptors on non-Linux systems
  that can appear as a hang to users.
- Fix for attribute handling on GCC 11 due to pointer aliasing.
- Fix multithreaded race in UCX PML's datatype handling.
- Fix a correctness issue in CUDA Reduce algorithm.
- Fix compilation issue with CUDA GPUDirect RDMA support.
- Fix to make shmem_calloc(..., 0) conform to the OpenSHMEM specification.
- Add UCC collectives component.
- Fix divide by zero issue in OMPI IO component.
- Fix compile issue with libnl when not in standard search locations.



[Open MPI Announce] Open MPI 4.1.5 released

2023-02-23 Thread Barrett, Brian via announce
The Open MPI community is pleased to announce the Open MPI v4.1.5 release.   
This release is a bug fix release.

Open MPI v4.1.5 can be downloaded from the Open MPI website:

  https://www.open-mpi.org/software/ompi/v4.1/

Changes to v4.1.5 compared to v4.1.4:

- Fix crash in one-sided applications for certain process layouts.
- Update embedded OpenPMIx to version 3.2.4.
- Fix issue building with ifort on MacOS.
- Backport patches to Libevent for CVE-2016-10195, CVE-2016-10196, and
  CVE-2016-10197.  Note that Open MPI's internal libevent does not
  use the impacted portions of the Libevent code base.
- SHMEM improvements:
  - Fix initializer bugs in SHMEM interface.
  - Fix unsigned type comparisons generating warnings.
  - Fix use after clear issue in shmem_ds_reset.
- UCX improvements:
  - Fix memory registration bug that could occur when UCX was built
but not selected.
  - Reduce overhead of add_procs with intercommunicators.
  - Enable multi_send_nb by default.
  - Call opal_progress while waiting for a UCX fence to complete.
- Fix data corruption bug in osc/rdma component.
- Fix overflow bug in alltoall collective.
- Fix crash when displaying topology.
- Add some MPI_F_XXX constants that were missing from mpi.h.
- coll/ucc bug fixes.
