The Open MPI Team, representing a consortium of research, academic,
and industry partners, is pleased to announce the release of Open MPI
version 1.3.2. This release is mainly a bug fix release over the v1.3.1
release, but there are a few new features.  We strongly recommend
that all users upgrade to version 1.3.2 if possible.

Version 1.3.2 can be downloaded from the main Open MPI web site or
any of its mirrors (mirrors will be updating shortly).

NOTE: The Open MPI team has uncovered a serious bug in Open MPI v1.3.0 and
v1.3.1: when running on OpenFabrics-based networks, silent data
corruption is possible in some cases. There are two workarounds to
avoid the issue -- please see the bug ticket that has been opened
about this issue for further details.

We strongly encourage all users who are using Open MPI v1.3.0 and/or
v1.3.1 on OpenFabrics-based networks to upgrade to 1.3.2.

Here is a list of changes in v1.3.2 as compared to v1.3.1:

- Fixed a potential infinite loop in the openib BTL that could occur
 in senders in some frequent-communication scenarios.  Thanks to Don
 Wood for reporting the problem.
- Add a new checksum PML variation on ob1 (main MPI point-to-point
 communication engine) to detect memory corruption in node-to-node
 messages.
- Add a new configuration option to add padding to the openib
 header so that data is aligned.
- Add a new configuration option to use an alternative checksum
 algorithm when using the checksum PML.
- Fixed a problem reported by multiple users on the mailing list that
 the LSF support would fail to find the appropriate libraries at
 run time.
- Allow empty shell designations from getpwuid().  Thanks to Sergey
 Koposov for the bug report.
- Ensure that mpirun exits with non-zero status when applications die
 due to user signal.  Thanks to Geoffroy Pignot for suggesting the
 fix.
- Ensure that MPI_VERSION / MPI_SUBVERSION match what is returned by
 MPI_GET_VERSION.  Thanks to Rob Egan for reporting the error.
- Updated MPI_*KEYVAL_CREATE functions to properly handle Fortran
 extra state.
- A variety of ob1 (main MPI point-to-point communication engine) bug
 fixes that could have caused hangs or seg faults.
- Do not install Open MPI's signal handlers in MPI_INIT if there are
 already signal handlers installed.  Thanks to Kees Verstoep for
 bringing the issue to our attention.
- Fix GM support to not seg fault in MPI_INIT.
- Various VampirTrace fixes.
- Various PLPA fixes.
- No longer create BTLs for invalid (TCP) devices.
- Various man page style and lint cleanups.
- Fix the critical OpenFabrics-related bug described in the note above.
 Open MPI now uses a much more robust memory intercept scheme that is
 quite similar to what is used by MX.  The use of "-lopenmpi-malloc"
 is no longer necessary, is deprecated, and is expected to disappear
 in a future release.  -lopenmpi-malloc will continue to work for the
 duration of the Open MPI v1.3 and v1.4 series.
- Fix some OpenFabrics shutdown errors, both regarding iWARP and SRQ.
- Allow the udapl BTL to work on Solaris platforms that support
 relaxed PCI ordering.
- Fix a problem where mpirun would sometimes use rsh/ssh to launch on
 the localhost (instead of simply forking).
- Minor SLURM stdin fixes.
- Fix Open MPI to run properly under SGE jobs.
- Scalability and latency improvements for shared memory jobs: convert
 to using one message queue instead of N queues.
- Automatically size the shared-memory area (mmap file) to better
 match what is needed; specifically, so that large-np jobs will start.
- Use fixed-length MPI predefined handles in order to provide ABI
 compatibility between Open MPI releases.
- Fix building of the posix paffinity component to properly get the
 number of processors in loosely tested environments (e.g.,
 FreeBSD).  Thanks to Steve Kargl for reporting the issue.
- Fix --with-libnuma handling in configure.  Thanks to Gus Correa for
 reporting the problem.
