[Open MPI Announce] Open MPI 4.1.1 released

2021-04-24 Thread Jeff Squyres (jsquyres) via announce
The Open MPI community is pleased to announce the Open MPI v4.1.1 release.  
This release contains a number of bug fixes and minor improvements.

Open MPI v4.1.1 can be downloaded from the Open MPI website:

https://www.open-mpi.org/software/ompi/v4.1/

Changes in v4.1.1 compared to v4.1.0:

- Fix a number of datatype issues, including an issue with
  improper handling of partial datatypes that could lead to
  an unexpected application failure.
- Change UCX PML to not warn about MPI_Request leaks during
  MPI_FINALIZE by default.  The old behavior can be restored with
  the mca_pml_ucx_request_leak_check MCA parameter.
- Reverted temporary solution that worked around launch issues in
  SLURM v20.11.{0,1,2}. SchedMD encourages users to avoid these
  versions and to upgrade to v20.11.3 or newer.
- Updated PMIx to v3.2.2.
- Fixed configuration issue on Apple Silicon observed with
  Homebrew. Thanks to François-Xavier Coudert for reporting the issue.
- Disabled gcc built-in atomics by default on aarch64 platforms.
- Disabled UCX PML when UCX v1.8.0 is detected. UCX version 1.8.0 has a bug that
  may cause data corruption when its TCP transport is used in conjunction with
  the shared memory transport. UCX versions prior to v1.8.0 are not affected by
  this issue. Thanks to @ksiazekm for reporting the issue.
- Fixed detection of available UCX transports/devices to better inform PML
  prioritization.
- Fixed SLURM support to mark ORTE daemons as non-MPI tasks.
- Improved AVX detection to more accurately detect supported
  platforms.  Also improved the generated AVX code, and switched to
  using word-based MCA params for the op/avx component (vs. numeric
  big flags).
- Improved OFI compatibility support and fixed memory leaks in error
  handling paths.
- Improved HAN collectives with support for Barrier and Scatter. Thanks
  to @EmmanuelBRELLE for these changes and the relevant bug fixes.
- Fixed MPI debugger support (i.e., the MPIR_Breakpoint() symbol).
  Thanks to @louisespellacy-arm for reporting the issue.
- Fixed ORTE bug that prevented debuggers from reading MPIR_Proctable.
- Removed PML uniformity check from the UCX PML to address performance
  regression.
- Fixed MPI_Init_thread(3) statement about the C++ binding and updated
  references about MPI_THREAD_MULTIPLE.  Thanks to Andreas Lösel for
  bringing the outdated docs to our attention.
- Added fence_nb to Flux PMIx support to address segmentation faults.
- Ensured progress of AIO requests in the POSIX FBTL component to
  prevent exceeding the maximum number of pending requests on macOS.
- Used OPAL's multi-thread support in the orted to leverage atomic
  operations for object refcounting.
- Fixed segv when launching with static TCP ports.
- Fixed --debug-daemons mpirun CLI option.
- Fixed bug where mpirun did not honor --host in a managed job
  allocation.
- Made managed allocations filter hosts through a hostfile/hostlist.
- Fixed a bug so that a generalized request is marked as pending once
  initiated.
- Fixed external PMIx v4.x check.
- Fixed OSHMEM build with `--enable-mem-debug`.
- Fixed a performance regression observed with older versions of GCC when
  __ATOMIC_SEQ_CST is used. Thanks to @BiplabRaut for reporting the issue.
- Fixed buffer allocation bug in the binomial tree scatter algorithm when
  non-contiguous datatypes are used.  Thanks to @sadcat11 for reporting
  the issue.
- Fixed bugs related to the accumulate and atomics functionality in the
  osc/rdma component.
- Fixed race condition in MPI group operations observed with
  MPI_THREAD_MULTIPLE threading level.
- Fixed a deadlock in the TCP BTL's connection matching logic.
- Fixed pml/ob1 compilation error when CUDA support is enabled.
- Fixed a build issue with Lustre caused by unnecessary header includes.
- Fixed a build issue with IMB LSF workload manager.
- Fixed linker error with UCX SPML.
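As a quick illustration (not part of the official notes), restoring the old
UCX request-leak warnings might look like the following; the command-line
spelling of the parameter and the application name are assumptions:

```shell
# Hedged sketch: re-enable the pre-4.1.1 MPI_Request leak warnings in the
# UCX PML.  Assumes the parameter is addressed as pml_ucx_request_leak_check
# on the mpirun command line; "./my_mpi_app" is a placeholder application.
LEAK_CHECK_ARGS="--mca pml_ucx_request_leak_check 1"
echo "mpirun ${LEAK_CHECK_ARGS} -np 4 ./my_mpi_app"
```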

-- 
Jeff Squyres
jsquy...@cisco.com

___
announce mailing list
announce@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/announce

[Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-30 Thread Jeff Squyres (jsquyres) via announce
Thanks to all who attended today.  The slides from the presentation are now 
available here:

https://www-lb.open-mpi.org/papers/ecp-bof-2021/



> On Mar 29, 2021, at 2:52 PM, Jeff Squyres (jsquyres) via announce 
>  wrote:
> 
> Gentle reminder that the Open MPI State of the Union BOF webinar is TOMORROW:
> 
> Date: Tuesday, March 30, 2021
> Time: 1:00pm US Eastern time
> Registration URL: https://www.exascaleproject.org/event/ecp-community-bof-days-2021/
> 
> Anyone can attend for free, but YOU MUST REGISTER ahead of time!
> 
> Hope to see you there!
> 
> 
>> On Mar 15, 2021, at 1:06 PM, Jeff Squyres (jsquyres)  
>> wrote:
>> 
>> In conjunction with the Exascale Computing Project (ECP), George Bosilca, 
>> Jeff Squyres, and members of the Open MPI community will present the current 
>> status and future roadmap for the Open MPI project.
>> 
>> We typically have an Open MPI "State of the Union" BOF at the annual 
>> Supercomputing conference, but COVID thwarted those plans in 2020.  Instead, 
>> The ECP has graciously agreed to host "Community BOF days" for many 
>> HPC-related projects, including Open MPI.
>> 
>> We therefore invite everyone to attend a free 90-minute webinar for the Open 
>> MPI SotU BOF:
>> 
>> Date: March 30, 2021
>> Time: 1:00pm US Eastern time
>> URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/
>> 
>> YOU MUST REGISTER TO ATTEND!
>> 
>> Expand the "Open MPI State of the Union" entry on that page and click on the 
>> registration link to sign up (registration is free).
>> 
>> We hope to see you there!


-- 
Jeff Squyres
jsquy...@cisco.com



Re: [Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-29 Thread Jeff Squyres (jsquyres) via announce
Gentle reminder that the Open MPI State of the Union BOF webinar is TOMORROW:

Date: Tuesday, March 30, 2021
Time: 1:00pm US Eastern time
Registration URL: https://www.exascaleproject.org/event/ecp-community-bof-days-2021/

Anyone can attend for free, but YOU MUST REGISTER ahead of time!

Hope to see you there!


> On Mar 15, 2021, at 1:06 PM, Jeff Squyres (jsquyres)  
> wrote:
> 
> In conjunction with the Exascale Computing Project (ECP), George Bosilca, 
> Jeff Squyres, and members of the Open MPI community will present the current 
> status and future roadmap for the Open MPI project.
> 
> We typically have an Open MPI "State of the Union" BOF at the annual 
> Supercomputing conference, but COVID thwarted those plans in 2020.  Instead, 
> The ECP has graciously agreed to host "Community BOF days" for many 
> HPC-related projects, including Open MPI.
> 
> We therefore invite everyone to attend a free 90-minute webinar for the Open 
> MPI SotU BOF:
> 
> Date: March 30, 2021
> Time: 1:00pm US Eastern time
> URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/
> 
> YOU MUST REGISTER TO ATTEND!
> 
> Expand the "Open MPI State of the Union" entry on that page and click on the 
> registration link to sign up (registration is free).
> 
> We hope to see you there!
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 


-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-15 Thread Jeff Squyres (jsquyres) via announce
In conjunction with the Exascale Computing Project (ECP), George Bosilca, Jeff 
Squyres, and members of the Open MPI community will present the current status 
and future roadmap for the Open MPI project.

We typically have an Open MPI "State of the Union" BOF at the annual 
Supercomputing conference, but COVID thwarted those plans in 2020.  Instead, 
The ECP has graciously agreed to host "Community BOF days" for many HPC-related 
projects, including Open MPI.

We therefore invite everyone to attend a free 90-minute webinar for the Open 
MPI SotU BOF:

Date: March 30, 2021
Time: 1:00pm US Eastern time
URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/

YOU MUST REGISTER TO ATTEND!

Expand the "Open MPI State of the Union" entry on that page and click on the 
registration link to sign up (registration is free).

We hope to see you there!

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v4.1.0 Released

2020-12-18 Thread Jeff Squyres (jsquyres) via announce
The Open MPI community is pleased to announce the start of the Open MPI 4.1 
release series with the release of Open MPI 4.1.0.  The 4.1 release series 
builds on the 4.0 release series and includes enhancements to OFI and UCX 
communication channels, as well as collectives performance improvements.

The Open MPI 4.1 release series can be downloaded from the Open MPI website:

https://www.open-mpi.org/software/ompi/v4.1/

Changes in 4.1.0 compared to 4.0.x:

- collectives: Add HAN and ADAPT adaptive collectives components.
  Both components are off by default and can be enabled by specifying
  "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...".
  We intend to enable both by default in Open MPI 5.0.
- OMPIO is now the default for MPI-IO on all filesystems, including
  Lustre (prior to this, ROMIO was the default for Lustre).  Many
  thanks to Mark Dixon for identifying MPI I/O issues and providing
  access to Lustre systems for testing.
- Updates for macOS Big Sur.  Thanks to FX Coudert for reporting this
  issue and pointing to a solution.
- Minor MPI one-sided RDMA performance improvements.
- Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
- Add AVX support for MPI collectives.
- Updates to mpirun(1) about "slots" and PE=x values.
- Fix buffer allocation for large environment variables.  Thanks to
  @zrss for reporting the issue.
- Upgrade the embedded OpenPMIx to v3.2.2.
- Take more steps towards creating fully Reproducible builds (see
  https://reproducible-builds.org/).  Thanks Bernhard M. Wiedemann for
  bringing this to our attention.
- Fix issue with extra-long values in MCA files.  Thanks to GitHub
  user @zrss for bringing the issue to our attention.
- UCX: Fix zero-sized datatype transfers.
- Fix --cpu-list for non-uniform modes.
- Fix issue in PMIx callback caused by missing memory barrier on Arm platforms.
- OFI MTL: Various bug fixes.
- Fixed issue where MPI_TYPE_CREATE_RESIZED would create a datatype
  with unexpected extent on oddly-aligned datatypes.
- collectives: Adjust default tuning thresholds for many collective
  algorithms.
- runtime: Fix a situation where the rank-by argument does not work.
- Portals4: Clean up error-handling corner cases.
- runtime: Remove the --enable-install-libpmix option, which has not
  worked since it was added.
- opal: Disable the memory patcher component on macOS.
- UCX: Allow UCX 1.8 to be used with the uct BTL.
- UCX: Replace usage of the deprecated NB API of UCX with NBX.
- OMPIO: Add support for the IME file system.
- OFI/libfabric: Added support for multiple NICs.
- OFI/libfabric: Added support for Scalable Endpoints.
- OFI/libfabric: Added a BTL for one-sided support.
- OFI/libfabric: Fixed multiple small bugs.
- libnbc: Add numerous performance-improving algorithms.
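As a sketch of how the new collective components described above could be
tried out: the mpirun flags come straight from the first item in the list,
while the ompi_info invocation for inspecting component parameters is an
assumption about a typical Open MPI installation, and "./my_mpi_app" is a
placeholder:

```shell
# Hedged sketch: enable the HAN and ADAPT collective components, which are
# off by default in 4.1.0 per the release notes above.
COLL_ARGS="--mca coll_adapt_priority 100 --mca coll_han_priority 100"
echo "mpirun ${COLL_ARGS} -np 8 ./my_mpi_app"

# Assumed ompi_info invocation for listing the HAN component's parameters:
echo "ompi_info --param coll han --level 9"
```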

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] The ABCs of Open MPI (parts 1 and 2): slides + videos posted

2020-07-14 Thread Jeff Squyres (jsquyres) via announce
The slides and videos for parts 1 and 2 of the online seminar presentation "The 
ABCs of Open MPI" have been posted on both the Open MPI web site and the 
EasyBuild wiki:

https://www.open-mpi.org/video/?category=general

https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The last part of the seminar (part 3) will be held on Wednesday, August 5, 2020 
at:

- 11am US Eastern time
- 8am US Pacific time
- 3pm UTC
- 5pm CEST

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [Open MPI Announce] Online presentation: the ABCs of Open MPI

2020-07-06 Thread Jeff Squyres (jsquyres) via announce
Gentle reminder that part 2 of "The ABCs of Open MPI" will be this Wednesday, 8 
July, 2020 at:

- 8am US Pacific time
- 11am US Eastern time
- 3pm UTC
- 5pm CEST

Ralph and I will be continuing our discussion and explanations of the Open MPI 
ecosystem.  The Webex link to join is on the event wiki page:


https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The wiki page also has links to the slides and video from the first session.
We've also linked the slides and video on the main Open MPI web site:
https://www.open-mpi.org/video/?category=general#abcs-of-open-mpi-part-1

Additionally, Ralph and I decided that we have so much material that we're 
actually extending to have a *third* session on Wednesday August 5th, 2020 (in 
the same time slot).

Please share this information with others who may be interested in attending 
the 2nd and/or 3rd sessions.



On Jun 22, 2020, at 12:10 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

After assembling the content for this online presentation (based on questions 
and comments from the user community), we have so much material to cover that 
we're going to split it into two sessions.

The first part will be **this Wednesday (24 June 2020)** at:

- 8am US Pacific time
- 11am US Eastern time
- 3pm UTC
- 5pm CEST

The second part will be two weeks later, on Wednesday, 8 July, 2020, in the 
same time slot.

   
https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

Anyone is free to join either / both parts.

Hope to see you this Wednesday!




On Jun 14, 2020, at 2:05 PM, Jeff Squyres (jsquyres) via announce
<announce@lists.open-mpi.org> wrote:

In conjunction with the EasyBuild community, Ralph Castain (Intel, Open MPI, 
PMIx) and Jeff Squyres (Cisco, Open MPI) will host an online presentation about 
Open MPI on **Wednesday June 24th 2020** at:

- 11am US Eastern time
- 8am US Pacific time
- 3pm UTC
- 5pm CEST

The general scope of the presentation will be to demystify the alphabet soup of 
the Open MPI ecosystem: the user-facing frameworks and components, the 3rd 
party dependencies, etc.  More information, including topics to be covered and 
WebEx connection details, is available at:

https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The presentation is open for anyone to join.  There is no need to register up 
front, just show up!

The session will be recorded and will be available after the fact.

Please share this information with others who may be interested in attending.


--
Jeff Squyres
jsquy...@cisco.com


Re: [Open MPI Announce] Online presentation: the ABCs of Open MPI

2020-06-22 Thread Jeff Squyres (jsquyres) via announce
After assembling the content for this online presentation (based on questions 
and comments from the user community), we have so much material to cover that 
we're going to split it into two sessions.

The first part will be **this Wednesday (24 June 2020)** at:

- 8am US Pacific time
- 11am US Eastern time
- 3pm UTC
- 5pm CEST

The second part will be two weeks later, on Wednesday, 8 July, 2020, in the 
same time slot.


https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

Anyone is free to join either / both parts.

Hope to see you this Wednesday!




> On Jun 14, 2020, at 2:05 PM, Jeff Squyres (jsquyres) via announce 
>  wrote:
> 
> In conjunction with the EasyBuild community, Ralph Castain (Intel, Open MPI, 
> PMIx) and Jeff Squyres (Cisco, Open MPI) will host an online presentation 
> about Open MPI on **Wednesday June 24th 2020** at:
> 
> - 11am US Eastern time
> - 8am US Pacific time
> - 3pm UTC
> - 5pm CEST
> 
> The general scope of the presentation will be to demystify the alphabet soup 
> of the Open MPI ecosystem: the user-facing frameworks and components, the 3rd 
> party dependencies, etc.  More information, including topics to be covered 
> and WebEx connection details, is available at:
> 
>  
> https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI
> 
> The presentation is open for anyone to join.  There is no need to register up 
> front, just show up!
> 
> The session will be recorded and will be available after the fact.
> 
> Please share this information with others who may be interested in attending.
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 


-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Online presentation: the ABCs of Open MPI

2020-06-14 Thread Jeff Squyres (jsquyres) via announce
In conjunction with the EasyBuild community, Ralph Castain (Intel, Open MPI, 
PMIx) and Jeff Squyres (Cisco, Open MPI) will host an online presentation about 
Open MPI on **Wednesday June 24th 2020** at:

- 11am US Eastern time
- 8am US Pacific time
- 3pm UTC
- 5pm CEST

The general scope of the presentation will be to demystify the alphabet soup of 
the Open MPI ecosystem: the user-facing frameworks and components, the 3rd 
party dependencies, etc.  More information, including topics to be covered and 
WebEx connection details, is available at:

  
https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The presentation is open for anyone to join.  There is no need to register up 
front, just show up!

The session will be recorded and will be available after the fact.

Please share this information with others who may be interested in attending.

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI State of the Union BOF at SC'19

2019-10-23 Thread Jeff Squyres (jsquyres) via announce
Be sure to come to the Open MPI State of the Union BOF at SC'19 next month!

As usual, we'll discuss the current status and future roadmap for Open MPI, 
answer questions, and generally be available for discussion.

The BOF will be in the Wednesday noon hour: 
https://sc19.supercomputing.org/session/?sess=sess296

The BOF is not live streamed, but the slides will be available after SC.

We only have an hour; it can be helpful to submit your questions ahead of
time.  That way, we can be sure to answer them during the main presentation:

https://sc19.supercomputing.org/session/?sess=sess296

Hope to see you in Denver!

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v2.1.6

2019-01-14 Thread Jeff Squyres (jsquyres) via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 
2.1.6.  

Open MPI v2.1.6 is expected to be the last release in the v2.1.x series.  

Although we assume that many users have already upgraded to the newer series 
(3.0.x, v3.1.x, or v4.0.x), this release is intended to fix a few remaining 
bugs in the v2.1.x series before closing off the series for good (e.g., for 
those who have locked their configuration on to the Open MPI v2.1.x series).

Open MPI v2.1.6 is only recommended for those who are still running Open MPI 
v2.0.x or v2.1.x, and can be downloaded from the Open MPI web site:
   
https://www.open-mpi.org/software/ompi/v2.1/

Open MPI v2.1.6 only fixed a small number of issues:

- Update the openib BTL to handle a newer flavor of the
  ibv_exp_query() API.  Thanks to Angel Beltre (and others) for
  reporting the issue.
- Fix a segv when specifying a username in a hostfile.  Thanks to
  StackOverflow user @derangedhk417 for reporting the issue.
- Work around Oracle compiler v5.15 bug (which resulted in a failure
  to compile Open MPI source code).
- Disable CUDA async receive support in the openib BTL by default
  because it is broken for sizes larger than the GPUDirect RDMA
  limit.  User can set the MCA variable btl_openib_cuda_async_recv to
  true to re-enable CUDA async receive support.
- Various atomic and shared memory consistency bug fixes, especially
  affecting the vader (shared memory) BTL and PMIx.
- Add openib BTL support for BCM57XXX and BCM58XXX Broadcom HCAs.
- Fix segv in oob/ud component.  Thanks to Balázs Hajgató for
  reporting the issue.
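As an illustrative sketch (not from the official notes), re-enabling CUDA
async receive support described above might look like the following; the
mpirun spelling of the MCA variable and the application name are
assumptions:

```shell
# Hedged sketch: re-enable CUDA async receive support in the openib BTL,
# per the item above.  Only appropriate when transfers stay under the
# GPUDirect RDMA limit; "./my_cuda_app" is a placeholder application.
CUDA_ARGS="--mca btl_openib_cuda_async_recv true"
echo "mpirun ${CUDA_ARGS} -np 2 ./my_cuda_app"
```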

-- 
Jeff Squyres
jsquy...@cisco.com


[Open MPI Announce] Open MPI SC'18 State of the Union BOF slides

2018-11-16 Thread Jeff Squyres (jsquyres) via announce
Thanks to all who came to the Open MPI SotU BOF at SC'18 in Dallas, TX, USA 
this week!  It was great talking with you all.

Here are the slides that we presented:

https://www.open-mpi.org/papers/sc-2018/

Please feel free to ask any followup questions on the users or devel lists.

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v2.1.5

2018-08-17 Thread Jeff Squyres (jsquyres) via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 
2.1.5.  

Open MPI v2.1.5 is a critical bug fix release, and is likely the last release 
in the v2.1.x series (we released v2.1.4 a week ago, and thought that would be 
the end of the v2.1.x series, but a serious shared memory bug was found 
immediately after the v2.1.4 release).

Open MPI v2.1.5 is only recommended for those who are still running Open MPI 
v2.0.x or v2.1.x, and can be downloaded from the Open MPI web site:

   https://www.open-mpi.org/software/ompi/v2.1/

Open MPI v2.1.5 only fixed two issues:

- A subtle race condition bug was discovered in the "vader" BTL
  (shared memory communications) that, in rare instances, can cause
  MPI processes to crash or incorrectly classify (or effectively drop)
  an MPI message sent via shared memory.  If you are using the "ob1"
  PML with "vader" for shared memory communication (note that vader is
  the default for shared memory communication with ob1), you need to
  upgrade to v2.1.5 to fix this issue.  You may also upgrade to the
  following versions to fix this issue:
  - Open MPI v3.0.1 (released March, 2018) or later in the v3.0.x
series
  - Open MPI v3.1.2 (expected end of August, 2018) or later
- A link issue was fixed when the UCX library was not located in the
  linker-default search paths.

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v2.1.4 released

2018-08-10 Thread Jeff Squyres (jsquyres) via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 
2.1.4.  Open MPI v2.1.4 is a bug fix release, and is likely the last release in 
the v2.1.x series.  

Open MPI v2.1.4 is only recommended for those who are still running Open MPI 
v2.0.x or v2.1.x, and can be downloaded from the Open MPI web site:

https://www.open-mpi.org/software/ompi/v2.1/

Items fixed in Open MPI 2.1.4 include the following:

- Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
  still not a supported platform, but it is no longer automatically
  disabled.  See
  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
  for more information.
- Fix bug with request-based one-sided MPI operations when using the
  "rdma" component.
- Fix issue with large data structure in the TCP BTL causing problems
  in some environments.  Thanks to @lgarithm for reporting the issue.
- Minor Cygwin build fixes.
- Minor fixes for the openib BTL:
  - Support for the QLogic RoCE HCA
  - Support for the Broadcom Cumulus RoCE HCA
  - Enable support for HDR link speeds
- Fix MPI_FINALIZED hang if invoked from an attribute destructor
  during the MPI_COMM_SELF destruction in MPI_FINALIZE.  Thanks to
  @AndrewGaspar for reporting the issue.
- Java fixes:
  - Modernize Java framework detection, especially on OS X/MacOS.
Thanks to Bryce Glover for reporting and submitting the fixes.
  - Prefer "javac -h" to "javah" to support newer Java frameworks.
- Fortran fixes:
  - Use conformant dummy parameter names for Fortran bindings.  Thanks
to Themos Tsikas for reporting and submitting the fixes.
  - Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
whenever possible.  Thanks to Themos Tsikas for reporting the
issue.
  - Fix array of argv handling for the Fortran bindings of
MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
  - Make NAG Fortran compiler support more robust in configure.
- Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
  is used.  This component is simply not safe in MPI_THREAD_MULTIPLE
  scenarios, and will not be fixed in the v2.1.x series.
- Make the "external" hwloc component fail gracefully if it tries
  to use an hwloc v2.x.y installation.  hwloc v2.x.y will not be
  supported in the Open MPI v2.1.x series.
- Fix "vader" shared memory support for messages larger than 2GB.
  Thanks to Heiko Bauke for the bug report.
- Configure fixes for external PMI directory detection.  Thanks to
  Davide Vanzo for the report.

-- 
Jeff Squyres
jsquy...@cisco.com
