[Open MPI Announce] Open MPI BOF at SC'23

2023-11-06 Thread Jeff Squyres (jsquyres) via announce
We're excited to see everyone next week in Denver, Colorado, USA at SC23!

Open MPI will be hosting our usual State of the Union Birds of a Feather
(BOF) session on Wednesday, November 15, 2023, from 12:15-1:15pm US Mountain
time.  The event is in-person only; SC does not allow us to livestream.

During the BOF, we'll present the state of Open MPI: where we are, and where
we're going.  We also use the BOF as an opportunity to respond directly to
your questions.  We only have an hour, so it's really helpful if you submit
your questions ahead of time; that way, we can include them directly in our
presentation.  We'll take questions in person, too, and will be available
after the presentation as well.  But chances are that if you have a question,
others have the same question, so please submit your questions to us so that
we can include them in the presentation!

Hope to see you in Denver!

--
Jeff Squyres

[Open MPI Announce] Open MPI 4.1.1 released

2021-04-24 Thread Jeff Squyres (jsquyres) via announce
The Open MPI community is pleased to announce the Open MPI v4.1.1 release.  
This release contains a number of bug fixes and minor improvements.

Open MPI v4.1.1 can be downloaded from the Open MPI website:

https://www.open-mpi.org/software/ompi/v4.1/

Changes in v4.1.1 compared to v4.1.0:

- Fix a number of datatype issues, including an issue with
  improper handling of partial datatypes that could lead to
  an unexpected application failure.
- Change UCX PML to not warn about MPI_Request leaks during
  MPI_FINALIZE by default.  The old behavior can be restored with
  the mca_pml_ucx_request_leak_check MCA parameter (a usage sketch
  follows this change list).
- Reverted temporary solution that worked around launch issues in
  SLURM v20.11.{0,1,2}. SchedMD encourages users to avoid these
  versions and to upgrade to v20.11.3 or newer.
- Updated PMIx to v3.2.2.
- Fixed configuration issue on Apple Silicon observed with
  Homebrew. Thanks to François-Xavier Coudert for reporting the issue.
- Disabled gcc built-in atomics by default on aarch64 platforms.
- Disabled UCX PML when UCX v1.8.0 is detected. UCX version 1.8.0 has a bug that
  may cause data corruption when its TCP transport is used in conjunction with
  the shared memory transport. UCX versions prior to v1.8.0 are not affected by
  this issue. Thanks to @ksiazekm for reporting the issue.
- Fixed detection of available UCX transports/devices to better inform PML
  prioritization.
- Fixed SLURM support to mark ORTE daemons as non-MPI tasks.
- Improved AVX detection to more accurately detect supported
  platforms.  Also improved the generated AVX code, and switched to
  using word-based MCA params for the op/avx component (vs. numeric
  big flags).
- Improved OFI compatibility support and fixed memory leaks in error
  handling paths.
- Improved HAN collectives with support for Barrier and Scatter. Thanks
  to @EmmanuelBRELLE for these changes and the relevant bug fixes.
- Fixed MPI debugger support (i.e., the MPIR_Breakpoint() symbol).
  Thanks to @louisespellacy-arm for reporting the issue.
- Fixed ORTE bug that prevented debuggers from reading MPIR_Proctable.
- Removed PML uniformity check from the UCX PML to address performance
  regression.
- Fixed the MPI_Init_thread(3) statement about the C++ bindings and
  updated references about MPI_THREAD_MULTIPLE.  Thanks to Andreas Lösel
  for bringing the outdated docs to our attention.
- Added fence_nb to Flux PMIx support to address segmentation faults.
- Ensured progress of AIO requests in the POSIX FBTL component to
  prevent exceeding the maximum number of pending requests on macOS.
- Used OPAL's multi-thread support in the orted to leverage atomic
  operations for object refcounting.
- Fixed segv when launching with static TCP ports.
- Fixed --debug-daemons mpirun CLI option.
- Fixed bug where mpirun did not honor --host in a managed job
  allocation.
- Made a hostfile/hostlist act as a filter on a managed allocation.
- Fixed a bug so that a generalized request is marked as pending once
  initiated.
- Fixed external PMIx v4.x check.
- Fixed OSHMEM build with `--enable-mem-debug`.
- Fixed a performance regression observed with older versions of GCC when
  __ATOMIC_SEQ_CST is used. Thanks to @BiplabRaut for reporting the issue.
- Fixed buffer allocation bug in the binomial tree scatter algorithm when
  non-contiguous datatypes are used. Thanks to @sadcat11 for reporting the 
issue.
- Fixed bugs related to the accumulate and atomics functionality in the
  osc/rdma component.
- Fixed race condition in MPI group operations observed with
  MPI_THREAD_MULTIPLE threading level.
- Fixed a deadlock in the TCP BTL's connection matching logic.
- Fixed pml/ob1 compilation error when CUDA support is enabled.
- Fixed a build issue with Lustre caused by unnecessary header includes.
- Fixed a build issue with the IBM LSF workload manager.
- Fixed linker error with UCX SPML.
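
The changelog entry above names the mca_pml_ucx_request_leak_check MCA
parameter.  Here is a minimal sketch of restoring the old leak warnings,
assuming the usual Open MPI conventions: the "mca_" prefix is dropped on the
mpirun command line, and environment variables take an OMPI_MCA_ prefix.
"./my_mpi_app" is a placeholder for your own executable.

  # Re-enable MPI_Request leak warnings at MPI_FINALIZE (command line):
  mpirun --mca pml_ucx_request_leak_check true -np 4 ./my_mpi_app

  # Equivalent setting via the environment:
  export OMPI_MCA_pml_ucx_request_leak_check=true
  mpirun -np 4 ./my_mpi_app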

-- 
Jeff Squyres
jsquy...@cisco.com


[Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-30 Thread Jeff Squyres (jsquyres) via announce
Thanks to all who attended today.  The slides from the presentation are now 
available here:

https://www-lb.open-mpi.org/papers/ecp-bof-2021/

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-29 Thread Jeff Squyres (jsquyres) via announce
Gentle reminder that the Open MPI State of the Union BOF webinar is TOMORROW:

Date: Tuesday, March 30, 2021
Time: 1:00pm US Eastern time
Registration URL:  
https://www.exascaleproject.org/event/ecp-community-bof-days-2021/

Anyone can attend for free, but YOU MUST REGISTER ahead of time!

Hope to see you there!

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-15 Thread Jeff Squyres (jsquyres) via announce
In conjunction with the Exascale Computing Project (ECP), George Bosilca, Jeff 
Squyres, and members of the Open MPI community will present the current status 
and future roadmap for the Open MPI project.

We typically have an Open MPI "State of the Union" BOF at the annual
Supercomputing conference, but COVID thwarted those plans in 2020.  Instead,
the ECP has graciously agreed to host "Community BOF days" for many
HPC-related projects, including Open MPI.

We therefore invite everyone to attend a free 90-minute webinar for the Open 
MPI SotU BOF:

Date: March 30, 2021
Time: 1:00pm US Eastern time
URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/

YOU MUST REGISTER TO ATTEND!

Expand the "Open MPI State of the Union" entry on that page and click on the 
registration link to sign up (registration is free).

We hope to see you there!

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v4.1.0 Released

2020-12-18 Thread Jeff Squyres (jsquyres) via announce
The Open MPI community is pleased to announce the start of the Open MPI 4.1 
release series with the release of Open MPI 4.1.0.  The 4.1 release series 
builds on the 4.0 release series and includes enhancements to OFI and UCX 
communication channels, as well as collectives performance improvements.

The Open MPI 4.1 release series can be downloaded from the Open MPI website:

https://www.open-mpi.org/software/ompi/v4.1/

Changes in 4.1.0 compared to 4.0.x:

- collectives: Add HAN and ADAPT adaptive collectives components.
  Both components are off by default and can be enabled by specifying
  "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...".
  We intend to enable both by default in Open MPI 5.0.  (A
  persistent-configuration sketch follows this change list.)
- OMPIO is now the default for MPI-IO on all filesystems, including
  Lustre (prior to this, ROMIO was the default for Lustre).  Many
  thanks to Mark Dixon for identifying MPI I/O issues and providing
  access to Lustre systems for testing.
- Updates for macOS Big Sur.  Thanks to FX Coudert for reporting this
  issue and pointing to a solution.
- Minor MPI one-sided RDMA performance improvements.
- Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
- Add AVX support for MPI collectives.
- Updated the mpirun(1) man page regarding "slots" and PE=x values.
- Fix buffer allocation for large environment variables.  Thanks to
  @zrss for reporting the issue.
- Upgrade the embedded OpenPMIx to v3.2.2.
- Take more steps towards creating fully Reproducible builds (see
  https://reproducible-builds.org/).  Thanks Bernhard M. Wiedemann for
  bringing this to our attention.
- Fix issue with extra-long values in MCA files.  Thanks to GitHub
  user @zrss for bringing the issue to our attention.
- UCX: Fix zero-sized datatype transfers.
- Fix --cpu-list for non-uniform modes.
- Fix issue in PMIx callback caused by missing memory barrier on Arm platforms.
- OFI MTL: Various bug fixes.
- Fixed issue where MPI_TYPE_CREATE_RESIZED would create a datatype
  with unexpected extent on oddly-aligned datatypes.
- collectives: Adjust default tuning thresholds for many collective
  algorithms.
- runtime: Fix a situation where the rank-by argument does not work.
- Portals4: Clean up error-handling corner cases.
- runtime: Remove the --enable-install-libpmix option, which has not
  worked since it was added.
- opal: Disable the memory patcher component on macOS.
- UCX: Allow UCX 1.8 to be used with the uct BTL.
- UCX: Replace usage of the deprecated UCX NB API with NBX.
- OMPIO: Add support for the IME file system.
- OFI/libfabric: Add support for multiple NICs.
- OFI/libfabric: Add support for Scalable Endpoints.
- OFI/libfabric: Add a BTL for one-sided support.
- OFI/libfabric: Multiple small bug fixes.
- libnbc: Add numerous performance-improving algorithms.
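
As an alternative to the mpirun flags quoted above for HAN and ADAPT, MCA
parameters can be set persistently in a parameters file.  This is a minimal
sketch assuming a stock Open MPI installation, which reads
$HOME/.openmpi/mca-params.conf at startup:

  # $HOME/.openmpi/mca-params.conf
  # Raise the priorities so the han and adapt collective components are
  # selected without per-run command-line flags.
  coll_han_priority = 100
  coll_adapt_priority = 100

With that file in place, a plain "mpirun -np 4 ./my_mpi_app" (where
./my_mpi_app is a placeholder for your own executable) picks up the new
collectives automatically.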

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] The ABCs of Open MPI (parts 1 and 2): slides + videos posted

2020-07-14 Thread Jeff Squyres (jsquyres) via announce
The slides and videos for parts 1 and 2 of the online seminar presentation "The 
ABCs of Open MPI" have been posted on both the Open MPI web site and the 
EasyBuild wiki:

https://www.open-mpi.org/video/?category=general

https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The last part of the seminar (part 3) will be held on Wednesday, August 5, 2020 
at:

- 11am US Eastern time
- 8am US Pacific time
- 3pm UTC
- 5pm CEST

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Online presentation: the ABCs of Open MPI

2020-06-14 Thread Jeff Squyres (jsquyres) via announce
In conjunction with the EasyBuild community, Ralph Castain (Intel, Open MPI, 
PMIx) and Jeff Squyres (Cisco, Open MPI) will host an online presentation about 
Open MPI on **Wednesday June 24th 2020** at:

- 11am US Eastern time
- 8am US Pacific time
- 3pm UTC
- 5pm CEST

The general scope of the presentation will be to demystify the alphabet soup of 
the Open MPI ecosystem: the user-facing frameworks and components, the 3rd 
party dependencies, etc.  More information, including topics to be covered and 
WebEx connection details, is available at:

  https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The presentation is open for anyone to join.  There is no need to register up 
front, just show up!

The session will be recorded and will be available after the fact.

Please share this information with others who may be interested in attending.

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI State of the Union BOF at SC'19

2019-10-23 Thread Jeff Squyres (jsquyres) via announce
Be sure to come to the Open MPI State of the Union BOF at SC'19 next month!

As usual, we'll discuss the current status and future roadmap for Open MPI, 
answer questions, and generally be available for discussion.

The BOF will be in the Wednesday noon hour: 
https://sc19.supercomputing.org/session/?sess=sess296

The BOF is not live streamed, but the slides will be available after SC.

We only have an hour; it can be helpful to submit your questions ahead of
time.  That way, we can be sure to answer them during the main presentation:

https://sc19.supercomputing.org/session/?sess=sess296

Hope to see you in Denver!

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v2.1.4 released

2018-08-10 Thread Jeff Squyres (jsquyres) via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 
2.1.4.  Open MPI v2.1.4 is a bug fix release, and is likely the last release in 
the v2.1.x series.  

Open MPI v2.1.4 is only recommended for those who are still running Open MPI 
v2.0.x or v2.1.x, and can be downloaded from the Open MPI web site:

https://www.open-mpi.org/software/ompi/v2.1/

Items fixed in Open MPI 2.1.4 include the following:

- Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
  still not a supported platform, but it is no longer automatically
  disabled.  See
  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
  for more information.
- Fix bug with request-based one-sided MPI operations when using the
  "rdma" component.
- Fix issue with large data structure in the TCP BTL causing problems
  in some environments.  Thanks to @lgarithm for reporting the issue.
- Minor Cygwin build fixes.
- Minor fixes for the openib BTL:
  - Support for the QLogic RoCE HCA
  - Support for the Broadcom Cumulus RoCE HCA
  - Enable support for HDR link speeds
- Fix MPI_FINALIZED hang if invoked from an attribute destructor
  during the MPI_COMM_SELF destruction in MPI_FINALIZE.  Thanks to
  @AndrewGaspar for reporting the issue.
- Java fixes:
  - Modernize Java framework detection, especially on OS X/MacOS.
Thanks to Bryce Glover for reporting and submitting the fixes.
  - Prefer "javac -h" to "javah" to support newer Java frameworks.
- Fortran fixes:
  - Use conformant dummy parameter names for Fortran bindings.  Thanks
to Themos Tsikas for reporting and submitting the fixes.
  - Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
whenever possible.  Thanks to Themos Tsikas for reporting the
issue.
  - Fix array of argv handling for the Fortran bindings of
MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
  - Make NAG Fortran compiler support more robust in configure.
- Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
  is used.  This component is simply not safe in MPI_THREAD_MULTIPLE
  scenarios, and will not be fixed in the v2.1.x series.
- Make the "external" hwloc component fail gracefully if it is tries
  to use an hwloc v2.x.y installation.  hwloc v2.x.y will not be
  supported in the Open MPI v2.1.x series.
- Fix "vader" shared memory support for messages larger than 2GB.
  Thanks to Heiko Bauke for the bug report.
- Configure fixes for external PMI directory detection.  Thanks to
  Davide Vanzo for the report.
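
For runs affected by the "pt2pt" one-sided issue above, the component can be
inspected and excluded by hand.  A minimal sketch, assuming the stock
ompi_info utility and the standard "^" exclusion syntax for MCA component
selection ("./my_mpi_app" is a placeholder for your own executable):

  # List the available one-sided (osc) components and their parameters:
  ompi_info --param osc all

  # Explicitly exclude the pt2pt one-sided component, e.g. for
  # MPI_THREAD_MULTIPLE runs:
  mpirun --mca osc ^pt2pt -np 4 ./my_mpi_app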

-- 
Jeff Squyres
jsquy...@cisco.com
