[Open MPI Announce] Open MPI BOF at SC'23

2023-11-06 Thread Jeff Squyres (jsquyres) via announce
We're excited to see everyone next week in Denver, Colorado, USA at SC23!

Open MPI will be hosting our usual State of the Union Birds of a Feather (BOF)
session on Wednesday, November 15, 2023, from 12:15-1:15pm US Mountain time.
The event is in-person only; SC does not allow us to livestream.

During the BOF, we'll present the state of Open MPI: where we are, and where
we're going.  We also use the BOF as an opportunity to respond directly to your
questions.  We only have an hour, so it's really helpful if you submit your
questions ahead of time so that we can include them directly in our
presentation.  We'll obviously take questions in person, too, and will be
available after the presentation as well, but chances are: if you have a
question, others have the same question.  So submit your questions to us ahead
of time so that we can include them in the presentation!

Hope to see you in Denver!

--
Jeff Squyres

[Open MPI Announce] Open MPI 4.1.1 released

2021-04-24 Thread Jeff Squyres (jsquyres) via announce
The Open MPI community is pleased to announce the Open MPI v4.1.1 release.  
This release contains a number of bug fixes and minor improvements.

Open MPI v4.1.1 can be downloaded from the Open MPI website:

https://www.open-mpi.org/software/ompi/v4.1/

Changes in v4.1.1 compared to v4.1.0:

- Fix a number of datatype issues, including an issue with
  improper handling of partial datatypes that could lead to
  an unexpected application failure.
- Change UCX PML to not warn about MPI_Request leaks during
  MPI_FINALIZE by default.  The old behavior can be restored with
  the mca_pml_ucx_request_leak_check MCA parameter.
- Reverted temporary solution that worked around launch issues in
  SLURM v20.11.{0,1,2}. SchedMD encourages users to avoid these
  versions and to upgrade to v20.11.3 or newer.
- Updated PMIx to v3.2.2.
- Fixed configuration issue on Apple Silicon observed with
  Homebrew. Thanks to François-Xavier Coudert for reporting the issue.
- Disabled gcc built-in atomics by default on aarch64 platforms.
- Disabled UCX PML when UCX v1.8.0 is detected. UCX version 1.8.0 has a bug that
  may cause data corruption when its TCP transport is used in conjunction with
  the shared memory transport. UCX versions prior to v1.8.0 are not affected by
  this issue. Thanks to @ksiazekm for reporting the issue.
- Fixed detection of available UCX transports/devices to better inform PML
  prioritization.
- Fixed SLURM support to mark ORTE daemons as non-MPI tasks.
- Improved AVX detection to more accurately detect supported
  platforms.  Also improved the generated AVX code, and switched to
  using word-based MCA params for the op/avx component (vs. numeric
  big flags).
- Improved OFI compatibility support and fixed memory leaks in error
  handling paths.
- Improved HAN collectives with support for Barrier and Scatter. Thanks
  to @EmmanuelBRELLE for these changes and the relevant bug fixes.
- Fixed MPI debugger support (i.e., the MPIR_Breakpoint() symbol).
  Thanks to @louisespellacy-arm for reporting the issue.
- Fixed ORTE bug that prevented debuggers from reading MPIR_Proctable.
- Removed PML uniformity check from the UCX PML to address performance
  regression.
- Fixed MPI_Init_thread(3) statement about the C++ bindings and updated
  references about MPI_THREAD_MULTIPLE.  Thanks to Andreas Lösel for
  bringing the outdated docs to our attention.
- Added fence_nb to Flux PMIx support to address segmentation faults.
- Ensured progress of AIO requests in the POSIX FBTL component to
  prevent exceeding maximum number of pending requests on MacOS.
- Used OPAL's multi-thread support in the orted to leverage atomic
  operations for object refcounting.
- Fixed segv when launching with static TCP ports.
- Fixed --debug-daemons mpirun CLI option.
- Fixed bug where mpirun did not honor --host in a managed job
  allocation.
- Made a managed allocation filter a hostfile/hostlist.
- Fixed bug to mark a generalized request as pending once initiated.
- Fixed external PMIx v4.x check.
- Fixed OSHMEM build with `--enable-mem-debug`.
- Fixed a performance regression observed with older versions of GCC when
  __ATOMIC_SEQ_CST is used. Thanks to @BiplabRaut for reporting the issue.
- Fixed buffer allocation bug in the binomial tree scatter algorithm when
  non-contiguous datatypes are used.  Thanks to @sadcat11 for reporting the
  issue.
- Fixed bugs related to the accumulate and atomics functionality in the
  osc/rdma component.
- Fixed race condition in MPI group operations observed with the
  MPI_THREAD_MULTIPLE threading level (see the example sketch after this
  list).
- Fixed a deadlock in the TCP BTL's connection matching logic.
- Fixed pml/ob1 compilation error when CUDA support is enabled.
- Fixed a build issue with Lustre caused by unnecessary header includes.
- Fixed a build issue with IMB LSF workload manager.
- Fixed linker error with UCX SPML.
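
For readers who want to exercise the MPI_THREAD_MULTIPLE fix noted above, here
is a minimal, illustrative C sketch (ours, not part of the release) showing how
an application requests that threading level and verifies what the library
actually provides:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided;

        /* Ask for full multi-threading support; the library may return a
           lower level, so always check "provided". */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n",
                    provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ... spawn threads that make concurrent MPI calls ... */

        MPI_Finalize();
        return 0;
    }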

-- 
Jeff Squyres
jsquy...@cisco.com


[Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-30 Thread Jeff Squyres (jsquyres) via announce
Thanks to all who attended today.  The slides from the presentation are now 
available here:

https://www-lb.open-mpi.org/papers/ecp-bof-2021/



> On Mar 29, 2021, at 2:52 PM, Jeff Squyres (jsquyres) via announce 
>  wrote:
> 
> Gentle reminder that the Open MPI State of the Union BOF webinar is TOMORROW:
> 
>Date: Tuesday, March 30, 2021
>Time: 1:00pm US Eastern time
>Registration URL:  
> https://www.exascaleproject.org/event/ecp-community-bof-days-2021/
> 
> Anyone can attend for free, but YOU MUST REGISTER ahead of time!
> 
> Hope to see you there!
> 
> 
>> On Mar 15, 2021, at 1:06 PM, Jeff Squyres (jsquyres)  
>> wrote:
>> 
>> In conjunction with the Exascale Computing Project (ECP), George Bosilca, 
>> Jeff Squyres, and members of the Open MPI community will present the current 
>> status and future roadmap for the Open MPI project.
>> 
>> We typically have an Open MPI "State of the Union" BOF at the annual 
>> Supercomputing conference, but COVID thwarted those plans in 2020.  Instead, 
>> The ECP has graciously agreed to host "Community BOF days" for many 
>> HPC-related projects, including Open MPI.
>> 
>> We therefore invite everyone to attend a free 90-minute webinar for the Open 
>> MPI SotU BOF:
>> 
>> Date: March 30, 2021
>> Time: 1:00pm US Eastern time
>> URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/
>> 
>> YOU MUST REGISTER TO ATTEND!
>> 
>> Expand the "Open MPI State of the Union" entry on that page and click on the 
>> registration link to sign up (registration is free).
>> 
>> We hope to see you there!


-- 
Jeff Squyres
jsquy...@cisco.com



Re: [Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-29 Thread Jeff Squyres (jsquyres) via announce
Gentle reminder that the Open MPI State of the Union BOF webinar is TOMORROW:

Date: Tuesday, March 30, 2021
Time: 1:00pm US Eastern time
Registration URL:  
https://www.exascaleproject.org/event/ecp-community-bof-days-2021/

Anyone can attend for free, but YOU MUST REGISTER ahead of time!

Hope to see you there!


> On Mar 15, 2021, at 1:06 PM, Jeff Squyres (jsquyres)  
> wrote:
> 
> In conjunction with the Exascale Computing Project (ECP), George Bosilca, 
> Jeff Squyres, and members of the Open MPI community will present the current 
> status and future roadmap for the Open MPI project.
> 
> We typically have an Open MPI "State of the Union" BOF at the annual 
> Supercomputing conference, but COVID thwarted those plans in 2020.  Instead, 
> The ECP has graciously agreed to host "Community BOF days" for many 
> HPC-related projects, including Open MPI.
> 
> We therefore invite everyone to attend a free 90-minute webinar for the Open 
> MPI SotU BOF:
> 
> Date: March 30, 2021
> Time: 1:00pm US Eastern time
> URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/
> 
> YOU MUST REGISTER TO ATTEND!
> 
> Expand the "Open MPI State of the Union" entry on that page and click on the 
> registration link to sign up (registration is free).
> 
> We hope to see you there!
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> 


-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI State of the Union BOF (webinar)

2021-03-15 Thread Jeff Squyres (jsquyres) via announce
In conjunction with the Exascale Computing Project (ECP), George Bosilca, Jeff 
Squyres, and members of the Open MPI community will present the current status 
and future roadmap for the Open MPI project.

We typically have an Open MPI "State of the Union" BOF at the annual 
Supercomputing conference, but COVID thwarted those plans in 2020.  Instead, 
The ECP has graciously agreed to host "Community BOF days" for many HPC-related 
projects, including Open MPI.

We therefore invite everyone to attend a free 90-minute webinar for the Open 
MPI SotU BOF:

Date: March 30, 2021
Time: 1:00pm US Eastern time
URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/

YOU MUST REGISTER TO ATTEND!

Expand the "Open MPI State of the Union" entry on that page and click on the 
registration link to sign up (registration is free).

We hope to see you there!

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v4.1.0 Released

2020-12-18 Thread Jeff Squyres (jsquyres) via announce
The Open MPI community is pleased to announce the start of the Open MPI 4.1 
release series with the release of Open MPI 4.1.0.  The 4.1 release series 
builds on the 4.0 release series and includes enhancements to OFI and UCX 
communication channels, as well as collectives performance improvements.

The Open MPI 4.1 release series can be downloaded from the Open MPI website:

https://www.open-mpi.org/software/ompi/v4.1/

Changes in 4.1.0 compared to 4.0.x:

- collectives: Add HAN and ADAPT adaptive collectives components.
  Both components are off by default and can be enabled by specifying
  "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...".
  We intend to enable both by default in Open MPI 5.0.
- OMPIO is now the default for MPI-IO on all filesystems, including
  Lustre (prior to this, ROMIO was the default for Lustre).  Many
  thanks to Mark Dixon for identifying MPI I/O issues and providing
  access to Lustre systems for testing.
- Updates for macOS Big Sur.  Thanks to FX Coudert for reporting this
  issue and pointing to a solution.
- Minor MPI one-sided RDMA performance improvements.
- Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
- Add AVX support for MPI collectives.
- Updates to mpirun(1) about "slots" and PE=x values.
- Fix buffer allocation for large environment variables.  Thanks to
  @zrss for reporting the issue.
- Upgrade the embedded OpenPMIx to v3.2.2.
- Take more steps towards creating fully reproducible builds (see
  https://reproducible-builds.org/).  Thanks to Bernhard M. Wiedemann for
  bringing this to our attention.
- Fix issue with extra-long values in MCA files.  Thanks to GitHub
  user @zrss for bringing the issue to our attention.
- UCX: Fix zero-sized datatype transfers.
- Fix --cpu-list for non-uniform modes.
- Fix issue in PMIx callback caused by missing memory barrier on Arm platforms.
- OFI MTL: Various bug fixes.
- Fixed issue where MPI_TYPE_CREATE_RESIZED would create a datatype
  with unexpected extent on oddly-aligned datatypes (see the example
  sketch after this list).
- collectives: Adjust default tuning thresholds for many collective
  algorithms
- runtime: fix situation where rank-by argument does not work
- Portals4: Clean up error handling corner cases
- runtime: Remove --enable-install-libpmix option, which has not
  worked since it was added
- opal: Disable memory patcher component on MacOS
- UCX: Allow UCX 1.8 to be used with the btl uct
- UCX: Replace usage of the deprecated NB API of UCX with NBX
- OMPIO: Add support for the IME file system
- OFI/libfabric: Added support for multiple NICs
- OFI/libfabric: Added support for Scalable Endpoints
- OFI/libfabric: Added btl for one-sided support
- OFI/libfabric: Multiple small bugfixes
- libnbc: Adding numerous performance-improving algorithms
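
As context for the MPI_TYPE_CREATE_RESIZED item above, here is a small,
illustrative C sketch (ours, not part of the release) that resizes a datatype
and then checks the resulting extent, which is the value that could previously
be wrong for oddly-aligned types:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Datatype resized;
        MPI_Aint lb, extent;

        MPI_Init(&argc, &argv);

        /* Resize MPI_INT to an arbitrary 12-byte extent, e.g. to stride
           through elements embedded in a larger structure. */
        MPI_Type_create_resized(MPI_INT, 0, 12, &resized);
        MPI_Type_commit(&resized);

        /* The reported extent should match what was requested. */
        MPI_Type_get_extent(resized, &lb, &extent);
        printf("lb = %ld, extent = %ld\n", (long)lb, (long)extent);

        MPI_Type_free(&resized);
        MPI_Finalize();
        return 0;
    }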

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] The ABCs of Open MPI (parts 1 and 2): slides + videos posted

2020-07-14 Thread Jeff Squyres (jsquyres) via announce
The slides and videos for parts 1 and 2 of the online seminar presentation "The 
ABCs of Open MPI" have been posted on both the Open MPI web site and the 
EasyBuild wiki:

https://www.open-mpi.org/video/?category=general

https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The last part of the seminar (part 3) will be held on Wednesday, August 5, 2020 
at:

- 11am US Eastern time
- 8am US Pacific time
- 3pm UTC
- 5pm CEST

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Online presentation: the ABCs of Open MPI

2020-06-14 Thread Jeff Squyres (jsquyres) via announce
In conjunction with the EasyBuild community, Ralph Castain (Intel, Open MPI, 
PMIx) and Jeff Squyres (Cisco, Open MPI) will host an online presentation about 
Open MPI on **Wednesday June 24th 2020** at:

- 11am US Eastern time
- 8am US Pacific time
- 3pm UTC
- 5pm CEST

The general scope of the presentation will be to demystify the alphabet soup of 
the Open MPI ecosystem: the user-facing frameworks and components, the 3rd 
party dependencies, etc.  More information, including topics to be covered and 
WebEx connection details, is available at:

  
https://github.com/easybuilders/easybuild/wiki/EasyBuild-Tech-Talks-I:-Open-MPI

The presentation is open for anyone to join.  There is no need to register up 
front, just show up!

The session will be recorded and will be available after the fact.

Please share this information with others who may be interested in attending.

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI State of the Union BOF at SC'19

2019-10-23 Thread Jeff Squyres (jsquyres) via announce
Be sure to come to the Open MPI State of the Union BOF at SC'19 next month!

As usual, we'll discuss the current status and future roadmap for Open MPI, 
answer questions, and generally be available for discussion.

The BOF will be in the Wednesday noon hour: 
https://sc19.supercomputing.org/session/?sess=sess296

The BOF is not live streamed, but the slides will be available after SC.

We only have an hour; it can be helpful to submit your questions ahead of time. 
 That way, we can be sure to answer them during the main presentation:

https://sc19.supercomputing.org/session/?sess=sess296

Hope to see you in Denver!

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v2.1.4 released

2018-08-10 Thread Jeff Squyres (jsquyres) via announce
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 
2.1.4.  Open MPI v2.1.4 is a bug fix release, and is likely the last release in 
the v2.1.x series.  

Open MPI v2.1.4 is only recommended for those who are still running Open MPI 
v2.0.x or v2.1.x, and can be downloaded from the Open MPI web site:

https://www.open-mpi.org/software/ompi/v2.1/

Items fixed in Open MPI 2.1.4 include the following:

- Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
  still not a supported platform, but it is no longer automatically
  disabled.  See
  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
  for more information.
- Fix bug with request-based one-sided MPI operations when using the
  "rdma" component.
- Fix issue with large data structure in the TCP BTL causing problems
  in some environments.  Thanks to @lgarithm for reporting the issue.
- Minor Cygwin build fixes.
- Minor fixes for the openib BTL:
  - Support for the QLogic RoCE HCA
  - Support for the Broadcom Cumulus RoCE HCA
  - Enable support for HDR link speeds
- Fix MPI_FINALIZED hang if invoked from an attribute destructor
  during the MPI_COMM_SELF destruction in MPI_FINALIZE (see the example
  sketch after this list).  Thanks to @AndrewGaspar for reporting the issue.
- Java fixes:
  - Modernize Java framework detection, especially on OS X/MacOS.
Thanks to Bryce Glover for reporting and submitting the fixes.
  - Prefer "javac -h" to "javah" to support newer Java frameworks.
- Fortran fixes:
  - Use conformant dummy parameter names for Fortran bindings.  Thanks
to Themos Tsikas for reporting and submitting the fixes.
  - Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
whenever possible.  Thanks to Themos Tsikas for reporting the
issue.
  - Fix array of argv handling for the Fortran bindings of
MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
  - Make NAG Fortran compiler support more robust in configure.
- Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
  is used.  This component is simply not safe in MPI_THREAD_MULTIPLE
  scenarios, and will not be fixed in the v2.1.x series.
- Make the "external" hwloc component fail gracefully if it is tries
  to use an hwloc v2.x.y installation.  hwloc v2.x.y will not be
  supported in the Open MPI v2.1.x series.
- Fix "vader" shared memory support for messages larger than 2GB.
  Thanks to Heiko Bauke for the bug report.
- Configure fixes for external PMI directory detection.  Thanks to
  Davide Vanzo for the report.
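
To illustrate the MPI_FINALIZED fix above: the hang involved an attribute
delete callback attached to MPI_COMM_SELF, which MPI runs at the start of
MPI_FINALIZE.  Here is a minimal, illustrative C sketch (ours, not from the
release) of that pattern:

    #include <mpi.h>
    #include <stdio.h>

    /* Delete callback invoked while MPI_COMM_SELF is destroyed inside
       MPI_FINALIZE. */
    static int delete_cb(MPI_Comm comm, int keyval, void *attr_val,
                         void *extra_state)
    {
        int finalized;

        /* Calling MPI_FINALIZED here triggered the hang before this fix. */
        MPI_Finalized(&finalized);
        printf("delete callback: finalized = %d\n", finalized);
        return MPI_SUCCESS;
    }

    int main(int argc, char *argv[])
    {
        int keyval;

        MPI_Init(&argc, &argv);
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, delete_cb,
                               &keyval, NULL);
        MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);

        /* The delete callback fires during finalization. */
        MPI_Finalize();
        return 0;
    }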

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI SC'17 BOF slides

2017-11-17 Thread Jeff Squyres (jsquyres)
The slides from the Open MPI State of the Union XI SC'17 BOF are now up on the 
web site:

https://www.open-mpi.org/papers/sc-2017/

Many thanks to all who attended, stopped by to say hello, or otherwise came by
to chat.

-- 
Jeff Squyres
jsquy...@cisco.com



[Open MPI Announce] Open MPI v2.0.2 released

2017-01-31 Thread Jeff Squyres (jsquyres)
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 2.0.2.

v2.0.2 is a bug fix release that includes a variety of bug fixes and some 
performance fixes.  All users are encouraged to upgrade to v2.0.2 when 
possible.  

Version 2.0.2 can be downloaded from the main Open MPI web site:

https://www.open-mpi.org/software/ompi/v2.0/

NEWS

2.0.2 -- 31 January 2017
------------------------

Bug fixes/minor improvements:

- Fix a problem with MPI_FILE_WRITE_SHARED when using MPI_MODE_APPEND and
  Open MPI's native MPI-IO implementation.  Thanks to Nicolas Joly for
  reporting.
- Fix a typo in the MPI_WIN_GET_NAME man page.  Thanks to Nicolas Joly
  for reporting.
- Fix a race condition with ORTE's session directory setup.  Thanks to
  @tbj900 for reporting this issue.
- Fix a deadlock issue arising from Open MPI's approach to catching calls to
  munmap. Thanks to Paul Hargrove for reporting and helping to analyze this
  problem.
- Fix a problem with PPC atomics which caused make check to fail unless the
  builtin atomics configure option was enabled.  Thanks to Orion Poplawski
  for reporting.
- Fix a problem with use of x86_64 cpuid instruction which led to segmentation
  faults when Open MPI was configured with -O3 optimization.  Thanks to Mark
  Santcroos for reporting this problem.
- Fix a problem when using built-in atomics configure options on PPC
  platforms when building 32-bit applications.  Thanks to Paul Hargrove for
  reporting.
- Fix a problem with building Open MPI against an external hwloc installation.
  Thanks to Orion Poplawski for reporting this issue.
- Remove use of DATE in the message queue version string reported to
  debuggers to ensure bit-wise reproducibility of binaries.  Thanks to
  Alastair McKinstry for help in fixing this problem.
- Fix a problem with early exit of an MPI process without calling
  MPI_FINALIZE or MPI_ABORT that could lead to job hangs.  Thanks to
  Christof Koehler for reporting.
- Fix a problem with forwarding of the SIGTERM signal from mpirun to MPI
  processes in a job.  Thanks to Noel Rycroft for reporting this problem.
- Plug some memory leaks in MPI_WIN_FREE discovered using Valgrind.  Thanks
  to Joseph Schuchart for reporting.
- Fix a problem with MPI_NEIGHBOR_ALLTOALL when using a communicator with an
  empty topology graph.  Thanks to Daniel Ibanez for reporting.
- Fix a typo in a PMIx component help file.  Thanks to @njoly for reporting
  this.
- Fix a problem with Valgrind false positives when using Open MPI's internal
  memchecker.  Thanks to Yvan Fournier for reporting.
- Fix a problem with MPI_FILE_DELETE returning MPI_SUCCESS when
  deleting a non-existent file (see the example sketch after this list).
  Thanks to Wei-keng Liao for reporting.
- Fix a problem with MPI_IMPROBE that could lead to hangs in subsequent MPI
  point to point or collective calls.  Thanks to Chris Pattison for reporting.
- Fix a problem when configuring Open MPI for PowerPC with --enable-mpi-cxx
  enabled.  Thanks to Alastair McKinstry for reporting.
- Fix a problem using MPI_IALLTOALL with MPI_IN_PLACE argument.  Thanks to
  Chris Ward for reporting.
- Fix a problem using MPI_RACCUMULATE with the Portals4 transport.  Thanks to
  @PDeveze for reporting.
- Fix an issue with static linking and duplicate symbols arising from PMIx
  Slurm components.  Thanks to Limin Gu for reporting.
- Fix a problem when using MPI dynamic memory windows.  Thanks to
  Christoph Niethammer for reporting.
- Fix a problem with Open MPI's pkgconfig files.  Thanks to Alastair McKinstry
  for reporting.
- Fix a problem with MPI_IREDUCE when the same buffer is supplied for the
  send and recv buffer arguments.  Thanks to Valentin Petrov for reporting.
- Fix a problem with atomic operations on PowerPC.  Thanks to Paul
  Hargrove for reporting.
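
Related to the MPI_FILE_DELETE item above, here is a minimal, illustrative C
sketch (ours, not from the release) that checks the error class; with this fix,
deleting a non-existent file reports an error instead of MPI_SUCCESS:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rc, eclass;

        MPI_Init(&argc, &argv);

        /* Errors on file operations not tied to an open file handle use
           the error handler on MPI_FILE_NULL, which defaults to
           MPI_ERRORS_RETURN, so the return code can be inspected directly. */
        rc = MPI_File_delete("does-not-exist.dat", MPI_INFO_NULL);
        if (rc != MPI_SUCCESS) {
            MPI_Error_class(rc, &eclass);
            printf("MPI_File_delete failed with error class %d\n", eclass);
        }

        MPI_Finalize();
        return 0;
    }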

Known issues (to be addressed in v2.0.3):

- See the list of fixes slated for v2.0.3 here:
  https://github.com/open-mpi/ompi/milestone/23

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [hwloc-announce] This list is suspended while migrating

2016-07-21 Thread Jeff Squyres (jsquyres)
We unfortunately ran into major issues while trying to migrate these lists, and
have therefore restored them on the Indiana U servers until we try the
migration again.

Sorry for the hassle, folks; stay tuned!


> On Jul 20, 2016, at 10:28 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> 
> wrote:
> 
> We are beginning the list migration process; this list will be suspended 
> while it is in transit to a new home.
> 
> We can't predict the exact timing of the migration -- hopefully it'll only 
> take a few hours.
> 
> See you on the other side!
> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



[Open MPI Announce] Update to Open MPI version number scheme

2015-06-24 Thread Jeff Squyres (jsquyres)
Greetings Open MPI users and system administrators.

In response to user feedback, Open MPI is changing how its releases will be 
numbered.

In short, Open MPI will no longer be released using an "odd/even" cadence 
corresponding to "feature development" and "super stable" releases.  Instead, 
each of the "A.B.C" digits in the Open MPI version number will have a specific 
meaning, conveying semantic information in the overall release number.
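
Since the version number itself now conveys meaning, it can be handy for
applications and support scripts to log exactly which Open MPI release they are
linked against.  Here is a small, illustrative C sketch (ours, not part of this
announcement) that uses the standard MPI-3 query routine:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        char version[MPI_MAX_LIBRARY_VERSION_STRING];
        int len;

        MPI_Init(&argc, &argv);

        /* Returns an implementation-specific string; for Open MPI it
           typically includes the A.B.C release number described above. */
        MPI_Get_library_version(version, &len);
        printf("%s\n", version);

        MPI_Finalize();
        return 0;
    }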

The details of this scheme, and our roadmap plans for releases over the next 
several months, are included in the form of slides, available here:

* On my blog:
  http://blogs.cisco.com/performance/open-mpi-new-versioning-scheme-and-roadmap

* On the Open MPI web site:
  http://www.open-mpi.org/papers/versioning-update-2015/

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



[Open MPI Announce] Open MPI v1.8.1 released

2014-04-23 Thread Jeff Squyres (jsquyres)
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 1.8.1.

*** This release includes a single critical bug fix for users who run MPI jobs
as the "root" user.  It is otherwise identical to the Open MPI v1.8 release.
See https://svn.open-mpi.org/trac/ompi/ticket/4536 for more details.

Version 1.8.1 can be downloaded from:

http://www.open-mpi.org/software/ompi/v1.8/

Note that v1.8.2 is expected to be released within the next few weeks, and will 
contain the usual bug fixes and end user-suggested improvements.  The Open MPI 
team felt that this critical bug warranted an immediate release, but did not 
want to interrupt the normal post-v1.8 bug fix/improvement schedule already in 
progress.  Hence, we are releasing v1.8.1 with this one critical bug fix, and 
keeping on track with all other fixes for the next release in a few weeks.

If you do not run Open MPI jobs as root, you do not need the v1.8.1 release; 
you can wait for v1.8.2.

As usual, MPI application ABI will be preserved through the v1.8.x series.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



[Open MPI Announce] Open MPI @SC next week

2013-11-13 Thread Jeff Squyres (jsquyres)
I'm sure everyone reading this email will be in Denver at SC'13 next week 
(http://sc13.supercomputing.org/).  Right?  Of course!

Many of us from the Open MPI community will be there, and we'd love to chat 
with real, honest-to-goodness users, admins, and developers who are using Open 
MPI.  Come say hi!  You can generally find me in one of two places:

- Dave Goodell and I will be hanging around the Cisco booth (#2535).  
- George Bosilca and I will be hosting the Open MPI "State of the Union" BOF
on Tuesday from 12:15-1:15pm in rooms 301-302-303.

For the BOF, this year we're doing something a little different.  We'd like to 
ask for your suggestions for topics and your specific questions *before the 
BOF*:

http://www.open-mpi.org/sc13/

Every year, the SC program committee pesters us to have an interactive BOF,
but we find that an open-floor format doesn't work well in a large room with
many attendees.  So we thought we'd try a "get questions ahead of time"
approach and see how that works.  We'll use your feedback to guide what we
present at the BOF.

You can assume that we'll present the normal "current status" and "future 
roadmap" kind of information.  Use the web form to suggest something specific 
you'd like to see in either of those general topics, or if you have a specific 
question you'd like us to answer.

See you in Denver!

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



[Open MPI Announce] Open MPI 1.7.2 released

2013-06-26 Thread Jeff Squyres (jsquyres)
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of Open MPI version 
1.7.2. 


This is a bug-fix release driven by two issues in the v1.7.0 release. 

- Add a distance-based mapping component to find the socket "closest"
  to the PCI bus.
- Added Location Aware Mapping Algorithm (LAMA) mapping component.
- Fix an error that caused epoll to automatically be disabled
  in libevent.
- Upgrade hwloc to 1.5.2.
- *Really* fixed XRC compile issue in Open Fabrics support.
- Fixed parallel debugger ability to attach to MPI jobs.
- Fix MXM connection establishment flow.
- Fixed some minor memory leaks.
- Fixed datatype corruption issue when combining datatypes of specific
  formats.
- Fixes for MPI_STATUS handling in corner cases.
- Major VampirTrace update to 5.14.4.2.
  (** also appeared: 1.6.5)
- Fix to set flag==1 when MPI_IPROBE is called with MPI_PROC_NULL
  (see the example sketch after this list).
  (** also appeared: 1.6.5)
- Set the Intel Phi device to be ignored by default by the openib BTL.
  (** also appeared: 1.6.5)
- Decrease the internal memory storage used by intrinsic MPI datatypes
  for Fortran types.  Thanks to Takahiro Kawashima for the initial
  patch.
  (** also appeared: 1.6.5)
- Fix total registered memory calculation for Mellanox ConnectIB and
  OFED 2.0.
  (** also appeared: 1.6.5)
- Fix possible data corruption in the MXM MTL component.
  (** also appeared: 1.6.5)
- Remove extraneous -L from hwloc's embedding.  Thanks to Stefan
  Friedel for reporting the issue.
  (** also appeared: 1.6.5)
- Fix contiguous datatype memory check.  Thanks to Eric Chamberland
  for reporting the issue.
  (** also appeared: 1.6.5)
- Make the openib BTL more friendly to ignoring verbs devices that are
  not RC-capable.
  (** also appeared: 1.6.5)
- Fix some MPI datatype engine issues.  Thanks to Thomas Jahns for
  reporting the issue.
  (** also appeared: 1.6.5)
- Add INI information for Chelsio T5 device.
  (** also appeared: 1.6.5)
- Integrate MXM STREAM support for MPI_ISEND and MPI_IRECV, and other
  minor MXM fixes.
  (** also appeared: 1.6.5)
- Fix to not show amorphous "MPI was already finalized" error when
  failing to MPI_File_close an open file.  Thanks to Brian Smith for
  reporting the issue.
  (** also appeared: 1.6.5)
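
As a quick illustration of the MPI_IPROBE/MPI_PROC_NULL fix noted above, here
is a minimal C sketch (ours, not part of the release); per the MPI standard, a
probe on MPI_PROC_NULL must return immediately with flag set to true and a
"null" status:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int flag;
        MPI_Status status;

        MPI_Init(&argc, &argv);

        /* A probe on MPI_PROC_NULL completes immediately: flag is set to
           true and status reports source = MPI_PROC_NULL. */
        MPI_Iprobe(MPI_PROC_NULL, MPI_ANY_TAG, MPI_COMM_WORLD,
                   &flag, &status);
        printf("flag = %d, source = %d (MPI_PROC_NULL = %d)\n",
               flag, status.MPI_SOURCE, MPI_PROC_NULL);

        MPI_Finalize();
        return 0;
    }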

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




[Open MPI Announce] Open MPI 1.6.5 released

2013-06-26 Thread Jeff Squyres (jsquyres)
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the next release in the stable
release series:
Open MPI version 1.6.5. 

Version 1.6.5 is a minor bugfix release. All users are encouraged to upgrade to 
v1.6.5 when possible. 

Note that v1.6.5 is ABI compatible with the entire v1.5.x and v1.6.x series, 
but is not ABI compatible with the v1.4.x series. See 
http://www.open-mpi.org/software/ompi/versions/ for a description of Open MPI's 
release methodology. 

Version 1.6.5 can be downloaded from the main Open MPI web site or any of its 
mirrors (Windows binaries may be available eventually; Open MPI needs a new 
Windows maintainer -- let us know on the developers' list if you're interested 
in helping out). 

Here is a list of changes in v1.6.5 as compared to v1.6.4: 

- Updated default SRQ parameters for the openib BTL.
  (** also to appear: 1.7.2)
- Major VampirTrace update to 5.14.4.2.
  (** also to appear: 1.7.2)
- Fix to set flag==1 when MPI_IPROBE is called with MPI_PROC_NULL.
  (** also to appear: 1.7.2)
- Set the Intel Phi device to be ignored by default by the openib BTL.
  (** also to appear: 1.7.2)
- Decrease the internal memory storage used by intrinsic MPI datatypes
  for Fortran types.  Thanks to Takahiro Kawashima for the initial
  patch.
  (** also to appear: 1.7.2)
- Fix total registered memory calculation for Mellanox ConnectIB and
  OFED 2.0.
  (** also to appear: 1.7.2)
- Fix possible data corruption in the MXM MTL component.
  (** also to appear: 1.7.2)
- Remove extraneous -L from hwloc's embedding.  Thanks to Stefan
  Friedel for reporting the issue.
  (** also to appear: 1.7.2)
- Fix contiguous datatype memory check.  Thanks to Eric Chamberland
  for reporting the issue.
  (** also to appear: 1.7.2)
- Make the openib BTL more friendly to ignoring verbs devices that are
  not RC-capable.
  (** also to appear: 1.7.2)
- Fix some MPI datatype engine issues.  Thanks to Thomas Jahns for
  reporting the issue.
  (** also to appear: 1.7.2)
- Add INI information for Chelsio T5 device.
  (** also to appear: 1.7.2)
- Integrate MXM STREAM support for MPI_ISEND and MPI_IRECV, and other
  minor MXM fixes.
  (** also to appear: 1.7.2)
- Improved alignment for OpenFabrics buffers.
- Fix to not show amorphous "MPI was already finalized" error when
  failing to MPI_File_close an open file.  Thanks to Brian Smith for
  reporting the issue.
  (** also to appear: 1.7.2)

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/




[Open MPI Announce] Open MPI v1.1 release

2006-06-23 Thread Jeff Squyres (jsquyres)
The Open MPI Team, representing a consortium of research, academic, and
industry partners, is pleased to announce the release of Open MPI
version 1.1.  This release contains many new features, performance
enhancements, and stability bug fixes.  Version 1.1 can be downloaded
from the main Open MPI web site or any of its mirrors (mirrors will be
updating shortly).

We strongly recommend that all users upgrade to the version 1.1 series,
if possible.  The 1.0 series will likely have one more bug fix release
(v1.0.3), but is generally considered deprecated in favor of the new 1.1
series.

Here is a list of changes in v1.1 as compared to v1.0.x:

- Various MPI datatype fixes, optimizations.
- Fixed various problems on the SPARC architecture (e.g., not
  correctly aligning addresses within structs).
- Improvements in various run-time error messages to be more clear
  about what they mean and where the errors are occurring.
- Various fixes to mpirun's handling of --prefix.
- Updates and fixes for Cray/Red Storm support.
- Major improvements to the Fortran 90 MPI bindings:
  - General improvements in compile/linking time and portability
between different F90 compilers.
  - Addition of "trivial", "small" (the default), and "medium"
Fortran 90 MPI module sizes (v1.0.x's F90 module was
equivalent to "medium").  See the README file for more
explanation.
  - Fix various MPI F90 interface functions and constant types to
match.  Thanks to Michael Kluskens for pointing out the problems
to us.
- Allow short messages to use RDMA (vs. send/receive semantics) to a
  limited number of peers in both the mvapi and openib BTL components.
  This reduces communication latency over IB channels.
- Numerous performance improvements throughout the entire code base.
- Many minor threading fixes.
- Add a define, OMPI_SKIP_CXX, to allow the user to prevent mpicxx.h from
  being included in mpi.h.  This allows the user to compile C code with a
  C++ compiler without including the C++ bindings.
- PERUSE support has been added.  To activate it, add --enable-peruse to
  the configure options.  All events described in the PERUSE 2.0 draft are
  supported, plus one Open MPI extension: PERUSE_COMM_REQ_XFER_CONTINUE
  allows you to see how the data is segmented internally, using multiple
  interfaces or the pipeline engine.  However, this version only supports
  one event of each type simultaneously attached to a communicator.
- Add support for running jobs in heterogeneous environments.
  Currently supports environments with different endianness and
  different representations of C++ bool and Fortran LOGICAL.
  Mismatched sizes for other datatypes are not supported.
- Open MPI now includes an implementation of the MPI-2 One-Sided
  Communications specification (see the example sketch after this list).
- Open MPI is now configurable in cross-compilation environments.
  Several Fortran 77 and Fortran 90 tests need to be pre-seeded with
  results from a config.cache-like file.
- Add --debug option to mpirun to generically invoke a parallel
debugger.
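
For those new to the MPI-2 one-sided support mentioned above, here is a small,
illustrative C sketch (ours, not part of the release) using fence
synchronization to put a value into another process's window:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, buf = 0, value = 42;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Expose one integer per process as an RMA window. */
        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Fence-synchronized epoch: rank 0 writes 42 into the window
           of the last process. */
        MPI_Win_fence(0, win);
        if (rank == 0 && size > 1) {
            MPI_Put(&value, 1, MPI_INT, size - 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);

        printf("rank %d: buf = %d\n", rank, buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }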

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems