[OMPI packagers] Open MPI 5.0.0: drop support for gcc 4.4.7?

2021-09-21 Thread Jeff Squyres (jsquyres) via ompi-packagers
All --

Unless someone has a strong reason for keeping support for GCC 4.4.7 (i.e., the 
default GCC compiler that shipped in RHEL 6), Open MPI is going to drop support 
for it in v5.0.0.

The reason for this is that PRTE and PMIx no longer compile successfully with 
GCC 4.4.7 (and Open MPI gets a lot of compiler warnings with GCC 4.4.7).  These 
packages ***could be updated to support GCC 4.4.7 if someone cares***.  But 
we'll need someone to contribute pull requests to do so.

If no one plans to contribute pull requests for this in the near future, we're 
going to drop support for GCC 4.4.7 in Open MPI v5.0.0.  See 
https://github.com/open-mpi/ompi/pull/9398.

* Note that RHEL 7 ships with GCC v4.8.5.
* The Open MPI community regularly tests with GCC v4.8.1.

Hence, Open MPI >= v5.0.0 will support GCC >= v4.8.1.  Specifically: Open MPI's 
configure script will abort with a helpful error message for versions of GCC < 
v4.8.1.
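As a rough illustration of what such a configure-time gate does, here is a hypothetical sketch in shell (Open MPI's real check lives in its configure/m4 machinery and is not reproduced here); a minimum-version comparison can be done with a version sort:

```shell
# Hypothetical sketch of a configure-style minimum-GCC gate.
# gcc_ok MIN ACTUAL -> succeeds if ACTUAL >= MIN, version-wise.
gcc_ok() {
    min="$1"; actual="$2"
    # `sort -V` orders version strings numerically per component;
    # if the smallest of the two is $min, then $actual >= $min.
    lowest=$(printf '%s\n%s\n' "$min" "$actual" | sort -V | head -n 1)
    [ "$lowest" = "$min" ]
}

if gcc_ok 4.8.1 4.4.7; then
    echo "4.4.7: supported"
else
    echo "4.4.7: configure would abort"
fi

if gcc_ok 4.8.1 4.8.5; then
    echo "4.8.5: supported"
fi
```

In a real configure script the second argument would come from something like `gcc -dumpversion` rather than being hard-coded.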

Finally, note that this announcement is solely about Open MPI >= v5.0.0.  The 
Open MPI v4.0.x and v4.1.x series will continue to support GCC 4.4.7.

-- 
Jeff Squyres
jsquy...@cisco.com



___
ompi-packagers mailing list
ompi-packagers@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/ompi-packagers


[OMPI packagers] Fwd: Open MPI State of the Union BOF (webinar)

2021-03-30 Thread Jeff Squyres (jsquyres) via ompi-packagers
Open MPI packagers --

I should have thought to forward this before now; sorry!

We're having a free Open MPI "State of the Union" webinar today to give a 
snapshot of where we are and the roadmaps for the various Open MPI release 
trains.  This "State of the Union" presentation is usually given as a 
Birds-of-a-Feather (BOF) session at the annual Supercomputing trade show in 
November, but COVID scuttled those plans in 2020.

If you'd like to attend, roughly the first half will be version roadmap 
updates, and the second half will be updates from various Open MPI community 
members on the work they've been doing over the past year or so.

It's free, but YOU MUST REGISTER TO ATTEND!  See details below.

The webinar starts in roughly 2 hours (1pm US Eastern time).

Hope to see you there!


Begin forwarded message:

From: Jeff Squyres <jsquy...@cisco.com>
Subject: Open MPI State of the Union BOF (webinar)
Date: March 15, 2021 at 1:06:48 PM EDT
To: Open MPI Announcements <annou...@lists.open-mpi.org>, Open MPI 
User's List <us...@lists.open-mpi.org>

In conjunction with the Exascale Computing Project (ECP), George Bosilca, Jeff 
Squyres, and members of the Open MPI community will present the current status 
and future roadmap for the Open MPI project.

We typically have an Open MPI "State of the Union" BOF at the annual 
Supercomputing conference, but COVID thwarted those plans in 2020.  Instead, 
The ECP has graciously agreed to host "Community BOF days" for many HPC-related 
projects, including Open MPI.

We therefore invite everyone to attend a free 90-minute webinar for the Open 
MPI SotU BOF:

Date: March 30, 2021
Time: 1:00pm US Eastern time
URL:  https://www.exascaleproject.org/event/ecp-community-bof-days-2021/

YOU MUST REGISTER TO ATTEND!

Expand the "Open MPI State of the Union" entry on that page and click on the 
registration link to sign up (registration is free).

We hope to see you there!

--
Jeff Squyres
jsquy...@cisco.com



--
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Fwd: [OMPI devel] Open MPI v5.0.x branch created

2021-03-12 Thread Jeff Squyres (jsquyres) via ompi-packagers
FYI.  Open MPI v5.0.x is coming!


Begin forwarded message:

From: Geoffrey Paulsen via devel <de...@lists.open-mpi.org>
Subject: [OMPI devel] Open MPI v5.0.x branch created
Date: March 11, 2021 at 1:24:32 PM EST
To: de...@lists.open-mpi.org
Cc: Geoffrey Paulsen <gpaul...@us.ibm.com>
Reply-To: Open MPI Developers <de...@lists.open-mpi.org>

Open MPI developers,

  We've created the Open MPI v5.0.x branch today, and it is now receiving bug 
fixes.  Please cherry-pick any master PRs to v5.0.x once they've been merged to 
master.

  We're targeting an aggressive but achievable release date of May 15th.

  If you're in charge of your organization's CI tests, please enable them for 
v5.0.x PRs.  It may be a few days until all of our CI is enabled on v5.0.x.

  Thanks everyone for your continued commitment to Open MPI's success.

  Josh Ladd, Austen Lauria, and Geoff Paulsen - v5.0 RMs




--
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Does anyone care about 32-bit Open MPI?

2021-03-09 Thread Jeff Squyres (jsquyres) via ompi-packagers
Open MPI packagers --

We're seriously contemplating dropping 32 bit support for Open MPI v5.0 (32-bit 
support will continue in Open MPI series prior to v5.0.x -- for example, the 
v4.1.x series will continue to have 32-bit support).

Will this be a problem for the downstream Open MPI packagers?

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Fwd: [Open MPI Announce] Open MPI v4.1.0 Released

2020-12-18 Thread Jeff Squyres (jsquyres) via ompi-packagers
Open MPI packagers: FYI.


Begin forwarded message:

From: "Jeff Squyres (jsquyres) via announce" <annou...@lists.open-mpi.org>
Subject: [Open MPI Announce] Open MPI v4.1.0 Released
Date: December 18, 2020 at 6:03:50 PM EST
To: Open MPI Announcements <annou...@lists.open-mpi.org>
Cc: "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
Reply-To: <us...@lists.open-mpi.org>

The Open MPI community is pleased to announce the start of the Open MPI 4.1 
release series with the release of Open MPI 4.1.0.  The 4.1 release series 
builds on the 4.0 release series and includes enhancements to OFI and UCX 
communication channels, as well as collectives performance improvements.

The Open MPI 4.1 release series can be downloaded from the Open MPI website:

   https://www.open-mpi.org/software/ompi/v4.1/

Changes in 4.1.0 compared to 4.0.x:

- collectives: Add HAN and ADAPT adaptive collectives components.
 Both components are off by default and can be enabled by specifying
 "mpirun --mca coll_adapt_priority 100 --mca coll_han_priority 100 ...".
 We intend to enable both by default in Open MPI 5.0.
- OMPIO is now the default for MPI-IO on all filesystems, including
 Lustre (prior to this, ROMIO was the default for Lustre).  Many
 thanks to Mark Dixon for identifying MPI I/O issues and providing
 access to Lustre systems for testing.
- Updates for macOS Big Sur.  Thanks to FX Coudert for reporting this
 issue and pointing to a solution.
- Minor MPI one-sided RDMA performance improvements.
- Fix hcoll MPI_SCATTERV with MPI_IN_PLACE.
- Add AVX support for MPI collectives.
- Updates to mpirun(1) about "slots" and PE=x values.
- Fix buffer allocation for large environment variables.  Thanks to
 @zrss for reporting the issue.
- Upgrade the embedded OpenPMIx to v3.2.2.
- Take more steps towards creating fully Reproducible builds (see
 https://reproducible-builds.org/).  Thanks Bernhard M. Wiedemann for
 bringing this to our attention.
- Fix issue with extra-long values in MCA files.  Thanks to GitHub
 user @zrss for bringing the issue to our attention.
- UCX: Fix zero-sized datatype transfers.
- Fix --cpu-list for non-uniform modes.
- Fix issue in PMIx callback caused by missing memory barrier on Arm platforms.
- OFI MTL: Various bug fixes.
- Fixed issue where MPI_TYPE_CREATE_RESIZED would create a datatype
 with unexpected extent on oddly-aligned datatypes.
- collectives: Adjust default tuning thresholds for many collective
 algorithms
- runtime: fix situation where rank-by argument does not work
- Portals4: Clean up error handling corner cases
- runtime: Remove --enable-install-libpmix option, which has not
 worked since it was added
- opal: Disable memory patcher component on MacOS
- UCX: Allow UCX 1.8 to be used with the btl uct
- UCX: Replace usage of the deprecated NB API of UCX with NBX
- OMPIO: Add support for the IME file system
- OFI/libfabric: Added support for multiple NICs
- OFI/libfabric: Added support for Scalable Endpoints
- OFI/libfabric: Added btl for one-sided support
- OFI/libfabric: Multiple small bugfixes
- libnbc: Adding numerous performance-improving algorithms

--
Jeff Squyres
jsquy...@cisco.com

___
announce mailing list
annou...@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/announce


--
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Open MPI 4.1.0rc4

2020-11-24 Thread Jeff Squyres (jsquyres) via ompi-packagers
We're getting close: 4.1.0rc4 is now available.

https://www.open-mpi.org/software/ompi/v4.1/

The list of things left before v4.1.0 is final is getting very, very short.

Changes since rc3:

- Configury fixes for macOS Big Sur
- Minor one-sided RDMA performance improvements
- Fix OSHMEM compile error with some compilers
- hcoll: Scatterv MPI_IN_PLACE fixes
- mtl/ofi: Check cq_data_size without querying providers again
- Fix computation of process relative locality

We made an rc3 a short while ago, but I neglected to send the email about it.  
So here's the list of differences since rc2:

- HAN and ADAPT coll modules
- UCX zero-size datatype transfer fixes
- Take more steps towards "reproducible" builds
- AVX fixes
- Updates for mpirun(1) man page about "slots" and PE=x values
- Fix buffer allocation for large environment variables
- Fix cpu-list for non-uniform nodes
- Update Internal PMIx to OpenPMIx v3.2.1
- Disable man pages for internal OpenPMIx
- Fix some symbol pollution
- Make Type_create_resized set FLAG_USER_UB
- Fix OFI configury CPPFLAGS to find fabric.h
- mtl/ofi: Do not fail if error CQ is empty
- mtl/ofi: Fix erroneous FI_PEEK/FI_CLAIM usage
- Update coll tuned thresholds for modern HPC environments

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Open MPI: v5.0 delayed, v4.1 soon

2020-06-08 Thread Jeff Squyres (jsquyres) via ompi-packagers
Heads up for packagers: the v5.0.0 release has been delayed.  It's taking 
longer than expected to get master stable -- which has ended up slotting v5.0.0 
into the schedule for later this year, and possibly even 2021.

Instead, we're going to release v4.1.0 "soon".

v4.1.0 will be branched from v4.0.4 (due imminently).  It will therefore have 
all the bug fixes that have gone into the v4.0.x line, and also have a 
relatively small number of minor new features (i.e., features that some 
community members needed in an official release).

Backwards compatibility -- including the ABI -- should be preserved 
across the v4.0.x and v4.1.x series.  So if you have released v4.0.x binary 
packages, you should be able to seamlessly upgrade to a v4.1.x binary package.

Expect more details to come on this soon, but we wanted to give all you 
packagers a heads up that it is coming.

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI packagers] Open MPI v5.0 packaging change: require pandoc

2020-05-12 Thread Jeff Squyres (jsquyres) via ompi-packagers
On Apr 14, 2020, at 5:39 PM, Jeff Squyres <jsquy...@cisco.com> wrote:

I should be clear: the build requirements here are ***only if you try to build 
a git clone***.

If you're building from an Open MPI distribution tarball, the man pages will be 
pre-built and included in the tarball -- no pandoc is needed (just like we do 
with GNU Flex, GNU Autotools, ... etc.).

So I should re-phrase my question: do you ever have the need to build Open MPI 
from a git clone?  And if so, would having pandoc available -- like Flex / 
Autotools -- be a problem?

Didn't hear back from any of you on this -- just want to make sure you saw it:

- Pandoc is only required ***if you build Open MPI master (i.e., >=v5.0.x) from 
a git clone***
- If you're building a tarball from www.open-mpi.org, 
the man pages are pre-built and are included in the tarball (i.e., no Pandoc 
needed)
- This is pretty much the same situation as with the GNU Autotools and Flex.
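For packagers who do build from git clones, a pre-flight check along these lines may be handy (a hypothetical helper, not part of Open MPI; the >= 1.12 threshold is the one stated above):

```shell
# Hypothetical pre-flight check before building Open MPI from a git clone.
# Tarball builds ship pre-generated man pages and can skip this entirely.
pandoc_at_least() {
    need="$1"; have="$2"
    [ "$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n 1)" = "$need" ]
}

if command -v pandoc >/dev/null 2>&1; then
    # First line of `pandoc --version` looks like "pandoc 2.9.2.1".
    ver=$(pandoc --version | head -n 1 | awk '{print $2}')
    if pandoc_at_least 1.12 "$ver"; then
        echo "pandoc $ver: ok for git-clone builds"
    else
        echo "pandoc $ver: too old (need >= 1.12)"
    fi
else
    echo "no pandoc: build from a release tarball instead"
fi
```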

--
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI packagers] New RTE for OMPI v5

2020-04-27 Thread Jeff Squyres (jsquyres) via ompi-packagers
Open MPI packagers --

Just to be clear: this is an open question to you for the upcoming Open MPI 
v5.0.x series.

We'd really appreciate your feedback.

Thanks!



> On Apr 14, 2020, at 12:40 PM, Ralph Castain  wrote:
> 
>  Just pinging you all to ensure you got this. I need to know if we need 
> to coordinate an official PRRTE release to coincide (and sync) with the 
> release of OMPI v5.0, or if you are okay with just using the embedded PRRTE 
> included with the OMPI v5.0 tarball.
> 
> If it helps, PRRTE depends upon hwloc, libevent, and PMIx - none of which are 
> internally embedded.
> 
> Thanks
> Ralph
> 
> 
>> On Apr 1, 2020, at 8:40 PM, Ralph Castain  wrote:
>> 
>> Hi folks
>> 
>> I just wanted to alert you to the fact that we are replacing the ORTE 
>> runtime environment in Open MPI with an external package called PRRTE ("PMIx 
>> Reference RunTime Environment"). We will be including a copy of that package 
>> in the OMPI v5 tarball, just as we do libevent, hwloc, and PMIx.
>> 
>> PRRTE historically has not been generating official releases - there is an 
>> old v1.0, but nothing on a regular release sequence. As part of this change 
>> in OMPI, the PRRTE folks will begin generating official releases that OMPI 
>> will use in their releases. So there will be correlation between the 
>> packages.
>> 
>> My question to you is: is this a package you would prefer to distribute 
>> separately (as you do for PMIx and friends), or shall we just leave it as an 
>> included package? PRRTE does get used by a fairly small community of people 
>> at the national labs and a couple of universities, but it by no means has as 
>> wide-ranging a following as OMPI.
>> 
>> Just need to know if we need to add a --with-prrte option to OMPI's 
>> configure code so one could point it at an external PRRTE installation.
>> Ralph
>> 
>> 


-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI packagers] Open MPI v5.0 packaging change: require pandoc

2020-04-14 Thread Jeff Squyres (jsquyres) via ompi-packagers
On Apr 14, 2020, at 3:45 PM, Marco Atzeri <marco.atz...@gmail.com> wrote:

very complex
https://www.joachim-breitner.de/blog/748-Thoughts_on_bootstrapping_GHC

It is one of the cases where a new system will be almost impossible to 
bootstrap. I saw something similar in the Go build system

so for me Pandoc seems a no go.

I should be clear: the build requirements here are ***only if you try to build 
a git clone***.

If you're building from an Open MPI distribution tarball, the man pages will be 
pre-built and included in the tarball -- no pandoc is needed (just like we do 
with GNU Flex, GNU Autotools, ... etc.).

So I should re-phrase my question: do you ever have the need to build Open MPI 
from a git clone?  And if so, would having pandoc available -- like Flex / 
Autotools -- be a problem?

--
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI packagers] Open MPI v5.0 packaging change: require pandoc

2020-04-14 Thread Jeff Squyres (jsquyres) via ompi-packagers
On Apr 14, 2020, at 12:23 PM, Marco Atzeri  wrote:
> 
>> Open MPI packagers --
>> We would like to require "pandoc" for building Open MPI >=v5.0.x from a git 
>> clone, at least version v1.12.
>> Is this ok with all of you?
> 
> give me time to try build it.
> 
> Currently is not available on cygwin


Ok.  It may be a little annoying, because it is written in Haskell, so you have 
to also have the Glasgow Haskell Compiler (GHC).

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Open MPI v5.0 packaging change: require pandoc

2020-04-14 Thread Jeff Squyres (jsquyres) via ompi-packagers
Open MPI packagers --

We would like to require "pandoc" for building Open MPI >=v5.0.x from a git 
clone, at least version v1.12.

Is this ok with all of you?

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Updates for packagers

2019-04-17 Thread Jeff Squyres (jsquyres) via ompi-packagers
Open MPI packagers:

I have a few updates for you.

1. As you noticed, we just had releases for Open MPI v4.0.1, v3.0.4, and v3.1.4.

2. For v4.0.1, in addition to the things listed in 
https://www.mail-archive.com/announce@lists.open-mpi.org//msg00122.html, it is 
worth noting that we removed the MPI-1 deleted constructs from mpi.h by 
default.  We *meant* to do this in v4.0.0 (and we said so in 
https://www.mail-archive.com/announce@lists.open-mpi.org//msg00119.html and 
other places), and had a bug that prevented that from happening.  We fixed the 
bug in v4.0.1, but neglected to mention it in the v4.0.1 email.  This has 
caused at least some confusion (e.g., 
https://github.com/open-mpi/ompi/issues/6601 and 
https://bitbucket.org/mpi4py/mpi4py/issues/121/pip-install-error).

3. v3.0.4 and v3.1.4 have just the usual array of bug fixes; there isn't much 
of note there.

4. We do anticipate probably having v3.0.5 and v3.1.5 releases someday.  
There's at least one issue that we are interested in fixing in the v4.0.x, 
v3.1.x, and v3.0.x series: https://github.com/open-mpi/ompi/issues/6501.  It 
didn't make it into v4.0.1, but should be in a future v4.0.x release.  If the 
fix is not too invasive, we hope to include it in the final releases for v3.0.x 
and v3.1.x (hopefully x==5 in both cases).

5. v5.0.x is brewing, but we have nothing concrete in terms of timeline to 
report on it yet (see https://github.com/open-mpi/ompi/wiki/5.0.x-FeatureList). 
 It will definitely be an ABI break from v3.0.x/v3.1.x/v4.0.x.

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Unexpected releases: 2.1.5, 3.1.2

2018-08-15 Thread Jeff Squyres (jsquyres) via ompi-packagers
Packagers --

Heads up: we just released 2.1.4 on Friday, but unrelated to that release, we 
found a fairly serious bug in our shared memory plugin (the "vader" BTL).

We anticipate doing a 2.1.5 release in the immediate future, and will likely do 
a 3.1.2 release very shortly as well.  Here's the full list...

* 2.0.x series

If you have any 2.0.x version, you should upgrade to 2.1.5 (it's the first one 
with a fully-functional "vader").

* 2.1.x series

If you have any 2.1.x version, you should upgrade to 2.1.5.

We thought that 2.1.4 was going to be the last in this series, but this vader 
bug is serious enough to warrant a 2.1.5.  2.1.5 will almost certainly be the 
last in this release series.

* 3.0.x series

If you have 3.0.0, you need to upgrade to at least 3.0.1 (3.0.1, released in 
March 2018, and 3.0.2 are ok).

* 3.1.x series

If you have any 3.1.x version, you need to upgrade to 3.1.2 (expected to be 
released shortly).

The 3.1.2 release was always planned -- but we had anticipated it to be in 
September.  This vader bug warranted pulling in that release date to August.

If it is difficult for you to upgrade on short order, we can send you the 
patch: it's a 2-line fix.  Probably very easy to apply to your existing 
packages.

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Open MPI v2.1.4rc1

2018-08-06 Thread Jeff Squyres (jsquyres) via ompi-packagers
Open MPI v2.1.4rc1 has been pushed.  It is likely going to be the last in the 
v2.1.x series (since v4.0.0 is now visible on the horizon).  It is just a bunch 
of bug fixes that have accumulated since v2.1.3; nothing huge.  We'll encourage 
users who are still using the v2.1.x series to upgrade to this release; it 
should be a non-event for anyone who has already upgraded to the v3.0.x or 
v3.1.x series.

https://www.open-mpi.org/software/ompi/v2.1/

If no serious-enough issues are found, we plan to release 2.1.4 this Friday, 
August 10, 2018.

Please test!

Bug fixes/minor improvements:
- Disable the POWER 7/BE block in configure.  Note that POWER 7/BE is
  still not a supported platform, but it is no longer automatically
  disabled.  See
  https://github.com/open-mpi/ompi/issues/4349#issuecomment-374970982
  for more information.
- Fix bug with request-based one-sided MPI operations when using the
  "rdma" component.
- Fix issue with large data structure in the TCP BTL causing problems
  in some environments.  Thanks to @lgarithm for reporting the issue.
- Minor Cygwin build fixes.
- Minor fixes for the openib BTL:
  - Support for the QLogic RoCE HCA
  - Support for the Broadcom Cumulus RoCE HCA
  - Enable support for HDR link speeds
- Fix MPI_FINALIZED hang if invoked from an attribute destructor
  during the MPI_COMM_SELF destruction in MPI_FINALIZE.  Thanks to
  @AndrewGaspar for reporting the issue.
- Java fixes:
  - Modernize Java framework detection, especially on OS X/MacOS.
Thanks to Bryce Glover for reporting and submitting the fixes.
  - Prefer "javac -h" to "javah" to support newer Java frameworks.
- Fortran fixes:
  - Use conformant dummy parameter names for Fortran bindings.  Thanks
to Themos Tsikas for reporting and submitting the fixes.
  - Build the MPI_SIZEOF() interfaces in the "TKR"-style "mpi" module
whenever possible.  Thanks to Themos Tsikas for reporting the
issue.
  - Fix array of argv handling for the Fortran bindings of
MPI_COMM_SPAWN_MULTIPLE (and its associated man page).
  - Make NAG Fortran compiler support more robust in configure.
- Disable the "pt2pt" one-sided MPI component when MPI_THREAD_MULTIPLE
  is used.  This component is simply not safe in MPI_THREAD_MULTIPLE
  scenarios, and will not be fixed in the v2.1.x series.
- Make the "external" hwloc component fail gracefully if it tries
  to use an hwloc v2.x.y installation.  hwloc v2.x.y will not be
  supported in the Open MPI v2.1.x series.
- Fix "vader" shared memory support for messages larger than 2GB.
  Thanks to Heiko Bauke for the bug report.
- Configure fixes for external PMI directory detection.  Thanks to
  Davide Vanzo for the report.


-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Symbol versioning (was: Shift focus to external libevent, hwloc, pmix)

2018-07-17 Thread Jeff Squyres (jsquyres) via ompi-packagers
On Jul 11, 2018, at 3:56 AM, Alastair McKinstry  wrote:
> 
> Also, I'd like to revisit the Symbol versioning feature 
> (https://github.com/open-mpi/ompi/issues/1956).

(Splitting this into its own thread)

> This was closed due to lack of followup on my part at the time (sorry about 
> that). AFAIK the problem is making my gcc patch work on other compilers; is 
> this the case, and if it was reopened what compilers should be targetted ?

SHORT VERSION
=

We had a lengthy discussion about this today on the weekly OMPI engineering 
webex, and we generally agree with you.  We'll try hard not to increase the .so 
version number for v4.0.0, but we're not going to promise to do this until we 
do a full ABI-backwards-compatibility analysis.

But heads up that we will almost certainly break ABI compatibility (and 
therefore increase the .so version number) in v5.0.0 (sometime in 2019 -- 
probably mid- to late- year, but we haven't put a date on it yet).  So if you 
want to revisit symbol versioning/etc., it's right about the right time to do 
so.

MORE DETAIL
===

We're nearly out of time for 4.0.0 -- we're branching for that tomorrow (i.e., 
feature complete, but not necessarily bug free).  But I think the door is open 
for a symbol versioning feature for future releases.

I re-read the following to re-familiarize myself with this proposal:

- https://github.com/open-mpi/ompi/issues/1906
- https://github.com/open-mpi/ompi/pull/1955
- https://github.com/open-mpi/ompi/issues/1956

Re-reading this all at once helped page back in all the issues, and I think I 
understand it better than I did back then.

The real issue -- as you stated multiple times, but I don't think I really 
fully grokked at the time -- is that you just want libmpi.so.X (and OSHMEM), 
and you don't want X to change so that you don't have to recompile a bazillion 
other packages (especially since Debian's MPI dependencies can get to be 5 
deep!). 

Per Geoff's/Howard's replies, I now see how this ties in to your request to not 
increment the SO major version number.

A few additional points to add into the conversation:

1. In Open MPI v4.0.0, we are finally not including some deleted MPI-1 
functions in mpi.h/mpif.h/mpi/mpi_f08 by default any more (i.e., they were 
actually deleted 5+ years ago, but we've still been carrying+building them).  A 
packager can configure with --enable-mpi1-compatibility if they want those 
functions in the devel headers (the symbols will still be available in all 
cases -- see below).

2. The .so versions (i.e., Libtool c:r:a versions) are determined by the 
*default* build options.  For example, a v4.0.0 default build will 
actually delete some MPI functions that were there in v3.1.x.  We talked about 
making the "a" value be dependent upon configure options... but that seems 
problematic / full of dragons.

--> We're not 100% what the Right Thing is to do here yet.
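For readers less familiar with the Libtool c:r:a scheme mentioned above: the standard Libtool rule on Linux maps a -version-info current:revision:age triple to a library filename of lib<name>.so.(current-age).(age).(revision). A minimal sketch of that mapping (the triple used below is made up for illustration, not Open MPI's real one):

```shell
# Map a Libtool -version-info c:r:a triple to the Linux shared-library
# filename, per the standard Libtool rule: libNAME.so.(c-a).(a).(r).
soname_for() {
    name="$1"; cra="$2"
    c=${cra%%:*}; rest=${cra#*:}   # split "c:r:a" into its three fields
    r=${rest%%:*}; a=${rest#*:}
    echo "lib${name}.so.$((c - a)).${a}.${r}"
}

soname_for mpi 60:20:20   # hypothetical triple; prints libmpi.so.40.20.20
```

This is why bumping "current" without bumping "age" changes the .so major number and forces dependent packages to be rebuilt, which is exactly the concern discussed here.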

3. Depending on how you packaged Open MPI v3.1.x, you'll likely also want to 
--enable-mpi-cxx in v4.0.0 to keep building the C++ MPI bindings.  These 
bindings are a separate library than the C bindings, meaning that when you 
enable this option, you get a new library when you install.

4. Per our discussion today, this is what we decided on as a roadmap for v4.0.x 
and v5.0.x.  We think that this is congruent with your request, but please 
chime in with your thoughts:

4a. In v4.0.0:
- By default, we'll disable the prototypes for various MPI-1 functions that 
were deleted from the MPI standard years ago.
- Packagers can --enable-mpi1-compat to restore these deleted MPI-1 
functions/globals in mpi.h/mpif.h/mpi+mpi_f08 modules.  Meaning: this option 
mainly affects *compiling* applications that use the deleted MPI-1 
functions/globals.
- Regardless of whether --enable-mpi1-compat is used, the symbols for these 
functions and globals will still be in libmpi.  Assuming the rest of the ABI 
analysis works out well and v4.0.0 is ABI compatible with v3.0.x, then you can 
have an app/library compiled against OMPI v3.0.x that will link successfully 
with v4.0.0, even if it uses these deleted MPI-1 functions/globals.
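The claim in 4a that the deleted-function symbols remain in libmpi can be spot-checked with nm on an installed build. Since that requires a real libmpi, the sketch below stubs the nm output (the addresses and symbol list are illustrative only, not taken from an actual library):

```shell
# Spot-check that a symbol is still exported by a shared library.
# In practice you'd feed in `nm -D --defined-only libmpi.so`; here the
# output is stubbed so the sketch is self-contained.
has_symbol() {
    # $1 = nm output, $2 = symbol name; match "ADDR T NAME" lines.
    printf '%s\n' "$1" | grep -Eq "[0-9a-f]* T $2\$"
}

nm_output="0000000000012340 T MPI_Address
0000000000012380 T MPI_Send"

has_symbol "$nm_output" MPI_Address && echo "MPI_Address still exported"
has_symbol "$nm_output" MPI_Reduce || echo "MPI_Reduce not in this stub"
```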

4b. In v5.0.0:

- The C++ bindings will no longer be available.
  Meaning: this library will no longer be generated.
- The deleted MPI-1 functions will no longer be available.
  Meaning: these functions/globals will disappear both from 
mpi.h/mpif.h/mpi+mpi_f08 and from all the corresponding libraries.

BOTTOM LINE: this is an announcement of our intent to actually delete the C++ 
bindings and the deleted MPI-1 functions in Open MPI v5.0.0.

As mentioned above, there is no strict timeline for v5.0.0 yet -- it'll be in 
2019 sometime (I'd estimate mid-year at the earliest, but that's a SWAG).

Thoughts?

-- 
Jeff Squyres
jsquy...@cisco.com


Re: [OMPI packagers] Shift focus to external libevent, hwloc, pmix

2018-07-17 Thread Jeff Squyres (jsquyres) via ompi-packagers
On Jul 10, 2018, at 5:11 PM, Geoffrey Paulsen  wrote:
> 
>In Open MPI v4.0, one of the changes that we're making is to prefer 
> external packages in configure versus our internal packages for libevent, 
> hwloc, and pmix (also see https://github.com/open-mpi/ompi/issues/5031).  
> There are some restrictions about only preferring external packages that are 
> "compatible", but in general we hope this should make redistributing Open MPI 
> easier as a whole.   We're working on this feature here 
> https://github.com/open-mpi/ompi/pull/5395 if you want to follow along.

Heads up: Arrgh.  It looks like we're going to miss this feature -- 
unfortunately, no one was able to devote enough time to finish it.  :-(

Sorry folks.  The existing "external" mechanisms will all still work, of 
course, but our configure script won't "prefer" them, so to speak.  So it's not 
a major deal for this feature to have missed v4.0.0, but it's still a bit of a 
bummer.

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI packagers] [OMPI devel] openmpi 3.1.x examples

2018-07-16 Thread Jeff Squyres (jsquyres) via ompi-packagers
On Jul 13, 2018, at 4:35 PM, Marco Atzeri  wrote:
> 
>> For one. The C++ bindings are no longer part of the standard and they are 
>> not built by default in v3.1x. They will be removed entirely in Open MPI 
>> v5.0.0.

Hey Marco -- you should probably join our packagers mailing list:

https://lists.open-mpi.org/mailman/listinfo/ompi-packagers

Low volume, but intended exactly for packagers like you.  It's fairly recent; 
we realized we needed to keep in better communication with our downstream 
packagers.

(+ompi-packagers to the CC)

As Nathan mentioned, we stopped building the MPI C++ bindings by default in 
Open MPI 3.0.  You can choose to build them by passing --enable-mpi-cxx to 
configure.

This is the current plan:

- In v4.0, we're no longer building a bunch of other deleted MPI-1 functions by 
default (which can be restored via --enable-mpi1-compat, and --enable-mpi-cxx 
will still work).

- In v5.0, delete all the C++ bindings and the deleted MPI-1 functions.

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI packagers] Open MPI packagers email list

2018-07-11 Thread Jeff Squyres (jsquyres) via ompi-packagers
Ralph: do you mean to say "Open MPI v4.0.x will be compatible with any PMIx >= 
v3.0.0"?


> On Jul 11, 2018, at 12:02 PM, r...@open-mpi.org wrote:
> 
> Yes - with any OMPI of v3.x and above
> 
>> On Jul 11, 2018, at 3:41 AM, Alastair McKinstry wrote:
>> 
>> Hi Jeff
>> 
>> Whats the relationship between OpenMPI and the PMIX 3.0.0 release  ?
>> 
>> Are they compatible ?
>> 
>> regards
>> 
>> Alastair
>> 
>> 
>> 
>> On 10/07/2018 23:19, Jeff Squyres (jsquyres) wrote:
>>> Yo Alastair --
>>> 
>>> Would you mind joining the Open MPI packagers email list?  It's very low 
>>> volume:
>>> 
>>> https://lists.open-mpi.org/mailman/listinfo/ompi-packagers
>>> 
>>> As part of our attempt to provide better communications to our downstream 
>>> packagers, we just sent out a note about the upcoming v4.0.0:
>>> 
>>> 
>>> https://www.mail-archive.com/ompi-packagers@lists.open-mpi.org/msg2.html
>>> 
>>> And there was one previous to this, too:
>>> 
>>> 
>>> https://www.mail-archive.com/ompi-packagers@lists.open-mpi.org/msg1.html
>>> 
>> 
>> -- 
>> Alastair McKinstry, https://diaspora.sceal.ie/u/amckinstry
>> Commander Vimes didn’t like the phrase “The innocent have nothing to fear,”
>> believing the innocent had everything to fear, mostly from the guilty but in 
>> the longer term
>> even more from those who say things like “The innocent have nothing to fear.”
>> - T. Pratchett, Snuff
>> 
> 


-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI packagers] Shift focus to external libevent, hwloc, pmix

2018-04-06 Thread Jeff Squyres (jsquyres)
In case you didn't see it, I've posted a concrete proposal for shifting 
configure's bias to using external libevent, hwloc, and pmix:

https://github.com/open-mpi/ompi/issues/5031

(We talked about this in kinda generalities in Dallas last month; this issue is 
an attempt to actually nail down some specifics before I go implement)

Comments / discussion welcome -- it would probably be best to reply/keep the 
conversation on that issue so that it's all in one place.

-- 
Jeff Squyres
jsquy...@cisco.com
