On 19/03/2026 11:49, Drew Parsons wrote:
On 2026-03-19 11:47, Adrian Bunk wrote:
On Thu, Mar 19, 2026 at 02:13:25AM +0100, Drew Parsons wrote:
Source: petsc
Followup-For: Bug #1102465

As pointed out already in this bug, this warning is not a bug in PETSc.
It is working entirely as intended.

No information was given with this bug reopening.


The bug is still present in 3.24.4+dfsg1-1 (see i386):

https://tracker.debian.org/pkg/mpich
...
Autopkgtest for petsc/3.24.4+dfsg1-1: amd64: Pass, arm64: Pass, i386: Regression (reference), ppc64el: Pass, s390x: Pass
...
I presume it refers to the 32-bit build failures of reverse
dependencies, which are happening due to the upgrade of mpich from v4 to v5.

Yes.

petsc (and all other MPI packages) needs to be rebuilt against
the new mpich (on 32-bit arches)
(the mpich upgrade needs a transition bug).
...

If this is true, then mpich v5 needs a new soname (or at least a package
rename) to ensure that packages built against v4 won't use v5.

One of the following claims is incorrect:
- mpich claims v5 is compatible with v4
- petsc claims v4 and v5 are incompatible
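The tension between those two claims comes down to how strict the version check is. A minimal Python sketch (illustrative only, not PETSc's actual code or data) of the kind of build-time-versus-runtime comparison at issue: PETSc warns whenever the version it was built against differs from the one found at runtime, while mpich's compatibility claim implies such a mismatch can be harmless.

```python
# Illustrative sketch, not PETSc's actual code: compare the MPI version
# recorded at build time against the version the runtime library reports,
# and emit a warning on any mismatch.

def mpi_version_warning(built_against, running_with):
    """Return a warning string when the versions differ, else None."""
    if built_against != running_with:
        return ("MPI mismatch: built with %d.%d, running with %d.%d"
                % (built_against + running_with))
    return None

print(mpi_version_warning((4, 3), (5, 0)))  # a v4 -> v5 jump triggers the warning
print(mpi_version_warning((4, 3), (4, 3)))  # identical versions: no warning
```

Under a check this strict, the warning fires for the v4 to v5 jump even if the ABI really is compatible, which is consistent with "working entirely as intended" from PETSc's side while still being noise from mpich's side.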

Thanks for the extra detail.

I think part of the situation comes from the historical development of the MPI implementations, and from PETSc trying to manage bug reports coming from different MPI libraries and their different versions.

True, mpich is still provided by libmpich12, so it's not a question of transitioning libmpich4 to libmpich5 (I had forgotten that point). So there is some ABI compatibility. It is possible that PETSc is being over-sensitive due to problems that arose in the past but are no longer significant.

There is (supposed to be) no transition; MPI standard v5 designates a standard ABI, but no change in the API.

I will need to dig into PETSc to understand how it sees two versions.

MPICH provides a new libmpi_abi.so that supplies the new ABI, so as not to break libmpich.so.12.

I will need to see if this is specified in v5, or what OpenMPI is expected to do.

The question of ABI compatibility will get even more complex, or simpler depending on your perspective, when the MPI implementations move to MPI standard 5, which will provide a common stable ABI enabling swap-out between OpenMPI and MPICH (and other implementations). I say "future", but actually that's what mpich v5 already is: it supports the new MPI-5 standard.

The apparent compatibility that we're seeing between mpich v4 and v5 might be part of that move to a more stable interface, which is a new development. Once the MPI-5 standard and its common ABI are fully implemented and in routine use (i.e. once users are routinely using openmpi v5 or mpich v5), I expect PETSc upstream will be able to reevaluate and relax their version tests. At the moment their attention is on ensuring past library versions continue to work; as an example of the type of bug they have to deal with, nvidia's cuda fortran compiler still does not support an 8-year-old fortran standard [1].


In the short term, I think the simplest thing for petsc is just to rebuild for the new version 3.24.5, which will reset the 32-bit builds to mpich v5.

Beyond that, perhaps both the openmpi and mpich packages will want to be reconfigured so that either can be installed as the preferred MPI on any system. The situation would then probably be similar to what we do with BLAS/LAPACK and the various alternative optimised BLAS implementations.

Yes, work will be needed in both to make them intercompatible.

Alastair

Drew

[1] true story.  https://gitlab.com/petsc/petsc/-/work_items/1879
