> On 16 Nov 2022, at 5:11 PM, Wes Bland <wes...@wesbland.com> wrote:
> 
> The rules say you need to get four IMOVE (voting) orgs to support creating a 
> WG at a meeting:
> 
> > Working groups can be established at MPI Forum meetings once at least four 
> > IMOVE organizations indicate support for that proposed Working Group.
> 
> So feel free to propose it at the December meeting and have some folks lined 
> up to give it a thumbs up. Of course, in the meantime, folks are free to 
> start getting together and discussing the topic. The old mailing list from 
> 2008 is still around <https://lists.mpi-forum.org/mailman/listinfo/mpi3-abi>, 
> but I’d recommend you not use it since it uses our old naming scheme. Once 
> the group is official, I’ll make the new list and GitHub org.

Thanks, I’ll rally the votes.

Interested parties should ping me on Slack to be added to #wg-abi.

Somebody was optimistic enough to create https://github.com/mpiwg-abi/ this 
morning, but it’s a mystery who 😉

Jeff

> Thanks,
> Wes
> 
>> On Nov 16, 2022, at 1:54 AM, Jeff Hammond via mpi-forum 
>> <mpi-forum@lists.mpi-forum.org> wrote:
>> 
>> I don't know what we do to create new working groups with the post-COVID 
>> rules, but I would like to create and chair a WG focused on ABI 
>> standardization.
>> 
>> There is strong support for this effort in many user communities, including 
>> developers and maintainers of Spack, mpi4py, Julia MPI (MPI.jl), Rust MPI 
>> (rsmpi), PETSc and NVHPC SDK, to name a few.  There are even a few 
>> implementers who have expressed support, but I won't name them for their own 
>> protection.
>> 
>> The problem is so exasperating for our users that there are at least two 
>> different projects devoted to mitigating ABI problems (not including the shims 
>> built into the aforementioned MPI wrappers):
>> 
>> https://github.com/cea-hpc/wi4mpi
>> https://github.com/eschnett/MPItrampoline
>> 
>> I've written about this a bit already, for those who are interested.  More 
>> material will be forthcoming once I have time for more experiments.
>> 
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI.md
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_2.md
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_3.md
>> https://github.com/jeffhammond/blog/blob/main/MPI_Needs_ABI_Part_4.md
>> 
>> I understand this is a controversial topic, particularly for implementers.  
>> I hope that we can proceed objectively.
>> 
>> Thanks,
>> 
>> Jeff
>> 
>> -- 
>> Jeff Hammond
>> jeff.scie...@gmail.com
>> http://jeffhammond.github.io/
>> _______________________________________________
>> mpi-forum mailing list
>> mpi-forum@lists.mpi-forum.org
>> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
> 
