Dear all, I updated the issue based on the discussion today:
https://github.com/mpi-forum/mpi-issues/files/5086553/mpi-report-issue120-topol-2020-08-17-annotated-pages334-335.pdf
contains the change for the no-no vote tomorrow.

Best regards, and thanks to all for this thorough review,
Rolf

----- Original Message -----
> From: "Main MPI Forum mailing list" <mpi-forum@lists.mpi-forum.org>
> To: "Main MPI Forum mailing list" <mpi-forum@lists.mpi-forum.org>
> Cc: "Guillaume Mercier" <guillaume.merc...@u-bordeaux.fr>
> Sent: Monday, August 3, 2020 6:39:24 PM
> Subject: Re: [Mpi-forum] Final Reminder: Deadline for Ballots, Readings, ...
> for the August MPI Forum Meeting is Tomorrow/Monday
>
> Hi all,
>
> The Hardware Topologies Working Group wants to announce the first vote
> for MPI_Cart_create_weighted / topology-aware Cartesian communicators.
>
> Annotated PDF as read in the June 2020 meeting:
> https://github.com/mpi-forum/mpi-issues/files/4823780/mpi-report-issue120-topol-2020-06-14-annotated.pdf
>
> Issue: https://github.com/mpi-forum/mpi-issues/issues/120
> PR #98: https://github.com/mpi-forum/mpi-standard/pull/98
>
> This is one of the MPI-4.0 tickets for performance enhancements.
>
> It fills the gap that the MPI-1.1 MPI_Dims_create is neither hardware-
> nor application-topology aware.
>
> To this end, it combines two functionalities:
> - the factorization of the number of processes into the given number
>   of dimensions (hardware- and application-unaware in MPI_Dims_create),
> - the reordering of the processes and the creation of comm_cart
>   (as in MPI_Cart_create),
> into one new hardware- and application-aware routine:
> MPI_Cart_create_weighted.
>
> The hardware awareness comes through old_comm,
> the application awareness through the weights.
>
> The design of the interface was developed with the help of many forum
> members (thanks a lot!) and follows the principles we already use for
> the weights of graph topologies for application-topology awareness.
> This also means there is enough room for further research and
> development: an info argument allows additional features in the
> future or for specific vendor platforms.
>
> The interface is as simple as possible, in line with the main goal of
> MPI as defined on page 1 of all MPI standards:
>
> "The goal of the Message-Passing Interface simply stated is to
> develop a widely used standard for writing message-passing programs.
> As such the interface should establish a practical, portable,
> efficient, and flexible standard for message passing."
>
> For Cartesian applications, the new interface fulfills this short list
> of being "practical, portable, efficient, and flexible" and can be
> widely used by all applications that already use the old MPI-1.1
> combination MPI_Dims_create + MPI_Cart_create, to achieve better
> hardware awareness and, through the new weights, application awareness
> as well.
>
> For the portable development of MPI applications, it is as important
> that the new interface be part of the MPI standard as it was for the
> old Cartesian interface. In addition, the best optimization may
> require knowledge of the hardware that is not disclosed to the
> public, which would prevent third-party solutions; this serves as a
> second argument for voting the new interface into the MPI standard.
>
> Therefore, we will be glad if you vote for this new
> performance-oriented interface for MPI-4.0.
>
> And thank you for reading this announcement to the end.
>
> Best regards,
> Guillaume and the Hardware Topologies Working Group
>
> _______________________________________________
> mpi-forum mailing list
> mpi-forum@lists.mpi-forum.org
> https://lists.mpi-forum.org/mailman/listinfo/mpi-forum

--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseif...@hlrs.de
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530
University of Stuttgart . . . . . . . . .. fax ++49(0)711/685-65832
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307)