May I open it and assign it to you preliminarily, for at least the background and 
issue discussion (from the emails), if not the solution?
You are welcome to propose the solution too, but, you know, that's not required to 
get us started 🙂



Anthony Skjellum, PhD

Professor of Computer Science and Chair of Excellence

Director, SimCenter

University of Tennessee at Chattanooga (UTC)

tony-skjel...@utc.edu  [or skjel...@gmail.com]

cell: 205-807-4968


________________________________
From: mpi-forum <mpi-forum-boun...@lists.mpi-forum.org> on behalf of Rolf 
Rabenseifner via mpi-forum <mpi-forum@lists.mpi-forum.org>
Sent: Saturday, September 28, 2019 3:53 AM
To: Anthony Skjellum <skjel...@auburn.edu>
Cc: Rolf Rabenseifner <rabenseif...@hlrs.de>; Main MPI Forum mailing list 
<mpi-forum@lists.mpi-forum.org>; Simone Chiocchetti 
<simone.chiocche...@unitn.it>; MPI-3 Collective Subgroup Discussions 
<mpiwg-c...@lists.mpi-forum.org>
Subject: Re: [Mpi-forum] Error/gap in MPI_NEIGHBOR_ALLTOALL/ALLGATHER

Yes, if after seven years of MPI_NEIGHBOR_ALLTOALL neither users know 
whether their MPI library is wrong nor implementors are sure how to 
implement this routine for 1 or 2 processes in a cyclic Cartesian direction, 
then some wording is missing in the MPI standard.
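
For concreteness, here is a minimal sketch of the two-process case (my own 
illustration, not the program from the bug report or the attached slides): 
a 1-D periodic Cartesian communicator in which MPI_Cart_shift returns the 
same peer as both the left and the right neighbor, so MPI_NEIGHBOR_ALLTOALL 
has to keep two messages between the same pair of processes apart.

/* Sketch: 1-D periodic Cartesian topology with exactly two processes.
 * Run with: mpirun -np 2 ./a.out
 * MPI_Cart_shift reports the SAME peer as both the left and the right
 * neighbor, so MPI_Neighbor_alltoall exchanges two messages with one
 * partner; an implementation that tells them apart only by (source, tag)
 * may deliver them into the wrong receive blocks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[1]    = {2};
    int periods[1] = {1};                         /* cyclic boundary condition */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    int rank, left, right;
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_shift(cart, 0, 1, &left, &right);    /* here: left == right */

    int sendbuf[2] = {10 * rank + 1, 10 * rank + 2};  /* [0] to left, [1] to right   */
    int recvbuf[2] = {-1, -1};                        /* [0] from left, [1] from right */
    MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, cart);

    printf("rank %d: left=%d right=%d  recv(from left)=%d recv(from right)=%d\n",
           rank, left, right, recvbuf[0], recvbuf[1]);

    MPI_Finalize();
    return 0;
}

A halo exchange needs recv(from left)=12 and recv(from right)=11 on rank 0 
(and 2, 1 on rank 1), i.e. the value that the left neighbor sent towards the 
positive direction; the question is whether the standard guarantees this 
pairing when the same rank appears twice in the neighbor list.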

Best regards
Rolf

----- Anthony Skjellum <skjel...@auburn.edu> wrote:
> Rolf, let’s open a ticket.
>
> Anthony Skjellum, PhD
> 205-807-4968
>
>
> > On Sep 27, 2019, at 6:09 PM, Rolf Rabenseifner via mpi-forum 
> > <mpi-forum@lists.mpi-forum.org> wrote:
> >
> > Dear MPI collective WG,
> >
> >    you may try to resolve the problem of a possibly wrong
> >    MPI specification of MPI_NEIGHBOR_ALLTOALL/ALLGATHER
> >
> > Dear MPI Forum member,
> >
> >    you may own or use an MPI implementation that implements
> >    MPI_NEIGHBOR_ALLTOALL/ALLGATHER
> >    with race conditions if the number of processes in one dimension
> >    is only 1 or 2 and that dimension is periodic
> >
> > The problem was reported as a bug in the Open MPI library
> > by Simone Chiocchetti from DICAM at the University of Trento,
> > but it seems to be a bug in the MPI specification,
> > or at least an advice to implementors is missing.
> >
> > I produced a set of animated slides.
> > Please look at them in presentation mode with animation.
> >
> > Have fun with a problem that clearly prevents the use
> > of the MPI_NEIGHBOR_... routines with cyclic boundary conditions
> > if one wants to verify that mpirun -np 1 does
> > the same as the sequential code.
> >
> > Best regards
> > Rolf
> >
> > --
> > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseif...@hlrs.de .
> > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
> > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
> > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
> > Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
> > <neighbor_mpi-3_bug.pptx>
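
The mpirun -np 1 check mentioned in my mail quoted above can be sketched as 
follows (again my own illustration, not the attached slides): with one 
process and a periodic dimension, both neighbors are the process itself, and 
matching the sequential cyclic code means the two self-messages have to 
cross over.

/* Sketch: 1-D periodic Cartesian topology with a single process.
 * Run with: mpirun -np 1 ./a.out
 * Both neighbors are rank 0 itself; the question raised here is whether
 * the two self-messages are guaranteed to land in the "crossed" receive
 * blocks that correspond to the sequential cyclic code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[1] = {1}, periods[1] = {1};
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);

    int left, right;
    MPI_Cart_shift(cart, 0, 1, &left, &right);    /* left == right == 0 */

    int sendbuf[2] = {1, 2};   /* [0] to the left neighbor, [1] to the right   */
    int recvbuf[2] = {0, 0};   /* [0] from the left,        [1] from the right */
    MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, cart);

    /* The sequential cyclic code corresponds to recvbuf[0]==2, recvbuf[1]==1. */
    printf("left=%d right=%d  recvbuf = {%d, %d}\n",
           left, right, recvbuf[0], recvbuf[1]);

    MPI_Finalize();
    return 0;
}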

--
Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseif...@hlrs.de .
High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
_______________________________________________
mpi-forum mailing list
mpi-forum@lists.mpi-forum.org
https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
