Yes, if after seven years of MPI_NEIGHBOR_ALLTOALL, users still do not know 
whether their MPI library is wrong, and implementors are not sure how to 
implement this routine for 1 or 2 processes in a cyclic Cartesian direction, 
then some wording is missing in the MPI standard.
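For illustration only, here is a plain-Python sketch (not MPI itself, and not any particular implementation) of the neighbor ranks that MPI_CART_SHIFT reports in one periodic dimension. It shows why 1 or 2 processes are the problematic cases: both neighbors coincide, so the two point-to-point transfers inside MPI_NEIGHBOR_ALLTOALL go between the same (source, destination, communicator) pair and, without distinct tags, can be matched in either order.

```python
# Plain-Python sketch: neighbor ranks in one periodic Cartesian dimension,
# as MPI_Cart_shift would report them for displacements -1 and +1.
def cart_neighbors(rank, nprocs, periodic=True):
    if periodic:
        left = (rank - 1) % nprocs
        right = (rank + 1) % nprocs
        return left, right
    # Non-periodic case: out-of-range neighbors would be MPI_PROC_NULL.
    left = rank - 1 if rank > 0 else None
    right = rank + 1 if rank < nprocs - 1 else None
    return left, right

# nprocs == 1: both neighbors are the process itself.
print(cart_neighbors(0, 1))   # (0, 0)
# nprocs == 2: both neighbors are the same other process, so rank 0 sends
# two messages to rank 1 and receives two messages from rank 1; the two
# receives can match either send -> ambiguous buffer order.
print(cart_neighbors(0, 2))   # (1, 1)
# nprocs >= 3: left and right neighbors are distinct, no ambiguity.
print(cart_neighbors(0, 3))   # (2, 1)
```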

Best regards
Rolf

----- Anthony Skjellum <skjel...@auburn.edu> wrote:
> Rolf, let’s open a ticket.
> 
> Anthony Skjellum, PhD
> 205-807-4968
> 
> 
> > On Sep 27, 2019, at 6:09 PM, Rolf Rabenseifner via mpi-forum 
> > <mpi-forum@lists.mpi-forum.org> wrote:
> > 
> > Dear MPI collective WG,
> > 
> >    you may want to try to resolve this problem with a possibly 
> >    incorrect MPI specification for MPI_NEIGHBOR_ALLTOALL/ALLGATHER.
> > 
> > Dear MPI Forum member,
> > 
> >    you may own or use an MPI implementation that implements
> >    MPI_NEIGHBOR_ALLTOALL/ALLGATHER
> >    with race conditions if the number of processes in one
> >    dimension is only 1 or 2 and periodic==true.
> > 
> > The problem was reported as a bug in the Open MPI library 
> > by Simone Chiochetti from DICAM at the University of Trento, 
> > but it seems to be a bug in the MPI specification,
> > or at least advice to implementors is missing.
> > 
> > I produced a set of animated slides.
> > Please look at them in presentation mode with animation.
> > 
> > Have fun with a problem that clearly prevents the use
> > of the MPI_NEIGHBOR_... routines with cyclic boundary conditions
> > if one wants to verify that mpirun -np 1 does
> > the same as the sequential code.
> > 
> > Best regards
> > Rolf
> > 
> > -- 
> > Dr. Rolf Rabenseifner . . . . . . . . . .. email rabenseif...@hlrs.de .
> > High Performance Computing Center (HLRS) . phone ++49(0)711/685-65530 .
> > University of Stuttgart . . . . . . . . .. fax ++49(0)711 / 685-65832 .
> > Head of Dpmt Parallel Computing . . . www.hlrs.de/people/rabenseifner .
> > Nobelstr. 19, D-70550 Stuttgart, Germany . . . . (Office: Room 1.307) .
> > <neighbor_mpi-3_bug.pptx>
> > _______________________________________________
> > mpi-forum mailing list
> > mpi-forum@lists.mpi-forum.org
> > https://lists.mpi-forum.org/mailman/listinfo/mpi-forum
