Hello,

I'd like to report a bug with OpenMPI version 2.0.1.

Attached to this email is a C program that runs with only one MPI rank, uses MPI_Dist_graph_create to create a graph topology with no communication edges, and then calls MPI_Neighbor_alltoall with that graph topology. This results in an error similar to the following:


[Dans-MacBook-Air:52159] *** An error occurred in MPI_Neighbor_alltoall
[Dans-MacBook-Air:52159] *** reported by process [2935488513,0]
[Dans-MacBook-Air:52159] *** on communicator MPI COMMUNICATOR 3 CREATE FROM 0
[Dans-MacBook-Air:52159] *** MPI_ERR_INTERN: internal error
[Dans-MacBook-Air:52159] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[Dans-MacBook-Air:52159] ***    and potentially your MPI job)


This symptom is exhibited by OpenMPI at git tag v2.0.1 and at git hash a49422f (the latest commit to master). Note that MPICH 3.3a1 runs this program without error.

I have a report that OpenMPI v1.10.4 also runs this program without error, hence I'm labeling it a regression. I am confirming this now and will try git bisect to find the point of regression.
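For reference, the bisect would look roughly like the following. This is only a sketch: it assumes the known-good v1.10.4 tag and the failing commits are reachable in the same clone, and the per-step build and test commands are abbreviated rather than spelled out here.

git bisect start
git bisect bad a49422f
git bisect good v1.10.4
# at each step: rebuild, run the attached program, then mark the commit
# with "git bisect good" or "git bisect bad"
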
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  /* Describe a distributed graph in which rank 0 is a source with zero
     outgoing edges, i.e. a topology with no communication edges at all. */
  int n = 1;
  int sources[1] = {0};
  int degrees[1] = {0};
  int destinations[1];  /* never read, since degrees[0] == 0 */
  int reorder = 0;
  MPI_Comm graph_comm;
  MPI_Dist_graph_create(MPI_COMM_WORLD, n, sources, degrees, destinations,
      MPI_UNWEIGHTED, MPI_INFO_NULL, reorder, &graph_comm);

  /* With no neighbors, this call should complete without moving any data. */
  int sendbuf[1];
  int recvbuf[1];
  MPI_Neighbor_alltoall(sendbuf, 1, MPI_INT,
      recvbuf, 1, MPI_INT, graph_comm);

  MPI_Comm_free(&graph_comm);
  MPI_Finalize();
  return 0;
}
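
To reproduce with the affected build, something along these lines should suffice (the source filename is arbitrary and the exact invocation is not part of the report; only a single rank is needed, as noted above):

mpicc neighbor_alltoall_bug.c -o neighbor_alltoall_bug
mpirun -np 1 ./neighbor_alltoall_bug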