Spenser,

There are several issues with the code you provided.
1. You are using a 1D process grid to create a 2D block-cyclic distribution. That's just not possible.

2. You forgot to take into account the extent of the datatype. By default the extent of a vector type runs from its first byte to its last, so in your case it encompasses data belonging to the other processes as well. You have to resize the datatype so that consecutive instances start at the correct offset for the next process.

I put below a modified version of your code that does something closer to what you expect. Have you looked at the subarray and darray constructors (MPI_Type_create_subarray / MPI_Type_create_darray)? They might give you the special datatype you want, or at least a hint on how to construct the type correctly.

  George.

#include <mpi.h>
#include <stdio.h>

void print_matrix(float* A, int size, int wrank)
{
    int i, j;
    for (i = 0; i < size; i++) {
        printf("<%2d> ", wrank);
        for (j = 0; j < size; j++) {
            printf("%6.2g ", A[i*size+j]);
        }
        printf("\n");
    }
    printf("\n");
}

int main(int argc, char* argv[])
{
    float A[4][4];
    int i, j, wsize, wrank;
    MPI_Datatype temp, mpi_all_t;

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);  /* Assume 2 */
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Column block of 4/wsize columns; resize the type so consecutive
     * instances start one block apart instead of one full row span apart. */
    MPI_Type_vector(4, 4/wsize, 4, MPI_FLOAT, &temp);
    MPI_Type_create_resized(temp, 0, 4/wsize*sizeof(float), &mpi_all_t);
    MPI_Type_free(&temp);
    MPI_Type_commit(&mpi_all_t);

    for (i = 0; i < 4; i++)
        for (j = 0; j < 4; j++)
            A[i][j] = i*4+j;

    /* Using the same buffer for send and receive is erroneous in MPI;
     * MPI_IN_PLACE does the exchange in place (send args are ignored). */
    MPI_Alltoall(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                 A, 1, mpi_all_t, MPI_COMM_WORLD);

    print_matrix(&A[0][0], 4, wrank);

    MPI_Type_free(&mpi_all_t);
    MPI_Finalize();
    return 0;
}

On May 7, 2014, at 04:48, Spenser Gilliland <spen...@gillilanding.com> wrote:

> Hi,
>
> I've recently started working with MPI and I noticed that when an
> Alltoall is used with a vector datatype, the call only uses the
> extent to determine the location of the back-to-back transactions.
> This makes using the vector type with collective communications
> difficult.
>
> For example, using the code at the bottom,
> I think I should get
>
>  0  1  8  9
>  4  5 12 13
>  2  3 10 11
>  6  7 14 15
>
> However, the result is
>
>  0  1  2  3
>  4  5  8  9
>  6  7 10 11
>  x  x 14 15
>
> Is this the way it is supposed to be?
>
> FYI: This is version 1.6.2 in Rocks 6
>
> Thanks,
> Spenser
>
> float A[4][4];
> int wsize, wrank;
> MPI_Comm_size(MPI_COMM_WORLD, &wsize); // Assume 2
> MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
> MPI_Type_vector(4/wsize, 4/wsize, 4, MPI_FLOAT, &mpi_all_t);
> MPI_Type_commit(&mpi_all_t);
>
> for(int i = 0; i < 4; i++) for (j = 0; j < 4; j++) {
>   A[i][j] = i*4+j;
> }
>
> MPI_Alltoall(A[rank*4/wsize*4], 1, mpi_all_t,
>              A[rank*4/wsize*4], 1, mpi_all_t,
>              MPI_COMM_WORLD);
>
> for(int i = 0; i < 4; i++) {
>   for (j = 0; j < 4; j++) {
>     printf("%6.2g")
>   }
>   printf("\n");
> }
>
> --
> Spenser Gilliland
> Computer Engineer
> Doctoral Candidate
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users