Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Spenser Gilliland
George, I figured it out. The defined type was MPI_Type_vector(N, wrows, N, MPI_FLOAT, &mpi_all_unaligned_t); where it should have been MPI_Type_vector(wrows, wrows, N, MPI_FLOAT, &mpi_all_unaligned_t). This clears up all the errors. Thanks, Spenser On Thu, May 8, 2014 at 5:43 PM, S
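
For reference, a minimal sketch of the corrected type setup, assuming an N x N row-major float matrix with wrows = N/wsize rows per rank. The variable names follow the thread, except mpi_all_t, which is an illustrative name for the resized type; the full program is not reproduced in this archive.

    MPI_Datatype mpi_all_unaligned_t, mpi_all_t;

    /* wrows blocks of wrows floats, one per matrix row, stride N floats:
       a wrows x wrows sub-block of the N x N row-major matrix */
    MPI_Type_vector(wrows, wrows, N, MPI_FLOAT, &mpi_all_unaligned_t);

    /* shrink the extent to wrows floats so back-to-back blocks handed to
       MPI_Alltoall start wrows columns apart, as discussed earlier in the thread */
    MPI_Type_create_resized(mpi_all_unaligned_t, 0, wrows * sizeof(float),
                            &mpi_all_t);
    MPI_Type_commit(&mpi_all_t);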

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Spenser Gilliland
George, > The alltoall exchanges data from all nodes to all nodes, including the > local participant. So every participant will write the same amount of > data. Yes, I believe that is what my code is doing. However, I'm not sure why the out-of-bounds access is occurring. Can you be more specific? I r

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread George Bosilca
The alltoall exchanges data from all nodes to all nodes, including the local participant. So every participant will write the same amount of data. George. On Thu, May 8, 2014 at 6:16 PM, Spenser Gilliland wrote: > George, > >> Here is basically what is happening. On the top left, I depicted t

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Spenser Gilliland
George, > Here is basically what is happening. On the top left, I depicted the datatype > resulting from the vector type. The two arrows point to the lower bound and > upper bound (thus the extent) of the datatype. On the top right, the resized > datatype, where the ub is now moved 2 elements a

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread George Bosilca
Spenser, Here is basically what is happening. On the top left, I depicted the datatype resulting from the vector type. The two arrows point to the lower bound and upper bound (thus the extent) of the datatype. On the top right, the resized datatype, where the ub is now moved 2 elements after th
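
A small self-contained sketch of the bounds George describes, using illustrative sizes (N = 8, wrows = 2) rather than anything from the actual program: MPI_Type_get_extent shows where the default upper bound of the vector type sits, and how MPI_Type_create_resized moves it to wrows elements past the lower bound.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        const int N = 8, wrows = 2;          /* illustrative sizes only */
        MPI_Aint lb, extent;
        MPI_Datatype vec, resized;

        MPI_Init(&argc, &argv);

        MPI_Type_vector(wrows, wrows, N, MPI_FLOAT, &vec);
        MPI_Type_get_extent(vec, &lb, &extent);
        /* default: the ub sits at the end of the last block, so
           extent = ((wrows-1)*N + wrows) * sizeof(float) = 40 bytes here */
        printf("vector : lb=%ld extent=%ld\n", (long)lb, (long)extent);

        /* move the ub to wrows elements after the lb */
        MPI_Type_create_resized(vec, 0, wrows * sizeof(float), &resized);
        MPI_Type_get_extent(resized, &lb, &extent);
        printf("resized: lb=%ld extent=%ld\n", (long)lb, (long)extent);  /* 0, 8 */

        MPI_Type_free(&vec);
        MPI_Type_free(&resized);
        MPI_Finalize();
        return 0;
    }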

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Spenser Gilliland
Matthieu & George, Thank you both for helping me. I really appreciate it. > A simple test would be to run it with valgrind, so that out-of-bounds > reads and writes will be obvious. I ran it through valgrind (I left the command line I used in there so you can verify the methods). I am getting er

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Matthieu Brucher
A simple test would be to run it with valgrind, so that out-of-bounds reads and writes will be obvious. Cheers, Matthieu 2014-05-08 21:16 GMT+02:00 Spenser Gilliland : > George & Matthieu, > >> The Alltoall should only return when all data is sent and received on >> the current rank, so there sho
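
The exact valgrind command line from the thread is not preserved in this archive; a typical way to do this with Open MPI looks something like the following (the executable name is illustrative):

    mpirun -np 4 valgrind --track-origins=yes --log-file=vg.%p.log ./transpose

With --log-file=vg.%p.log each rank writes its report to its own vg.<pid>.log file, which keeps the per-rank output separated.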

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread George Bosilca
The segfault indicates that you write outside of the allocated memory (which conflicts with the ptmalloc library). I’m quite certain that you write outside the allocated array … George. On May 8, 2014, at 15:16, Spenser Gilliland wrote: > George & Matthieu, > >> The Alltoall should only

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Spenser Gilliland
George & Matthieu, > The Alltoall should only return when all data is sent and received on > the current rank, so there shouldn't be any race condition. You're right, this is MPI, not pthreads. That should never happen. Duh! > I think the issue is with the way you define the send and receive > buff

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread George Bosilca
I think the issue is with the way you define the send and receive buffer in the MPI_Alltoall. You have to keep in mind that the all-to-all pattern will overwrite the entire data in the receive buffer. Thus, starting from a relative displacement in the data (in this case matrix[wrank*wrows]), begs f
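
A minimal sketch of the buffer rule being described, assuming each rank owns wrows = N/wsize contiguous rows of an N x N float matrix and exchanges one resized wrows x wrows block (here called blk_t) with every rank; the names are illustrative, not the actual code from the thread.

    /* every rank sends one block to, and receives one block from, each of
       the wsize ranks, so both buffers must have room for wsize blocks
       starting at the pointer that is passed in -- i.e. the whole local
       panel, not a pointer offset into the middle of it such as
       &matrix[wrank * wrows] */
    float *panel = malloc((size_t)wrows * N * sizeof(float));   /* local rows    */
    float *trans = malloc((size_t)wrows * N * sizeof(float));   /* received data */

    MPI_Alltoall(panel, 1, blk_t,
                 trans, 1, blk_t, MPI_COMM_WORLD);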

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Matthieu Brucher
The Alltoall should only return when all data is sent and received on the current rank, so there shouldn't be any race condition. Cheers, Matthieu 2014-05-08 15:53 GMT+02:00 Spenser Gilliland : > George & other list members, > > I think I may have a race condition in this example that is masked

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Spenser Gilliland
George & other list members, I think I may have a race condition in this example that is masked by the print_matrix statement. For example, let's say rank one has a long sleep before reaching the local transpose: will the other ranks have completed the Alltoall, and when rank one reaches the local

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-07 Thread Spenser Gilliland
George, > Do you mind posting your working example here on the mailing list? > This might help future users understand how to correctly use the > MPI datatype. No problem. I wrote up this simplified example so others can learn to use the functionality. This is a matrix transpose operation usin
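
Spenser's program itself is truncated in this archive, so the listing below is only a reconstruction of the approach the thread describes: an N x N float matrix distributed by contiguous rows, a resized vector block type exchanged with MPI_Alltoall, then a local transpose of each received wrows x wrows block. Names such as wrank, wsize, wrows and N follow the thread; everything else is illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define N 8   /* matrix dimension; must be divisible by the number of ranks */

    int main(int argc, char **argv)
    {
        int wrank, wsize;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
        MPI_Comm_size(MPI_COMM_WORLD, &wsize);

        int wrows = N / wsize;               /* contiguous rows owned per rank */
        float *panel = malloc((size_t)wrows * N * sizeof(float));
        float *trans = malloc((size_t)wrows * N * sizeof(float));

        /* fill the local panel with globally unique values: A[gi][j] = gi*N + j */
        for (int i = 0; i < wrows; i++)
            for (int j = 0; j < N; j++)
                panel[i * N + j] = (float)((wrank * wrows + i) * N + j);

        /* wrows x wrows sub-block of the row-major panel ... */
        MPI_Datatype block_t, block_resized_t;
        MPI_Type_vector(wrows, wrows, N, MPI_FLOAT, &block_t);
        /* ... resized so block i starts wrows columns after block i-1 */
        MPI_Type_create_resized(block_t, 0, wrows * sizeof(float), &block_resized_t);
        MPI_Type_commit(&block_resized_t);

        /* block c of the panel goes to rank c; block r of trans comes from rank r */
        MPI_Alltoall(panel, 1, block_resized_t,
                     trans, 1, block_resized_t, MPI_COMM_WORLD);

        /* finish with a local transpose of each received wrows x wrows block */
        for (int b = 0; b < wsize; b++)
            for (int i = 0; i < wrows; i++)
                for (int j = i + 1; j < wrows; j++) {
                    float tmp = trans[i * N + b * wrows + j];
                    trans[i * N + b * wrows + j] = trans[j * N + b * wrows + i];
                    trans[j * N + b * wrows + i] = tmp;
                }

        /* each rank now holds rows [wrank*wrows, (wrank+1)*wrows) of the transpose */
        if (wrank == 0)
            for (int i = 0; i < wrows; i++) {
                for (int j = 0; j < N; j++)
                    printf("%6.0f ", trans[i * N + j]);
                printf("\n");
            }

        MPI_Type_free(&block_t);
        MPI_Type_free(&block_resized_t);
        free(panel);
        free(trans);
        MPI_Finalize();
        return 0;
    }

Run it with a rank count that divides N evenly, for example four ranks with N = 8.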

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-07 Thread George Bosilca
Spenser, Do you mind posting your working example here on the mailing list? This might help future users understand how to correctly use the MPI datatype. Thanks, George. On Wed, May 7, 2014 at 3:16 PM, Spenser Gilliland wrote: > George, > > Thanks for taking the time to respond to my qu

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-07 Thread Spenser Gilliland
George, Thanks for taking the time to respond to my question! I've succeeded in getting my program to run using the information you provided. I'm actually doing a matrix transpose with the data distributed on contiguous rows. However, the code I provided did not show this clearly. Thanks for your i

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-07 Thread George Bosilca
Spenser, There are several issues with the code you provided. 1. You are using a 1D process grid to create a 2D block cyclic distribution. That’s just not possible. 2. You forgot to take into account the extent of the datatype. By default the extent of a vector type starts from the first by
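
On point 2, a short sketch of the default-extent rule and the usual remedy (generic names, not the code from the original message):

    /* For MPI_Type_vector(count, blocklen, stride, MPI_FLOAT, &vec) the
       default extent runs from the first byte of the first block to the
       last byte of the last block:

           extent(vec) = ((count - 1) * stride + blocklen) * sizeof(float)

       so back-to-back elements of vec in a buffer are laid out almost
       count*stride floats apart.  To make successive blocks start blocklen
       floats apart instead, resize the type before committing it: */
    MPI_Type_create_resized(vec, 0, blocklen * sizeof(float), &vec_resized);
    MPI_Type_commit(&vec_resized);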

[OMPI users] MPI_Alltoall with Vector Datatype

2014-05-07 Thread Spenser Gilliland
Hi, I've recently started working with MPI and I noticed that when an Alltoall is utilized with a vector datatype, the call only uses the extent to determine the location for the back-to-back transactions. This makes using the vector type with collective communications difficult. For example: Using
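
In other words (a sketch with illustrative names, assuming sendcount = recvcount = 1 and a committed derived type blk): for each rank i the collective reads the outgoing block at sendbuf + i * extent(sendtype) and writes the incoming one at recvbuf + i * extent(recvtype), so with an unresized vector type consecutive blocks are placed a whole strided region apart.

    /* the block for/from rank i is taken at (char *)buf + i * extent(type);
       with a plain vector type that extent spans the entire strided region,
       which is what makes the placement surprising here */
    MPI_Alltoall(sendbuf, 1, blk, recvbuf, 1, blk, MPI_COMM_WORLD);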