George, thanks for the quick answer!
I thought about using an alltoall before the alltoallv, but it "feels"
like having two all-to-all operations might end up slow, at least
doubling the latency. It might still be faster than a large bunch of
sendrecvs, of course. I'll simply have to run some short tests anyway
to see whether the alltoall/alltoallv combo is too slow.
Thanks again!
Daniel
On 2008-09-10 17:10:06, George Bosilca <bosi...@eecs.utk.edu> wrote:
Daniel,
Your understanding of the MPI standard requirement with regard to
MPI_Alltoallv is now 100% accurate. The send count and datatype should
match what the receiver expects. You can always use an MPI_Alltoall
before the MPI_Alltoallv to exchange the lengths that you expect.
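For illustration, a minimal sketch of this two-step approach (the
function name, variable names, and the use of MPI_DOUBLE are
assumptions made for the example, not taken from the original mail):

#include <mpi.h>
#include <stdlib.h>

/* Sketch: exchange the per-destination send counts with MPI_Alltoall,
 * then use them as recvcounts (and build rdispls) for MPI_Alltoallv. */
void variable_alltoall(const double *sendbuf, const int *sendcounts,
                       const int *sdispls, MPI_Comm comm,
                       double **recvbuf_out, int **recvcounts_out)
{
    int size;
    MPI_Comm_size(comm, &size);

    /* Step 1: every rank tells every other rank how many elements
     * it is going to send to it. */
    int *recvcounts = malloc(size * sizeof(int));
    MPI_Alltoall((void *)sendcounts, 1, MPI_INT,
                 recvcounts, 1, MPI_INT, comm);

    /* Step 2: build receive displacements from the now-known counts. */
    int *rdispls = malloc(size * sizeof(int));
    int total = 0;
    for (int i = 0; i < size; i++) {
        rdispls[i] = total;
        total += recvcounts[i];
    }
    double *recvbuf = malloc((total > 0 ? total : 1) * sizeof(double));

    /* Step 3: counts now match pairwise, as the standard requires. */
    MPI_Alltoallv((void *)sendbuf, (int *)sendcounts, (int *)sdispls,
                  MPI_DOUBLE, recvbuf, recvcounts, rdispls, MPI_DOUBLE,
                  comm);

    free(rdispls);
    *recvbuf_out = recvbuf;
    *recvcounts_out = recvcounts;
}

The extra MPI_Alltoall costs one more collective, but afterwards the
send and receive counts match pairwise, which is what the standard text
quoted further down requires.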
george.
On Sep 10, 2008, at 1:46 PM, Daniel Spångberg wrote:
Dear all,
First some background; the real question is at the end of this
(longish) mail.
I have a problem where I need to exchange data between all processes.
The data is unevenly distributed, and I thought at first I could use
MPI_Alltoallv to transfer it. However, in my case the receivers do not
know how many data items the senders will send. It is relatively easy
to arrange things so that each receiver can figure out the maximum
number of items a sender could send, so I set the recvcounts to that
maximum and the sendcounts to the actual number of elements (smaller
than recvcounts).
The MPI Forum description (from
http://www.mpi-forum.org/docs/mpi21-report/node99.htm) reads as
follows:
MPI_ALLTOALLV(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)
IN   sendbuf      starting address of send buffer (choice)
IN   sendcounts   integer array equal to the group size specifying the number of elements to send to each processor
IN   sdispls      integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j
IN   sendtype     data type of send buffer elements (handle)
OUT  recvbuf      address of receive buffer (choice)
IN   recvcounts   integer array equal to the group size specifying the number of elements that can be received from each processor
IN   rdispls      integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i
IN   recvtype     data type of receive buffer elements (handle)
IN   comm         communicator (handle)
In particular, the wording for recvcounts is "the number of elements
that can be received from each processor"; it does not say that this
must be exactly the same as the number of elements actually sent. The
description also mentions that the operation should work like a set of
independent MPI_Send/MPI_Recv calls, and in that case the amount of
data sent does not need to match the receive count that was posted.
I, unfortunately, missed the following:
The type signature associated with sendcounts[j], sendtypes[j] at
process i must be equal to the type signature associated with
recvcounts[i], recvtypes[i] at process j. This implies that the amount
of data sent must be equal to the amount of data received, pairwise
between every pair of processes. Distinct type maps between sender and
receiver are still allowed.
And the Open MPI man page says:

  When a pair of processes exchanges data, each may pass different
  element count and datatype arguments so long as the sender specifies
  the same amount of data to send (in bytes) as the receiver expects to
  receive.
I did test my program with different send/recv counts; sometimes it
works, sometimes it does not. Even if it worked I would not be
comfortable relying on it.
The question is: if there is no way of determining on the receiving
end how much data a sender will send, I see two options: either always
transmit too much data using MPI_Alltoall(v), or cook up my own routine
based on point-to-point calls (probably MPI_Sendrecv is the best
option). Am I missing something?
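For illustration, a rough sketch of the point-to-point alternative
mentioned above: a ring of MPI_Sendrecv calls where each receive posts
the maximum possible count and recovers the actual count from the
status. The function and variable names, the per-source buffer layout,
and the MPI_DOUBLE datatype are assumptions made for the example:

#include <mpi.h>
#include <stddef.h>

/* recvbuf is assumed to have room for 'maxrecv' elements per source
 * rank; recvcounts is filled with the number actually received. */
void variable_exchange_ring(const double *sendbuf, const int *sendcounts,
                            const int *sdispls, double *recvbuf,
                            int maxrecv, int *recvcounts, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    for (int step = 0; step < size; step++) {
        int dst = (rank + step) % size;          /* who we send to this round  */
        int src = (rank - step + size) % size;   /* who sends to us this round */
        MPI_Status status;

        /* Unlike MPI_Alltoallv, the receive count here is only an
         * upper bound on what may arrive. */
        MPI_Sendrecv((void *)(sendbuf + sdispls[dst]), sendcounts[dst],
                     MPI_DOUBLE, dst, 0,
                     recvbuf + (size_t)src * maxrecv, maxrecv,
                     MPI_DOUBLE, src, 0, comm, &status);

        /* The number of elements actually received is recovered
         * from the status. */
        MPI_Get_count(&status, MPI_DOUBLE, &recvcounts[src]);
    }
}

The point of this pattern is that MPI_Get_count reports how many
elements actually arrived, something the collective call cannot tell
you; the price is one MPI_Sendrecv round per rank instead of a single
collective.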
--Daniel Spångberg
Materials Chemistry
Uppsala University
Sweden
--
Daniel Spångberg
Materialkemi
Uppsala Universitet