Thanks George for the explanation,
with the default eager size, the first message is received *after* the
last message is sent, regardless of whether the progress thread is used
or not.
Another way to put it is that MPI_Isend() (and probably MPI_Irecv()
too) do not involve any progression, so I naively
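To make that concrete, here is a minimal sketch (my illustration, not
the attached test case; it assumes buf and count are set up elsewhere):
since the nonblocking calls do not progress the transfer by themselves,
one way to force progression is to poll the request with MPI_Test():

    /* illustrative sketch: polling drives the MPI progress engine */
    MPI_Request req;
    int done = 0;
    MPI_Isend(buf, count, MPI_CHAR, 1 /* dest */, 0 /* tag */,
              MPI_COMM_WORLD, &req);
    while (!done) {
        /* each MPI_Test() call gives the library a chance to progress */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        /* ... computation could overlap here ... */
    }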
On Thu, Jul 20, 2017 at 8:57 PM, Gilles Gouaillardet wrote:
> Sam,
>
>
> this example is using 8 MB messages
>
> if you are fine with using more memory, and your application should not
> generate too many unexpected messages, then you can bump the eager_limit,
> for example
>
> mpirun --mca btl_tcp_eager_limit $((8*1024*1024+128)) ...
Gilles Gouaillardet, on Fri, 21 Jul 2017 10:57:36 +0900, wrote:
> if you are fine with using more memory, and your application should not
> generate too many unexpected messages, then you can bump the eager_limit,
> for example
>
> mpirun --mca btl_tcp_eager_limit $((8*1024*1024+128)) ...
Hello,
George Bosilca, on Thu, 20 Jul 2017 19:05:34 -0500, wrote:
> Can you reproduce the same behavior after the first batch of messages?
Yes, putting a loop around the whole series of communications, even
with a 1-second pause in between, gets the same behavior repeated.
> Assuming the
Sam,
this example is using 8 MB messages
if you are fine with using more memory, and your application should not
generate too many unexpected messages, then you can bump the eager_limit,
for example
mpirun --mca btl_tcp_eager_limit $((8*1024*1024+128)) ...
worked for me
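Note that the same MCA parameter can also be set through Open MPI's
OMPI_MCA_ environment variable prefix, for example (the program name
below is a placeholder):

    # 8*1024*1024 + 128 = 8388736 bytes: the 8 MB payload plus a little
    # headroom, presumably for the message header
    export OMPI_MCA_btl_tcp_eager_limit=$((8*1024*1024+128))
    mpirun ./a.out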
Sam,
Open MPI aggregates messages only when network constraints prevent the
messages from being delivered in a timely manner. In this particular case
I think that our delayed business card exchange and connection setup is
delaying the delivery of the first batch of messages (and the BTL will
aggregate them).
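If connection setup is indeed the culprit, a possible workaround (a
sketch on my side, not something the test case does today) is to warm
the connection up with a zero-byte exchange, so that the business card
exchange happens before the batch you measure:

    /* warm-up sketch, assuming exactly 2 ranks; tag 99 is arbitrary */
    char dummy;
    if (myrank == 0) {
        MPI_Send(&dummy, 0, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
        MPI_Recv(&dummy, 0, MPI_CHAR, 1, 99, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(&dummy, 0, MPI_CHAR, 0, 99, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&dummy, 0, MPI_CHAR, 0, 99, MPI_COMM_WORLD);
    }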
Hello,
We are seeing a serious performance issue, which is due to missing
pipelining behavior in Open MPI when running over TCP. I have attached
a test case. Basically, what it does is:
if (myrank == 0) {
    for (i = 0; i < N; i++)
        MPI_Isend(...);
} else {
    for (i = 0; i < N; i++)
        MPI_Irecv(...);
}
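Spelled out, the pattern is roughly the following (a self-contained
sketch; N, the tags, and the final MPI_Waitall() are my reconstruction,
the attached file may differ):

    #include <mpi.h>
    #include <stdlib.h>

    #define N    8                  /* number of in-flight messages (assumed) */
    #define SIZE (8 * 1024 * 1024)  /* 8 MB per message, as in the thread */

    int main(int argc, char *argv[])
    {
        int myrank, i;
        char *buffers[N];
        MPI_Request requests[N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

        for (i = 0; i < N; i++)
            buffers[i] = malloc(SIZE);

        if (myrank == 0) {          /* run with exactly 2 ranks */
            for (i = 0; i < N; i++)
                MPI_Isend(buffers[i], SIZE, MPI_CHAR, 1, i,
                          MPI_COMM_WORLD, &requests[i]);
        } else {
            for (i = 0; i < N; i++)
                MPI_Irecv(buffers[i], SIZE, MPI_CHAR, 0, i,
                          MPI_COMM_WORLD, &requests[i]);
        }
        MPI_Waitall(N, requests, MPI_STATUSES_IGNORE);

        for (i = 0; i < N; i++)
            free(buffers[i]);
        MPI_Finalize();
        return 0;
    }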