Jeff,
Thanks for the explanation. It's very clear.
Best regards,
Zhen
On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) wrote:
On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
>
> I have another question. I thought MPI_Test is a local call, meaning it
> doesn't send/receive messages. Am I misunderstanding something? Thanks again.
From the user's perspective, MPI_Test is a local call, in that it checks to
see whether the request has completed and returns immediately either way.
Under the covers, though, each call also runs Open MPI's progress engine,
which is what actually moves message data.
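For illustration, a minimal sketch of those semantics (the helper name is
mine, not from the thread):

#include <mpi.h>

// "Local" in the sense above: MPI_Test never blocks waiting on the remote
// side -- it only reports whether the request has completed. Each call
// also runs the library's progress engine, so message fragments may be
// pushed or pulled as a side effect.
bool test_once(MPI_Request& req)
{
    int flag = 0;
    MPI_Test(&req, &flag, MPI_STATUS_IGNORE);  // returns immediately
    return flag != 0;
}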
Jeff,
I have another question. I thought MPI_Test is a local call, meaning it
doesn't send/receive messages. Am I misunderstanding something? Thanks again.
Best regards,
Zhen
On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres) wrote:
Jeff,
The hardware limitation doesn't allow me to use anything other than TCP...
I think I have a good understanding of what's going on, and may have a
solution. I'll test it out. Thanks to you all.
Best regards,
Zhen
On Fri, May 6, 2016 at 7:13 AM, Jeff Squyres (jsquyres) wrote:
Jeff,
Thanks.
Best regards,
Zhen
On Thu, May 5, 2016 at 8:45 PM, Jeff Squyres (jsquyres) wrote:
It's taking so long because you are sleeping for 0.1 seconds between calls to
MPI_Test().
The TCP transport is only sending a few fragments of your message during each
iteration through MPI_Test (because, by definition, it has to return
"immediately"). Other transports do better at handing off the message to the
network hardware, which can then progress the transfer asynchronously.
2016-05-05 9:27 GMT-05:00 Gilles Gouaillardet:
> Out of curiosity, can you try
> mpirun --mca btl self,sm ...
>
Same as before. Many MPI_Test calls.
> and
> mpirun --mca btl self,vader ...
>
A requested component was not found, or was unable to be opened. This means
that this component is either not installed or is unable to be used on your
system.
Out of curiosity, can you try
mpirun --mca btl self,sm ...
and
mpirun --mca btl self,vader ...
and see if one performs better than the other?
Cheers,
Gilles
On Thursday, May 5, 2016, Zhen Wang wrote:
Gilles,
Thanks for your reply.
Best regards,
Zhen
On Wed, May 4, 2016 at 8:43 PM, Gilles Gouaillardet <gilles.gouaillar...@gmail.com> wrote:
Note there is no progress thread in Open MPI 1.10.
From a pragmatic point of view, that means that for "large" messages, no data
is sent in MPI_Isend; the data is sent when MPI "progresses", e.g. when you
call MPI_Test, MPI_Probe, MPI_Recv, or some similar subroutine.
In your example, the data is therefore only sent, a piece at a time, during
your periodic MPI_Test calls.
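A self-contained sketch (mine; the 100 MB size is an arbitrary stand-in) that
makes this visible by timing the two calls. Run with "mpirun -np 2 ./a.out":

#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Large enough to take the rendezvous path rather than eager send.
    std::vector<char> buf(100 * 1024 * 1024);
    if (rank == 0) {
        MPI_Request req;
        double t0 = MPI_Wtime();
        MPI_Isend(buf.data(), (int)buf.size(), MPI_CHAR, 1, 0,
                  MPI_COMM_WORLD, &req);
        double t1 = MPI_Wtime();  // returns almost instantly: no data sent yet
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        double t2 = MPI_Wtime();  // the actual transfer happened in here
        printf("Isend: %.6f s   Wait: %.6f s\n", t1 - t0, t2 - t1);
    } else if (rank == 1) {
        MPI_Recv(buf.data(), (int)buf.size(), MPI_CHAR, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}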
Hi,
I'm having a problem with Isend, Recv and Test in Linux Mint 16 Petra. The
source is attached.
Open MPI 1.10.2 is configured with
./configure --enable-debug --prefix=/home//Tool/openmpi-1.10.2-debug
The source is built with
~/Tool/openmpi-1.10.2-debug/bin/mpiCC a5.cpp
and run on one node.
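The attachment is not preserved in this archive; based on the thread (an
MPI_Isend of a large message on rank 0, MPI_Test polled every 0.1 s, and an
MPI_Recv on rank 1), a5.cpp presumably looked roughly like this hypothetical
reconstruction:

#include <mpi.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<char> buf(100 * 1024 * 1024);  // message size is a guess
    if (rank == 0) {
        MPI_Request req;
        MPI_Isend(buf.data(), (int)buf.size(), MPI_CHAR, 1, 0,
                  MPI_COMM_WORLD, &req);
        int flag = 0, tests = 0;
        while (!flag) {
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
            ++tests;
            if (!flag) usleep(100000);  // the 0.1 s sleep discussed above
        }
        printf("MPI_Test was called %d times\n", tests);
    } else if (rank == 1) {
        MPI_Recv(buf.data(), (int)buf.size(), MPI_CHAR, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}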