Hi George, thank you very much for your answer. Could you please explain a
little more what you mean by "If you need to guarantee progress you might
either have your own thread calling MPI functions (such as MPI_Test)"? Regards,

Martín

________________________________
From: George Bosilca <bosi...@icl.utk.edu>
Sent: Tuesday, December 31, 2019 1:47 PM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Martín Morales <martineduardomora...@hotmail.com>
Subject: Re: [OMPI users] Non-blocking send issue

Martin,

The MPI standard does not mandate progress outside MPI calls, so 
implementations are free to provide, or not, asynchronous progress. Calling 
MPI_Test gives the MPI implementation an opportunity to progress its 
internal communication queues. However, an implementation may make a 
best-effort attempt to limit the time it spends in MPI_Test* so as to leave 
the application more time for computation, even when this limits its own 
internal progress. Thus, since a non-blocking collective is composed of a 
potentially large number of point-to-point communications, it might require a 
significant number of MPI_Test calls to reach completion.
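
For example, driving a request to completion by polling might look like the 
following minimal sketch (it assumes a pending request named "request"; the 
names are illustrative, not taken from your code):

int complete = 0;
while (!complete) {
    /* Each call gives the implementation a chance to advance its queues. */
    MPI_Test(&request, &complete, MPI_STATUS_IGNORE);
    /* Useful computation can be interleaved here between polls. */
}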

If you need to guarantee progress you might either have your own thread calling 
MPI functions (such as MPI_Test), or use the asynchronous progress that some 
MPI libraries provide. For the latter option, read the documentation of your 
MPI implementation to see how to enable asynchronous progress.
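
For the first option, a minimal sketch of a dedicated progress thread might 
look like the following (this assumes MPI was initialized with 
MPI_Init_thread requesting MPI_THREAD_MULTIPLE, and that the request and the 
stop flag are shared with the main thread; all names here are illustrative):

#include <mpi.h>
#include <pthread.h>
#include <unistd.h>

/* Shared with the main thread; illustrative names. A real implementation
   should protect these with atomics or a mutex. */
static MPI_Request pending_request;
static volatile int stop_progress = 0;

static void *progress_loop(void *arg) {
    int complete = 0;
    while (!stop_progress && !complete) {
        /* Any MPI call from this thread lets the library advance its
           internal queues; calling MPI concurrently from several threads
           requires MPI_THREAD_MULTIPLE support. */
        MPI_Test(&pending_request, &complete, MPI_STATUS_IGNORE);
        usleep(1000); /* back off so the thread does not spin hot */
    }
    return NULL;
}

/* In main(), request the thread level and start the thread, e.g.:
   int provided;
   MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
   // verify provided == MPI_THREAD_MULTIPLE before creating the thread
   pthread_t t;
   pthread_create(&t, NULL, progress_loop, NULL);
*/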

  George.


On Mon, Dec 30, 2019 at 2:31 PM Martín Morales via users 
<users@lists.open-mpi.org> wrote:
Hello all!
I'm using OMPI 4.0.1 and I see some strange (or at least unexpected) behaviour 
with some non-blocking send calls: MPI_Isend and MPI_Ibcast. I really need 
asynchronous sending, so I don't call MPI_Wait after the send call (MPI_Isend 
or MPI_Ibcast); instead, I check "on demand" with MPI_Test to verify whether 
the send is complete. The test I'm running sends just a single int value. 
Here is some code (with MPI_Ibcast):

***SENDER***

// Note that this uses an intercommunicator
MPI_Ibcast(&send_some_int_data, 1, MPI_INT, MPI_ROOT, mpi_intercomm, &request_sender);
// MPI_Wait(&request_sender, MPI_STATUS_IGNORE); <-- I don't want this


***RECEIVER***

MPI_Ibcast(&recv_some_int_data, 1, MPI_INT, 0, parentcomm, &request_receiver);
MPI_Wait(&request_receiver, MPI_STATUS_IGNORE);

***TEST RECEPTION (same program instance as the sender)***

void test_reception() {

    int request_complete;

    MPI_Test(&request_sender, &request_complete, MPI_STATUS_IGNORE);

    if (request_complete) {
        ...
    } else {
        ...
    }

}

But when I invoke this test function after some time has elapsed since the 
send, the request isn't complete, and I have to invoke the test function again 
and again... a variable number of times, until it finally completes. It's just 
a single int being sent, nothing more (all on a local machine); such a delay 
makes no sense. The request should be complete on the first test invocation.

If, instead of this, I uncomment the unwanted MPI_Wait (i.e., making the 
request effectively synchronous), it completes immediately, as expected.
If I send with MPI_Isend I get the same behaviour.

I don't understand what is going on. Any help would be much appreciated.

Regards.

Martín
