Re: [OMPI users] Non-blocking send issue

2020-01-02 Thread George Bosilca via users
This goes back to the fact that you, as the developer, are best
placed to know exactly when asynchronous progress is needed by your
algorithm, so from that perspective you can provide that progress in the
most timely manner. One way to force MPI to progress is to spawn
another thread (using pthread_create, for example) dedicated to giving
the MPI library opportunities to execute progress. This communication thread
can post a non-blocking ANY_SOURCE receive with a tag that will never
match any other message; that way, you can safely cancel the pending
request upon completion of your app. You can then rely on this thread to
call MPI_Test when you know you need guaranteed progress, and the rest of
the time you can park it on some synchronization primitive
(mutex/condition).
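
A minimal sketch of that pattern in C, assuming MPI was initialized with
MPI_THREAD_MULTIPLE; PROGRESS_TAG, progress_ctx_t, and the field names are
illustrative, not from any library:

#include <mpi.h>
#include <pthread.h>
#include <stdbool.h>

#define PROGRESS_TAG 32767            /* assumption: no real message uses this tag */

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  wake;
    bool need_progress;               /* set by the app when progress is needed */
    bool shutdown;                    /* set by the app at the end of the run   */
} progress_ctx_t;

static void *progress_thread(void *arg)
{
    progress_ctx_t *ctx = (progress_ctx_t *)arg;
    MPI_Request dummy_req;
    int dummy_buf, flag;

    /* Non-blocking ANY_SOURCE receive on a tag that never matches, so the
       library always has a pending request to progress against. */
    MPI_Irecv(&dummy_buf, 1, MPI_INT, MPI_ANY_SOURCE, PROGRESS_TAG,
              MPI_COMM_WORLD, &dummy_req);

    pthread_mutex_lock(&ctx->lock);
    while (!ctx->shutdown) {
        /* Park on the condition variable until the app asks for progress. */
        while (!ctx->need_progress && !ctx->shutdown)
            pthread_cond_wait(&ctx->wake, &ctx->lock);
        ctx->need_progress = false;
        pthread_mutex_unlock(&ctx->lock);

        /* Each MPI_Test call gives the library a chance to progress. */
        MPI_Test(&dummy_req, &flag, MPI_STATUS_IGNORE);

        pthread_mutex_lock(&ctx->lock);
    }
    pthread_mutex_unlock(&ctx->lock);

    /* The tag guarantees the receive never matched, so cancelling is safe. */
    MPI_Cancel(&dummy_req);
    MPI_Wait(&dummy_req, MPI_STATUS_IGNORE);
    return NULL;
}

/* The application starts the thread with
   pthread_create(&tid, NULL, progress_thread, &ctx) and signals ctx.wake
   whenever it needs guaranteed progress. */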

  George.


Re: [OMPI users] Non-blocking send issue

2019-12-31 Thread Martín Morales via users
Hi George, thank you very much for your answer. Can you please explain a
little more about "If you need to guarantee progress you might either have your
own thread calling MPI functions (such as MPI_Test)"? Regards,

Martín


Re: [OMPI users] Non-blocking send issue

2019-12-31 Thread George Bosilca via users
Martin,

The MPI standard does not mandate progress outside MPI calls, so
implementations are free to provide, or not, asynchronous progress. Calling
MPI_Test gives the MPI implementation an opportunity to progress
its internal communication queues. However, an implementation may make a
best effort to limit the time it spends in MPI_Test* so as to leave the
application more time for computation, even when this limits its
own internal progress. Thus, since a non-blocking collective is composed of a
potentially large number of point-to-point communications, it might take
a significant number of MPI_Test calls to reach completion.

If you need to guarantee progress you might either have your own thread
calling MPI functions (such as MPI_Test) or use the asynchronous
progress some MPI libraries provide. For the latter option, read the
documentation of your MPI implementation to see how to enable asynchronous
progress.
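
To make the first option concrete, here is a minimal polling sketch against
the MPI_Ibcast from the original report; do_some_computation is a hypothetical
placeholder for application work, and some_int_data/mpi_intercomm are the
variables from the question:

int completed = 0;
MPI_Request req;

/* Start the non-blocking broadcast on the intercommunicator. */
MPI_Ibcast(&some_int_data, 1, MPI_INT, MPI_ROOT, mpi_intercomm, &req);

while (!completed) {
    do_some_computation();  /* hypothetical application work */
    /* Each MPI_Test call gives the library a chance to progress the bcast. */
    MPI_Test(&req, &completed, MPI_STATUS_IGNORE);
}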

  George.


[OMPI users] Non-blocking send issue

2019-12-30 Thread Martín Morales via users
Hello all!
I'm on OMPI 4.0.1 and I see strange (or at least unexpected) behaviour
with some non-blocking send calls: MPI_Isend and MPI_Ibcast. I really need
asynchronous sending, so I don't use MPI_Wait after the send call (MPI_Isend or
MPI_Ibcast); instead I check "on demand" with MPI_Test to verify whether
the send is complete. The test I'm running sends just an int value. Here is
some code (with MPI_Ibcast):

***SENDER***

// Note that it uses an intercommunicator
MPI_Ibcast(&some_int_data, 1, MPI_INT, MPI_ROOT, mpi_intercomm,
&request_sender);
// MPI_Wait(&request_sender, MPI_STATUS_IGNORE); <-- I don't want this


***RECEIVER***

MPI_Ibcast(&some_int_data, 1, MPI_INT, 0, parentcomm, &request_receiver);
MPI_Wait(&request_receiver, MPI_STATUS_IGNORE);

***TEST RECEPTION (same sender instance program)***

void test_reception() {

    int request_complete;

    MPI_Test(&request_sender, &request_complete, MPI_STATUS_IGNORE);

    if (request_complete) {
        ...
    } else {
        ...
    }
}

But when I invoke this test function after some time has elapsed since the
send, the request isn't complete and I have to invoke the test function again
and again... a variable number of times, until it finally completes. It was
just an int that was sent, nothing more (all on a local machine); such a delay
makes no sense. The request should be complete on the first test invocation.

If instead I uncomment the unwanted MPI_Wait (i.e., treating it like a
synchronous request), it completes immediately, as expected.
If I send with MPI_Isend I get the same behaviour.

I don't understand what is going on. Any help would be very appreciated.

Regards.

Martín