The MPI standard requires that when a free-running task keeps posting
isends to a task that is not keeping up on its receives, the sending task
will switch to synchronous-send behavior BEFORE the receive side runs out
of memory and fails.

There should be no need for the sender to use MPI_Issend, because the MPI
library should do it for you (under the covers).
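
For illustration only (this is not from the standard and, as noted above,
should not normally be necessary): if you did want to throttle a sender by
hand, every Nth message could be posted in synchronous mode and waited on.
The interval, tag, and message layout below are assumptions, not anything
from this thread.

#include <mpi.h>

#define SYNC_EVERY 1000   /* illustrative interval, tune as needed */

/* Send one message; every SYNC_EVERY-th message goes out as MPI_Issend
 * and is waited on.  A synchronous-mode send cannot complete until the
 * receiver has matched it, so the sender cannot run arbitrarily far
 * ahead of the receiver. */
void send_msg(const int *buf, int count, int dest, int tag,
              long msg_no, MPI_Comm comm, MPI_Request *req)
{
    if (msg_no % SYNC_EVERY == 0) {
        MPI_Issend(buf, count, MPI_INT, dest, tag, comm, req);
        MPI_Wait(req, MPI_STATUS_IGNORE);   /* receiver has caught up here */
    } else {
        MPI_Isend(buf, count, MPI_INT, dest, tag, comm, req);
        /* Caller must complete *req later (MPI_Wait/MPI_Test) and keep
         * buf valid until then. */
    }
}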


Dick Treumann  -  MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363



From:    Gijsbert Wiesenekker <gijsbert.wiesenek...@gmail.com>
To:      Open MPI Users <us...@open-mpi.org>
Date:    05/11/2010 03:19 AM
Subject: [OMPI users] Questions about MPI_Isend
Sent by: users-boun...@open-mpi.org





An OpenMPI program of mine that uses MPI_Isend and MPI_Irecv crashes my
Fedora Linux kernel (invalid opcode) after some non-reproducible time,
which makes it hard to debug (there is no trace, even with the debug
kernel, and if I run it under valgrind it does not crash).
My guess is that the kernel crash is caused by OpenMPI running out of
memory because too many MPI_Isend messages have been sent but not yet
processed.
My questions are:
What does the MPI specification say about the behaviour of MPI_Isend when
many messages have been sent but have not been processed yet? Will it
fail? Will it block until more memory becomes available (I hope not,
because this would cause my program to deadlock)?
Ideally I would like to check how many MPI_Isend messages have not been
processed yet, so that I can stop sending messages if there are 'too many'
waiting. Is there a way to do this?
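
One possible way to do this (a sketch, not something MPI provides
directly; MAX_PENDING and the compaction scheme are assumptions): keep
each MPI_Isend request in a fixed-size window, reap completed ones with
MPI_Testsome, and block with MPI_Waitsome when the window is full.  The
number of entries still pending is the number of sends not yet matched.

#include <mpi.h>

#define MAX_PENDING 64            /* assumed cap on in-flight sends */

static MPI_Request pending[MAX_PENDING];
static int         npending = 0;

/* Drop completed requests from the window; MPI_Testsome sets finished
 * entries to MPI_REQUEST_NULL, so compact the array around them. */
static void reap_completed(void)
{
    int indices[MAX_PENDING], ncompleted, i, kept = 0;

    MPI_Testsome(npending, pending, &ncompleted, indices,
                 MPI_STATUSES_IGNORE);
    for (i = 0; i < npending; i++)
        if (pending[i] != MPI_REQUEST_NULL)
            pending[kept++] = pending[i];
    npending = kept;
}

/* Post an MPI_Isend, but never allow more than MAX_PENDING unfinished
 * sends; when the window is full, block until at least one completes.
 * Each buffer must stay valid until its request has completed. */
void throttled_isend(const void *buf, int count, MPI_Datatype type,
                     int dest, int tag, MPI_Comm comm)
{
    reap_completed();
    while (npending == MAX_PENDING) {
        int indices[MAX_PENDING], ncompleted, i, kept = 0;
        MPI_Waitsome(npending, pending, &ncompleted, indices,
                     MPI_STATUSES_IGNORE);
        for (i = 0; i < npending; i++)
            if (pending[i] != MPI_REQUEST_NULL)
                pending[kept++] = pending[i];
        npending = kept;
    }
    MPI_Isend(buf, count, type, dest, tag, comm, &pending[npending++]);
}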

Regards,
Gijsbert


