Hello George,

        What you're saying here is very interesting. I am presently profiling 
communication patterns for Parallel Genetic Algorithms and could not figure out 
why the async versions tended to be worse than their sync counterparts (imho, 
that was counter-intuitive). So what you're basically saying is that the async 
communications actually add some synchronization overhead that can only be 
compensated for if the application overlaps computation with the async 
communications? Is there some "official" reference/documentation for this 
behaviour in Open MPI? (I know the MPI standard doesn't define the actual 
implementation of the communications and therefore lets the implementer do as 
he pleases.)
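
For concreteness, the kind of overlap I have in mind is sketched below (a 
minimal, untested example rather than my actual PGA code; do_local_work() and 
the buffer arguments are just placeholders):

    #include <mpi.h>

    /* Placeholder for the computation to overlap (e.g. fitness evaluation). */
    static void do_local_work(void) { }

    /* Start a non-blocking send, compute while it is (hopefully) in
     * flight, and only then pay the completion cost.  Error checking
     * omitted for brevity. */
    void exchange_with_overlap(double *buf, int count, int peer)
    {
        MPI_Request req;

        MPI_Isend(buf, count, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req);

        do_local_work();   /* must not modify buf before MPI_Wait */

        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }

The point being, if I read you correctly, that do_local_work() has to be long 
enough to hide the extra bookkeeping of the Isend + Wait pair.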

Thanks,

Eric

On October 15, 2007, George Bosilca wrote:
> Your conclusion is not necessarily/always true. The MPI_Isend is just  
> the non-blocking version of the send operation. As one can imagine, an  
> MPI_Isend + MPI_Wait increases the execution path [inside the MPI  
> library] compared with any blocking point-to-point communication,  
> leading to worse performance. The main interest of the MPI_Isend  
> operation is the possible overlap of computation with communications,  
> or the possible overlap between multiple communications.
> 
> However, depending on the size of the message this might not be true.  
> For large messages, in order to keep the memory usage on the receiver  
> at a reasonable level, a rendezvous protocol is used. The sender  
> [after sending a small packet] waits until the receiver confirms the  
> message exchange (i.e. the corresponding receive operation has been  
> posted) before sending the large data. Using MPI_Isend can lead to  
> longer execution times, as the real transfer will be delayed until the  
> program enters the next MPI call.
> 
> In general, using non-blocking operations can improve the performance  
> of the application, if and only if the application is carefully crafted.
> 
>    george.
> 
> On Oct 14, 2007, at 2:38 PM, Jeremias Spiegel wrote:
> 
> > Hi,
> > I'm working with Open MPI on an InfiniBand cluster and see a strange  
> > effect when using MPI_Isend(). To my understanding it should always  
> > be quicker than MPI_Send() and MPI_Ssend(), yet in my program both  
> > MPI_Send() and MPI_Ssend() reproducibly perform quicker than  
> > MPI_Isend(). Is there something obvious I'm missing?
> >
> > Regards,
> > Jeremias
> 
> 
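
P.S. If I follow the rendezvous explanation correctly, something along the 
lines of the sketch below (again untested, and the MPI_Test loop is my own 
illustration, not something prescribed by Open MPI) would be needed to give 
the library a chance to progress a large Isend before the final wait:

    #include <mpi.h>

    /* Placeholder for one slice of the computation. */
    static void do_some_work(void) { }

    /* Periodically re-entering the MPI library (here via MPI_Test)
     * lets it advance the rendezvous of a large message instead of
     * delaying the real transfer until MPI_Wait. */
    void send_large_with_progress(double *buf, int count, int peer, int slices)
    {
        MPI_Request req;
        int done = 0;
        int i;

        MPI_Isend(buf, count, MPI_DOUBLE, peer, 1, MPI_COMM_WORLD, &req);

        for (i = 0; i < slices; i++) {
            do_some_work();
            if (!done)
                MPI_Test(&req, &done, MPI_STATUS_IGNORE);
        }

        if (!done)
            MPI_Wait(&req, MPI_STATUS_IGNORE);
    }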



-- 
Eric Thibodeau
Neural Bucket Solutions Inc.
T. (514) 736-1436
C. (514) 710-0517
