Eugene,
No, what I'd like is that when doing something like
call mpi_bcast(data, 1, MPI_INTEGER, 0, .....)
the program continues only AFTER the Bcast has completed (so control is
not returned to the user early), but while the processes with rank > 0
are waiting in the Bcast they do not consume CPU resources.
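(One knob that may approximate this, if I read the Open MPI FAQ
correctly, is the mpi_yield_when_idle MCA parameter, which makes
waiting processes yield the CPU instead of polling aggressively,
e.g.

    mpirun --mca mpi_yield_when_idle 1 -np 4 ./myprog

where ./myprog stands for your application. The waiting ranks still
poll, just more politely, so it is only a partial answer.)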
I hope this is clearer; I apologize for not being clear in the first place.
Vincent
Eugene Loh wrote:
Vincent Rotival wrote:
The solution I settled on was for the root rank to Isend the data
separately to each of the other ranks, which use Irecv plus a loop
over MPI_Test to check whether the Irecv has completed. It might be
dirty, but it works much better than using Bcast.
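Roughly, the pattern is as follows (a minimal untested sketch, not my
actual code; the tag, buffer name, and the non-standard sleep()
extension are illustrative assumptions):

    program bcast_workaround
      use mpi
      implicit none
      integer :: ierr, rank, nprocs, i, req, buf
      integer :: status(MPI_STATUS_SIZE)
      integer, allocatable :: reqs(:)
      logical :: done

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      if (rank == 0) then
         buf = 42
         ! root sends the value to each other rank individually
         allocate(reqs(nprocs-1))
         do i = 1, nprocs - 1
            call MPI_Isend(buf, 1, MPI_INTEGER, i, 0, &
                           MPI_COMM_WORLD, reqs(i), ierr)
         end do
         call MPI_Waitall(nprocs-1, reqs, MPI_STATUSES_IGNORE, ierr)
      else
         ! non-root ranks post a non-blocking receive and poll it,
         ! sleeping between polls so they do not spin on the CPU
         call MPI_Irecv(buf, 1, MPI_INTEGER, 0, 0, &
                        MPI_COMM_WORLD, req, ierr)
         done = .false.
         do while (.not. done)
            call MPI_Test(req, done, status, ierr)
            ! sleep() is a non-standard but common extension
            ! (gfortran, ifort); any short sleep/yield works here
            if (.not. done) call sleep(1)
         end do
      end if

      call MPI_Finalize(ierr)
    end program bcast_workaround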
Thanks for the clarification.
But this strikes me more as a question about the MPI standard than
about the Open MPI implementation. That is, what you really want is
for the MPI API to support a non-blocking form of collectives. You
want control to return to the user program before the
barrier/bcast/etc. operation has completed. That's an API change.
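(For what it's worth, the MPI Forum later standardized exactly this
shape in MPI-3 as non-blocking collectives. A minimal sketch of
MPI_Ibcast, for anyone finding this thread later; it did not exist in
any released MPI standard when this was written:

    integer :: req, ierr, buf
    integer :: status(MPI_STATUS_SIZE)
    call MPI_Ibcast(buf, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, req, ierr)
    ! ... do useful work while the broadcast progresses ...
    call MPI_Wait(req, status, ierr)  ! control blocks only here

Whether the wait spins or sleeps is still up to the implementation.)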