On Jun 26, 2007, at 5:06 PM, Georg Wassen wrote:
Hello all,
I temporarily worked around my former problem by using synchronous
communication and shifting the initialization into the first call of a
collective operation. Nevertheless, I found a performance bug in
btl_openib.
When I exec
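A minimal sketch of the kind of workaround described above (hypothetical
code, not Georg's actual test case, assuming a two-rank job): the eager
MPI_Send is replaced by the synchronous MPI_Ssend, and connection setup
is left to happen inside the first collective operation.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* First collective: any deferred connection/initialization work
     * happens implicitly here rather than during early point-to-point
     * traffic. */
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) {
        /* MPI_Ssend does not complete until the receiver has posted a
         * matching receive, avoiding the eager path. */
        MPI_Ssend(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

Build and run with, e.g., "mpicc workaround.c -o workaround && mpirun
-np 2 ./workaround".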
Awesome; ditto.
On Jun 27, 2007, at 4:19 PM, Terry D. Dontje wrote:
Cool, this sounds good enough to me.
--td
Brian Barrett wrote:
The function name changes are pretty obvious (s/mca_pml_base/ompi/),
and I thought I'd try something new and actually document the
interface in the header file:
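For illustration, here is what documenting the interface in the header
itself might look like; the function name ompi_send, its signature, and
the return codes below are assumptions for the sketch, not the actual
patch.

#include <stddef.h>            /* size_t */

struct ompi_communicator_t;    /* opaque forward declarations */
struct ompi_datatype_t;

/**
 * Post a blocking send on the given communicator.
 *
 * @param buf   Starting address of the send buffer.
 * @param count Number of elements to send.
 * @param type  Datatype of each element in the buffer.
 * @param dst   Rank of the destination process.
 * @param tag   Message tag.
 * @param comm  Communicator for the operation.
 *
 * @retval OMPI_SUCCESS The send completed successfully.
 * @retval OMPI_ERROR   An error occurred.
 */
int ompi_send(const void *buf, size_t count,
              struct ompi_datatype_t *type, int dst, int tag,
              struct ompi_communicator_t *comm);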
Hello,
FWIW: having to use PML_CALL() is by design. The MPI
API has all the error checking for ensuring that MPI_INIT
completed, error checking of parameters, etc. We never invoke the
top-level MPI API from elsewhere in the OMPI code base (except
from within ROMIO
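A rough sketch of the pattern being described, assuming a simplified
one-entry module table; the real PML_CALL macro and PML interface are
richer, but the point is the same: internal code dispatches straight to
the PML through a function-pointer table and skips the top-level MPI
API's MPI_INIT and parameter checks.

#include <stdio.h>

typedef struct {
    int (*send)(const void *buf, int count, int dst, int tag);
} pml_module_t;

static int example_send(const void *buf, int count, int dst, int tag)
{
    (void)buf;
    printf("pml send: count=%d dst=%d tag=%d\n", count, dst, tag);
    return 0;
}

/* The currently selected PML module. */
static pml_module_t pml = { example_send };

/* PML_CALL(send(...)) expands to pml.send(...): the caller never names
 * a concrete module, and no top-level argument validation runs. */
#define PML_CALL(a) pml.a

int main(void)
{
    int payload = 7;
    return PML_CALL(send(&payload, 1, 1, 0));
}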
Nobody except George has commented on or complained about this patch,
so I assume everybody except George is OK with it. And from George's
mails I can't tell whether he is OK with me applying it to the trunk
and simply thinks that further work should be done in this area. So
I'll ask him directly:
Gleb,
I'm not against the patch (at least not against your second version).
I really do want a dynamic way to feed the BTLs based on the order in
which they complete their previous send. Give me one or two days; I
want to test your patch on a heterogeneous Ethernet environment, and r
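A sketch of the scheduling idea, under the assumption that each BTL
signals completion and the sender always hands the next fragment to an
idle link: faster links complete more often and therefore carry more of
the message. The names and data structures are illustrative, not
OMPI's.

#include <stdio.h>

#define NUM_BTLS 3

typedef struct {
    const char *name;
    int busy;                  /* 1 while a send is in flight */
} btl_t;

static btl_t btls[NUM_BTLS] = {
    { "eth0", 0 }, { "eth1", 0 }, { "ib0", 0 }
};

/* Completion callback: the BTL becomes eligible for the next fragment. */
static void btl_complete(btl_t *btl)
{
    btl->busy = 0;
}

/* Pick the first idle BTL; a faster link finishes (and thus shows up
 * idle) more often, so it automatically receives more fragments. */
static btl_t *next_ready_btl(void)
{
    for (int i = 0; i < NUM_BTLS; i++) {
        if (!btls[i].busy) {
            return &btls[i];
        }
    }
    return NULL;               /* all links busy: caller queues the fragment */
}

int main(void)
{
    for (int frag = 0; frag < 5; frag++) {
        btl_t *b = next_ready_btl();
        if (b != NULL) {
            b->busy = 1;
            printf("fragment %d -> %s\n", frag, b->name);
            btl_complete(b);   /* pretend the send finished immediately */
        }
    }
    return 0;
}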
On Thu, Jun 28, 2007 at 12:02:14PM -0400, George Bosilca wrote:
> I'm not against the patch (at least not against your second version).
> I really do want a dynamic way to feed the BTLs based on the order in
> which they complete their previous send. Give me one or two days; I
> want to