Gleb,
Here are my findings with TCP and MX. For TCP results on
heterogeneous networks we will have to wait one or two days (I'm still
upgrading my cluster), but I got some very interesting results for MX.
Thanks to the Myricom guys for the access to their resources, I was able
to run Open MPI
On Thu, Jun 28, 2007 at 12:02:14PM -0400, George Bosilca wrote:
> I'm not against the patch (at least not against your second version).
> I really want to have a dynamic way to feed the BTLs based on the
> order in which they complete their previous send. Give me one or two
> days, I want to
Gleb,
I'm not against the patch (at least not against your second version).
I really want to have a dynamic way to feed the BTLs based on the
order in which they complete their previous send. Give me one or two
days; I want to test your patch in a heterogeneous Ethernet
environment, and
Nobody except George has commented on or complained about this patch, so I
assume everybody except George is OK with it. And from George's mails I
can't tell whether he is OK with me applying it to the trunk and simply
thinks that further work should be done in this area. So I'll ask him
On Jun 27, 2007, at 10:06 AM, Gleb Natapov wrote:
Btw, did you compare my patch with yours on your multi-NIC system ?
With my patch on our system with 3 networks (two 1 Gb/s and one 100 Mb/s)
I'm close to 99% of the total bandwidth. I'll try to see what I get
with yours.
Your patch SEGVs on my
On Tue, Jun 26, 2007 at 05:42:05PM -0400, George Bosilca wrote:
> Gleb,
>
> Simplifying the code and getting better performance is always a good
> approach (at least from my perspective). However, your patch still
> dispatches the messages over the BTLs in a round-robin fashion, which
>
Gleb,
Simplifying the code and getting better performance is always a good
approach (at least from my perspective). However, your patch still
dispatches the messages over the BTLs in a round-robin fashion, which
doesn't look to me like the best approach. How about merging your patch
and mine
Hello,
The attached patch improves the OB1 scheduling algorithm across multiple
links. The current algorithm performs very poorly if interconnects with very
different bandwidths are used: for big message sizes it always
divides traffic equally between all available interconnects. The attached
patch changes