Are both the IB HCA and the Ethernet interfaces on the same physical bus?

If they're not, the need for multiplexing them is diminished (but, of course, it depends on what you're trying to do -- if everything is using huge memory transfers, then your bottleneck will be RAM, not the bus that the NICs reside on).

That being said, something we have not explored at all is the idea of multiplexing at the MPI layer. Perhaps something like "this is a low priority communicator; I want you to only use the 'tcp' BTL on it" and "this is a high priority communicator; I want you to only use the 'openib' BTL on it".

I haven't thought at all about whether that is possible. It would probably take some mucking around in both the bml and the ob1 pml. Hmm. It may or may not be worth it, but I raise the possibility...
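To make the idea concrete, here is a minimal sketch (in plain Python, purely illustrative) of what per-communicator transport selection could look like if the bml/ob1 layers honored a priority hint. The names `Communicator`, `select_btl`, and the priority values are assumptions for illustration, not Open MPI's actual internals or API.

```python
# Illustrative model of the proposed idea: route traffic on a
# low-priority communicator over the "tcp" BTL and traffic on a
# high-priority communicator over the "openib" BTL.
# All names here are hypothetical, not Open MPI internals.

from dataclasses import dataclass


@dataclass
class Communicator:
    name: str
    priority: str  # "low" or "high"


def select_btl(comm: Communicator) -> str:
    """Pick a transport based on the communicator's priority hint."""
    return {"low": "tcp", "high": "openib"}[comm.priority]


bulk = Communicator("background_transfers", "low")
halo = Communicator("halo_exchange", "high")
print(select_btl(bulk))   # -> tcp
print(select_btl(halo))   # -> openib
```

The point of the sketch is only that the routing decision would be made per communicator rather than globally, which is what distinguishes this from today's process-wide BTL selection via MCA parameters.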


On Apr 19, 2007, at 9:18 PM, po...@cc.gatech.edu wrote:

Hi,

Some of our clusters use both Gigabit Ethernet and InfiniBand,
so we are trying to multiplex them.

Thanks and Regards
Pooja


On Thu, Apr 19, 2007 at 06:58:37PM -0400, po...@cc.gatech.edu wrote:

I am Pooja working with chaitali on this project.
The idea behind this is that while running parallelized code, if a huge
chunk of serial computation is encountered, the underlying network
infrastructure can be used for some other data transfer.
This increases network utilization.
But this (non-MPI) data transfer should not keep MPI calls blocked,
so we need to give them priorities.
We are also trying to predict the behavior of the code (e.g., whether
more MPI calls will arrive at short intervals or only after a large
interval) based on previous calls.
As a result we can make this mechanism more efficient.
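The prediction step described above could be as simple as smoothing the history of inter-call gaps and only scheduling a background transfer when the next MPI call is predicted to be far enough away. Here is a hedged sketch in Python; the class name, the smoothing factor `alpha`, and the 10 ms threshold are all illustrative assumptions, not part of the project described.

```python
# Illustrative predictor: estimate the gap until the next MPI call
# with an exponential moving average of previously observed gaps,
# and allow a low-priority (non-MPI) transfer only when the predicted
# gap exceeds a threshold. All parameters are made up for the sketch.

class IntervalPredictor:
    def __init__(self, alpha=0.5):
        self.alpha = alpha       # smoothing factor for the moving average
        self.estimate = None     # current estimate of the inter-call gap (s)

    def observe(self, interval):
        """Record a measured gap (seconds) between two MPI calls."""
        if self.estimate is None:
            self.estimate = interval
        else:
            self.estimate = (self.alpha * interval
                             + (1 - self.alpha) * self.estimate)

    def allow_background_transfer(self, min_gap=0.010):
        """Permit a background transfer only if the next MPI call is
        predicted to be at least min_gap seconds away."""
        return self.estimate is not None and self.estimate >= min_gap


p = IntervalPredictor()
for gap in (0.050, 0.040, 0.060):   # observed gaps: ~50 ms apart
    p.observe(gap)
print(p.allow_background_transfer())  # -> True
```

A real implementation would sit below the MPI layer and measure call timestamps itself; the sketch only shows the shape of the heuristic.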

Ok, so you have a cluster with InfiniBand, and while the network traffic
is low you want to utilize the InfiniBand network for other data transfers
with a lower priority?

What does this have to do with TCP, or are you using TCP over InfiniBand?

Regards
Christian Leber

--
http://rettetdieti.vde-uni-mannheim.de/

_______________________________________________
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel




--
Jeff Squyres
Cisco Systems
