Hi again... I was on a break from the Xensocket stuff... This time, some general 
questions...

Forgive me for the question... it's a quick one, related to some of my 
development work on Xen; I will explain the rationale after the question. What 
if I have multiple Ethernet cards (say 5) on each of my two quad-core machines? 
The IP addresses (and of course the subnets) are:
Machine A                    Machine B
eth0 is y.y.1.a              y.y.1.z
eth1 is y.y.4.b              y.y.4.y
eth2 is y.y.4.c              ...
eth3 is y.y.4.d              ...

 ...
Now, from the FAQ (item 9: "How does Open MPI know which TCP addresses are 
routable to each other?"), it is clear that if I want to run a job over multiple 
Ethernet interfaces, I can use --mca btl_tcp_if_include eth0,eth1. This will run 
the job over two of the subnets, utilizing both Ethernet cards. Is it doing some 
sort of load balancing, or some round-robin mechanism? What part of the code is 
responsible for this work?
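
Just to be concrete, here is roughly the kind of invocation I mean (the 
hostfile name, process count and binary name are only placeholders for my 
actual job):

    mpirun -np 8 --hostfile myhosts \
        --mca btl tcp,self \
        --mca btl_tcp_if_include eth0,eth1 \
        ./my_mpi_app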

Now, what if I want to run the job with --mca btl_tcp_if_include 
eth1,eth2,eth3,eth4? Notice that all of these ethNs are on the same subnet. Even 
from the FAQ (which answers most of our lame questions), it's not entirely clear 
how communication will be done. Each process will have tcp_num_btls equal to the 
number of interfaces, but then what? Is it some sort of load balancing or 
similar that simply isn't visible in tcpdump?
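
In case it clarifies what I am trying to observe, this is roughly how I have 
been looking at it (same placeholder hostfile/binary as above; I just watch 
each NIC in turn on one of the hosts while the job runs):

    mpirun -np 8 --hostfile myhosts \
        --mca btl tcp,self \
        --mca btl_tcp_if_include eth1,eth2,eth3,eth4 \
        ./my_mpi_app

    # on one of the hosts, repeated for each interface:
    tcpdump -n -i eth2 tcp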

Another related question: what if I want to run an 8-process job (on the 2x4 
cluster) and pin each process to a network interface? To my understanding, Open 
MPI does not give any control over allocating an IP/interface to a process (the 
way MPICH does), or is there some magical --mca thingie? I think the only way to 
go is adding routing table entries... am I thinking in the right direction? If 
yes, then the performance of my boxes decreases when I try to force the routing 
(so obviously something is terribly wrong with my configuration).
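
By "forcing the routing" I mean roughly the following host routes on Machine A 
(y.y.4.y is B's eth1 address from the table above, and y.y.4.x just stands in 
for one of B's other addresses that I elided):

    # send traffic for each of B's addresses out of a specific NIC on A
    route add -host y.y.4.y dev eth1
    route add -host y.y.4.x dev eth2
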
It's related to my Xen (virtualization) work. We are in a scenario where all 
the virtual machines on one Xen host need to use eth2 (which is virtualized but 
optimized for intra-domain communication), and for communication outside the 
physical machine (i.e. to other Xen hosts) we want to use eth1. Is 'route add' 
the only way again?
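
What I had in mind, inside each guest, is something like the following (the 
addresses and the /24 netmask are placeholders for illustration only; the host 
route for a co-resident guest is more specific than the subnet route, so it 
should take precedence):

    # co-resident guests via the intra-host NIC, everything else on the
    # cluster subnet via eth1
    route add -host y.y.4.c dev eth2
    route add -net y.y.4.0 netmask 255.255.255.0 dev eth1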

I will ask the Xensocket BTL-related questions later :)

Best Regards and thanks in advance,
Muhammad Atif


