A common problem I have seen is that not all nodes in the cluster are
configured identically.  For example, can you confirm that eth1 is your
GigE interface on all nodes?  It may have accidentally been configured
as your IPoIB interface on some of them.
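
A quick way to check (the host names below are hypothetical -- substitute
the entries from your machinefile) is to look at what eth1 actually is on
each node, e.g.:

    # An Ethernet NIC shows "Link encap:Ethernet" and MTU 1500 (or ~9000
    # with jumbo frames); an IPoIB interface shows a different link type
    # and typically MTU 2044.
    for h in node1 node2; do
        echo "== $h =="
        ssh $h /sbin/ifconfig eth1
    done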

If that's not the case, let us know and we'll track this down further.
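
Also, since those nodes have IB and Myrinet hardware as well, it may be
worth double checking that the TCP BTL is really the one carrying the
700 MB/s runs.  As a sketch (same machinefile as in your command), you can
restrict Open MPI to the TCP and self BTLs explicitly:

    mpirun --mca btl tcp,self --mca btl_tcp_if_include eth1 \
           -machinefile hf -np 2 IMB-MPI1

If the bandwidth then drops to GigE levels everywhere, the 700 MB/s was
coming from another BTL rather than from eth1.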

Thanks!


> -----Original Message-----
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Swamy Kandadai
> Sent: Friday, June 23, 2006 10:22 PM
> To: us...@open-mpi.org
> Subject: [OMPI users] Fw: OpenMPI version 1.1
> 
> 
> 
> ----- Forwarded by Swamy Kandadai/Poughkeepsie/IBM on 06/23/2006 10:21 PM -----
> 
> From: Swamy Kandadai/Poughkeepsie/IBM
> To: us...@open-mpi.org
> Sent: 06/23/2006 09:52 PM
> Subject: OpenMPI version 1.1
> 
> 
> 
> Hi:
> 
> I am trying to run OpenMPI on a couple of nodes. These nodes have several
> interfaces: eth0 (a GigE interface), eth1 (a GigE interface with jumbo
> frames enabled), IPoIB, and myr0, in addition to the loopback (lo).
> 
> I want to use eth1 exclusively, and I am running with this option:
> 
> mpirun --mca btl_tcp_if_include eth1 -machinefile hf -np 2 IMB-MPI1
> 
> where IMB-MPI1 is the Intel message passing benchmark.
> 
> It behaves differently at different times:
> 
> On one set of nodes I got typical GigE bandwidth (around 100 MB/s). On a
> different pair of nodes it gives me bandwidth consistent with IPoIB
> (around 700 MB/s).
> 
> Can you help me figure out what I am doing wrong? How can I force it to
> use eth1 on all nodes?
> 
> I just built OpenMPI with the following option:
> 
> ./configure --prefix=$BINDIR  --disable-io-romio
> 
> Thanks
> Swamy
> 
> 
> 
> Dr. Swamy N. Kandadai
> Certified Sr. Consulting IT Specialist
> HPC Benchmark Center
> System & Technology Group, Poughkeepsie, NY
> Phone: (845) 433-8429 (8-293)  Fax: (845) 432-9789
> sw...@us.ibm.com
> http://w3.ibm.com/sales/systems/benchmarks
> 
> 
> 
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
