That really did fix it, George:

# mpirun --prefix $MPIHOME -hostfile ~/testdir/hosts --mca btl tcp,self \
    --mca btl_tcp_if_exclude ib0,ib1 ~/testdir/hello
Hello from Alex' MPI test program
Process 0 on dr11.lsf.platform.com out of 2
Hello from Alex' MPI test program
Process 1 on compute-0-0.local out of 2
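
For reference, the test program itself isn't shown in this thread; a minimal MPI hello in C that would produce output of this shape could look like the following (the greeting text and variable names are just assumptions on my part):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(name, &len);     /* hostname of this node */

    printf("Hello from Alex' MPI test program\n");
    printf("Process %d on %s out of %d\n", rank, name, size);

    MPI_Finalize();
    return 0;
}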

It never occurred to me that the head node would try to communicate
with the compute node over the InfiniBand interfaces... On a separate
note, what are the industry-standard Open MPI benchmark tests I could
run to do a real performance test?
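
For what it's worth, the same settings can also be made persistent in a per-user MCA parameter file, so they don't have to be repeated on every mpirun command line. A sketch, assuming the default per-user location Open MPI reads:

# ~/.openmpi/mca-params.conf
btl = tcp,self
btl_tcp_if_exclude = ib0,ib1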

Thanks,
Alex.

On 2/2/07, George Bosilca <bosi...@cs.utk.edu> wrote:
Alex,

You should try to limit the ethernet devices used by Open MPI during
the execution. Please add "--mca btl_tcp_if_exclude eth1,ib0,ib1" to
your mpirun command line and give it a try.

   Thanks,
     george.
