I have a test rig comprising 2 i7 systems with Mellanox III HCA 10G cards,

running CentOS 5.7 (kernel 2.6.18-274)
Open MPI 1.4.3
MLNX_OFED_LINUX-1.5.3-1.0.0.2 (OFED-1.5.3-1.0.0.2)
on a Cisco 24-port switch.

Normal performance is:
$ mpirun --mca btl openib,self -n 2 -hostfile mpi.hosts PingPong
Max rate = 958.388867 MB/sec  Min latency = 4.529953 usec
and:
$ mpirun --mca btl tcp,self -n 2 -hostfile mpi.hosts PingPong
Max rate = 653.547293 MB/sec  Min latency = 19.550323 usec


My application exchanges about a gigabyte of data between the processes, with
2 sender and 2 consumer processes on each node plus 1 additional controller
process on the starting node.
The program splits the data into 64K blocks and uses non-blocking sends and
receives with busy/sleep loops to monitor progress until completion (a
simplified sketch of the pattern is below).
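
Roughly, each sender posts its blocks with MPI_Isend and then polls for
completion with a short sleep between checks; something like this sketch
(block size, tag, and helper names are placeholders, not the real code):

#include <mpi.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE (64 * 1024)

/* Post a non-blocking send for each 64K block, then busy/sleep
   until all of them have completed. */
static void send_blocks(char *data, size_t nbytes, int dest, MPI_Comm comm)
{
    size_t nblocks = (nbytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
    MPI_Request *reqs = malloc(nblocks * sizeof(MPI_Request));
    size_t i;
    int done = 0;

    for (i = 0; i < nblocks; i++) {
        size_t off = i * BLOCK_SIZE;
        int len = (int)(nbytes - off < BLOCK_SIZE ? nbytes - off : BLOCK_SIZE);
        MPI_Isend(data + off, len, MPI_BYTE, dest, 0, comm, &reqs[i]);
    }

    /* Progress loop: test, and back off briefly if not finished yet. */
    while (!done) {
        MPI_Testall((int)nblocks, reqs, &done, MPI_STATUSES_IGNORE);
        if (!done)
            usleep(100);
    }
    free(reqs);
}

The receivers do the mirror image with MPI_Irecv and the same test/sleep loop.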

My problem is that I see better performance under IPoIB than I do on native
IB (RDMA_CM).
My understanding is that IPoIB is limited to about 1 GB/s, so I am at a loss
to know why it is faster.

These 2 configurations are equivalent (about 8-10 seconds per cycle):
mpirun --mca btl_openib_flags 2 --mca mpi_leave_pinned 1 --mca btl tcp,self -H vh2,vh1 -np 9 --bycore prog
mpirun --mca btl_openib_flags 3 --mca mpi_leave_pinned 1 --mca btl tcp,self -H vh2,vh1 -np 9 --bycore prog

And this one produces similar run times but seems to degrade with repeated
cycles:
mpirun --mca btl_openib_eager_limit 64 --mca mpi_leave_pinned 1 --mca btl openib,self -H vh2,vh1 -np 9 --bycore prog

Other btl_openib_flags settings result in much lower performance.
Changing the first of the above configs to use openib results in a 21-second
run time at best; sometimes it takes up to 5 minutes.
With openib:

- Repeated cycles during a single run seem to slow down with each cycle.
- On occasion it seems to stall indefinitely, waiting on a single receive.

Any ideas appreciated.

Thanks in advance,
Randolph
