On Mar 3, 2009, at 2:48 AM, Jie Cai wrote:

We have installed a cluster with dual-port ConnectX HCAs in PCIe 2.0 slots,
with each port represented as an individual interface.

How do we configure Open MPI and the hardware
to correctly use both ports for communication?

Open MPI should just see and use both ports automatically (assuming that they are ACTIVE).
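
If you want to verify the port state and control which ports Open MPI uses, the openib BTL's if_include MCA parameter does that. A minimal sketch (device/port names such as mlx4_0 and ./your_app are placeholders; yours may differ):

  # verify that both ports report state PORT_ACTIVE
  ibv_devinfo | grep -e hca_id -e state

  # default: all ACTIVE ports are used
  mpirun -np 16 --mca btl openib,sm,self ./your_app

  # restrict to a single port, e.g. for comparison runs
  mpirun -np 16 --mca btl openib,sm,self \
         --mca btl_openib_if_include mlx4_0:1 ./your_app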

Should we expect to see higher bandwidth with Open MPI?

It depends on both your server and network setup.

PCIe 2.0 cannot move enough data to keep two DDR HCA ports full, so it is unlikely that you will see much of a bandwidth improvement.
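
To put rough numbers on it (assuming a PCIe 2.0 x8 slot, which is typical for these HCAs):

  PCIe 2.0 x8: 5 GT/s/lane, 8b/10b encoding -> 500 MB/s/lane
               8 lanes * 500 MB/s = 4 GB/s raw per direction
               (~3.2-3.4 GB/s achievable after protocol overhead)
  IB DDR:      4 lanes * 5 Gb/s, 8b/10b -> 16 Gb/s = 2 GB/s per port
               2 ports * 2 GB/s = 4 GB/s per direction

So two DDR ports can consume at least as much as the bus can theoretically deliver; in practice the PCIe side saturates first.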

Assuming that your 2 HCA ports are plugged into either 2 separate IB subnets or different locations in the same subnet, you'll get a wider dispersion of fragments across your network, potentially avoiding some network congestion. But this is very much dependent on what else is occurring simultaneously elsewhere in your IB subnet, and is therefore likely to be application- and cluster-specific.
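
If you're not sure which subnet each port is on, GID[0] of each port encodes the subnet prefix (fe80::... is the default prefix when everything lives on one subnet). A sketch, assuming the OFED utilities are installed:

  # differing subnet prefixes in GID[0] => different IB subnets
  ibv_devinfo -v | grep -e hca_id -e 'port:' -e state -e 'GID\[ *0\]'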

In order to see the bandwidth improvement, do I need to specifically
configure Open MPI and the hardware?

To really get 2 x DDR performance, you likely need two separate buses with two separate HCAs.
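
A sketch of how you could measure it, using the osu_bw benchmark from the OSU micro-benchmarks (hostnames and device/port names below are placeholders):

  # point-to-point bandwidth over one port
  mpirun -np 2 -host node1,node2 --mca btl openib,self \
         --mca btl_openib_if_include mlx4_0:1 ./osu_bw

  # same test with both ports eligible for striping
  mpirun -np 2 -host node1,node2 --mca btl openib,self \
         --mca btl_openib_if_include mlx4_0:1,mlx4_0:2 ./osu_bw

If the large-message numbers come out nearly identical, the PCIe bus (not the HCAs) is the bottleneck, as described above.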

--
Jeff Squyres
Cisco Systems
