On Tuesday 24 February 2009, Jie Cai wrote:
> I have implemented a uDAPL program to measure the bandwidth on IB with
> multirail connections.
>
> The HCA used in the cluster is Mellanox ConnectX HCA. Each HCA has two
> ports.
>
> The program utilizes the two ports on each node of the cluster to build
> multirail IB connections.
>
> The peak bandwidth I can get is ~1.3 GB/s (unidirectional), which is
> almost the same as with a single-rail connection.

Assuming you have a 2.5 GT/s PCI Express x8 slot, that speed is the result of 
the bus not being able to keep up with the HCA. Since the bus holds back even 
a single DDR IB port, you see no improvement from adding a second port.

To fully drive a DDR IB port you need either x16 PCI Express at 2.5 GT/s or 
x8 at 5 GT/s. For one QDR port or two DDR ports you'll need even more...
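A rough sanity check of those numbers, assuming PCIe 1.x (2.5 GT/s) and an 
8b/10b-encoded 4x DDR IB link (the exact overheads are simplified; real PCIe 
protocol overhead cuts usable throughput well below the figure computed here):

```python
# Back-of-the-envelope bandwidth comparison (assumed figures, not measured).

def pcie_gbs(lanes, gt_per_s, encoding=0.8):
    """Raw usable PCIe bandwidth per direction in GB/s.
    2.5 and 5 GT/s PCIe generations use 8b/10b encoding, so only
    80% of signalled bits carry payload. Ignores TLP/DLLP overhead."""
    return lanes * gt_per_s * encoding / 8  # bits -> bytes

def ib_gbs(lanes, gbit_per_lane, encoding=0.8):
    """IB link data rate in GB/s (SDR/DDR/QDR are 8b/10b encoded)."""
    return lanes * gbit_per_lane * encoding / 8

pcie_x8_gen1 = pcie_gbs(8, 2.5)   # x8 at 2.5 GT/s -> 2.0 GB/s
ddr_ib_4x    = ib_gbs(4, 5.0)     # 4x DDR (5 Gb/s/lane) -> 2.0 GB/s

print(pcie_x8_gen1, ddr_ib_4x)
```

So an x8 2.5 GT/s slot at best matches one DDR port; after PCIe packet 
overhead the effective rate lands around the ~1.3 GB/s observed, and a second 
rail has nothing left to use.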

/Peter

> Does anyone have similar experience?


_______________________________________________
general mailing list
[email protected]
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general
