John,

Whose adapter (manufacturer) are you using? When you cannot scale across multiple links it is usually an adapter implementation or driver issue. The fact that you do not scale up from one link, and instead appear to share a fixed amount of bandwidth across N links, points to a driver or stack issue. At one time I think IPoIB, and maybe other IB drivers, used only one event queue across multiple links, which would be a bottleneck. We added code to the IBM eHCA driver to get around this bottleneck.
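
For what it is worth, at the verbs level you can at least make sure each HCA gets its own completion channel and CQ, so completions from one link do not all funnel through a single event queue. Whether that helps depends on what the driver itself does underneath, which is why the adapter/driver question matters. A minimal libibverbs sketch (device count, CQ depth and error handling here are assumptions, not anything from your setup):

    /* one completion channel + CQ per HCA, so events are not shared across links */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices, i;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
        if (!dev_list || num_devices == 0) {
            fprintf(stderr, "no IB devices found\n");
            return 1;
        }

        for (i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(dev_list[i]);
            if (!ctx)
                continue;

            /* separate event (completion) channel and CQ for this HCA */
            struct ibv_comp_channel *chan = ibv_create_comp_channel(ctx);
            struct ibv_cq *cq = ibv_create_cq(ctx, 256, NULL, chan, 0);

            printf("%s: %s\n", ibv_get_device_name(dev_list[i]),
                   (chan && cq) ? "own channel/CQ created" : "failed");
            /* a real test would also create a PD and a QP per port here */

            if (cq)
                ibv_destroy_cq(cq);
            if (chan)
                ibv_destroy_comp_channel(chan);
            ibv_close_device(ctx);
        }

        ibv_free_device_list(dev_list);
        return 0;
    }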

Are your measurements using MPI or IP? If IP, are you using separate tasks/sockets per link, with each link on a different subnet? See the sketch below for what I mean.
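
If you are measuring over IPoIB with plain sockets, one way to force each stream onto a particular link is to put the two ports on different subnets and bind the local end of each socket to that interface's address before connecting. A rough sketch (the addresses and port number below are placeholders, not anything from your setup):

    /* bind the local end to a specific IPoIB address so the stream uses that link */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    static int connect_via(const char *local_ip, const char *peer_ip, int port)
    {
        struct sockaddr_in local, peer;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_port = 0;                      /* any local port */
        inet_pton(AF_INET, local_ip, &local.sin_addr);
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
            close(fd);
            return -1;
        }

        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(port);
        inet_pton(AF_INET, peer_ip, &peer.sin_addr);
        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }

    int main(void)
    {
        /* placeholder addressing: ib0 on 192.168.1.x, ib1 on 192.168.2.x */
        int fd0 = connect_via("192.168.1.1", "192.168.1.2", 5000);
        int fd1 = connect_via("192.168.2.1", "192.168.2.2", 5000);
        printf("link0 fd=%d link1 fd=%d\n", fd0, fd1);
        return 0;
    }

Running one sender task per link that way, each on its own subnet, is what I mean by separate tasks/sockets per link.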

Bernie King-Smith  
IBM Corporation
Server Group
Cluster System Performance  
[EMAIL PROTECTED]    (845)433-8483
Tie. 293-8483 or wombat2 on NOTES

"We are not responsible for the world we are born into, only for the world we leave when we die.
So we have to accept what has gone before us and work to change the only thing we can,
-- The Future." William Shatner


john t" <[EMAIL PROTECTED]> wrote on 10/03/2006 09:42:24 AM:
>
> Hi,

>  
> I have two HCA cards, each having two ports and each connected to a
> separate PCI-E x8 slot.

>  
> Using one HCA port I get end to end BW of 11.6 Gb/sec (uni-direction RDMA).
> If I use two ports of the same HCA or different HCA, I get between 5
> to 6.5 Gb/sec point-to-point BW on each port. BW on each port
> further reduces if I use more ports. I am not able to understand
> this behaviour. Is there any limitation on max. BW that a system can
> provide? Does the available BW get divided among multiple HCA ports
> (which means having multiple ports will not increase the BW)?

>  
>  
> Regards,
> John T