On Tuesday, March 22, 2011 06:15:35 am Atul Vidwansa wrote:
> Hi Brian,
>
> With one 4x QDR IB port, you can achieve 2 GB/sec on a single-client,
> multi-threaded workload, provided that you have the right storage (with
> enough bandwidth) at the other end. We have tested this multiple times
> at DDN.
>
> I have seen sites that do IB bonding across 2 ports, but mostly in a
> failover configuration. Getting 10 GB/sec to a single node requires
> aggregating 5 QDR IB ports. You will need to confirm with your IB vendor
> (Mellanox?), OS vendor (SGI/RedHat/Novell) and Lustre vendor whether
> they support aggregating that many links. I think the challenge you will
> have is to find a Lustre client node that has enough x8 PCIe slots to
> sustain 3 dual-port Infiniband adapters at full rate
Just adding a small detail: a single QDR port consumes all of the HCA's
PCI bandwidth, so you would need 5 x8 IB HCAs, for a total of 40 lanes of
PCI Express. This will of course change with the introduction of future
PCI Express generations...

/Peter

> (think multiple such nodes in a typical Lustre filesystem, not so
> economical). The other alternative is to find a server that supports an
> 8X or 12X QDR IB port on the motherboard to get more bandwidth.
>
> With a typical Lustre client memory of 24-64 GB and memory-to-CPU
> bandwidth of 10 GB/sec (with standard DDR3-1333 DIMMs), it is not
> possible to fit a dataset larger than 2/3rds of memory. If you still
> want to achieve 10 GB/sec of bandwidth between storage and memory, there
> are clever alternatives. You will have to stage your data into memory
> beforehand, keep the memory pages locked, and continue feeding data as
> these pages are consumed. It is a lot harder than it seems on paper.
>
> Cheers,
> -Atul
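
To put rough numbers on the link-rate arithmetic above, here is a small
back-of-the-envelope sketch in C; the per-lane rates for QDR and PCIe 2.0
are assumptions based on the hardware generation under discussion, not
figures quoted in the thread:

#include <stdio.h>

int main(void)
{
    /* 4x QDR: 4 lanes at 10 Gbit/s signalling, 8b/10b encoding -> 4 GB/s of data */
    double qdr_data_gbs = 4 * 10.0 * 8.0 / 10.0 / 8.0;
    /* PCIe 2.0 x8: roughly 500 MB/s of payload per lane -> 4 GB/s */
    double pcie2_x8_gbs = 8 * 0.5;
    /* Observed Lustre throughput per QDR port (Atul's number) */
    double per_port_gbs = 2.0;
    int ports = 5;  /* ports suggested for the 10 GB/s target */

    printf("QDR 4x data rate:    %.1f GB/s\n", qdr_data_gbs);
    printf("PCIe 2.0 x8 payload: %.1f GB/s (one QDR port fills one x8 slot)\n",
           pcie2_x8_gbs);
    printf("%d ports => %d PCIe lanes, ~%.1f GB/s achievable\n",
           ports, ports * 8, ports * per_port_gbs);
    return 0;
}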
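
And a minimal sketch of the stage-and-lock idea Atul describes, assuming a
Linux client; the file path, buffer size, and consumer step are
placeholders rather than anything prescribed in the thread:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define STAGE_SIZE (1UL << 30)   /* 1 GiB staging buffer (placeholder size) */

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, 4096, STAGE_SIZE) != 0) {
        perror("posix_memalign");
        return 1;
    }
    /* Lock the staging buffer so its pages stay resident in RAM;
     * requires a memlock ulimit of at least STAGE_SIZE. */
    if (mlock(buf, STAGE_SIZE) != 0) {
        perror("mlock");
        return 1;
    }

    int fd = open("/lustre/dataset.bin", O_RDONLY);   /* placeholder path */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Keep refilling the locked buffer as the consumer drains it */
    ssize_t n;
    while ((n = read(fd, buf, STAGE_SIZE)) > 0) {
        /* hand 'buf' (n bytes) to the consumer here, then loop to refill */
    }

    close(fd);
    munlock(buf, STAGE_SIZE);
    free(buf);
    return 0;
}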
