On 2/4/2011 9:25 AM, Yves Dorfsman wrote:
>> I have some trouble with infiniband - and admittedly I am an infiniband
>> novice. I get 7 Gbit/s using TCP (iperf/wget and others), which is odd
>> because it's a QDR (40 Gbit/s) switch.
>> Same results with 2.6.36, 2.6.38-rc2 kernel.
>> QLogic PCI-e cards and QLogic switches.
>>
>> I changed kernels/bios/tcp settings and what have you, but no luck. I
>> notice the cpu can be quite busy during these tests.
>>
>> Anyone seen something like this before, or knows more about Infiniband
>> than me? ;))
> Yes, we ran into a similar issue. We had put the IB cards in the first available
> PCI slot on the machines. On most machines some PCI slots are faster than
> others; these IB cards needed to be in a faster slot to be able to attain maximum
> speed. Of course this wasn't documented.
>
> Look for "connector width", on newer machines you typically have slots with
> "x8" and one or two with "x16" width. Move the cards to the faster/widest 
> slots.
>
I'm going to reply to a whole bunch of messages in this thread, so just 
be advised that I'm not picking on any one in particular.

Let's assume that the original card in question is a QDR card. That 
means 40 Gbit/s, but Infiniband has an 8/10 encoding overhead, so the 
actual maximum achievable bandwidth is 32 Gbit/s. (Anybody who says 35 
is trying to pull something.) DDR Infiniband is 20 Gbit/s (actually 16). 
In the next version of Infiniband, due later this year, they eliminate 
the 8/10 off-the-top overhead, for what it's worth.
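If you want to double-check that 8/10 arithmetic, it's a one-liner. A minimal Python sketch (the rate table and helper name are mine, not from any IB tooling):

```python
# Effective Infiniband bandwidth after 8b/10b encoding overhead:
# every 10 bits on the wire carry only 8 bits of data.
SIGNALLING_GBITS = {"SDR": 10, "DDR": 20, "QDR": 40}  # 4x link, Gbit/s

def effective_gbits(rate_name):
    """Usable data rate for a given Infiniband generation."""
    return SIGNALLING_GBITS[rate_name] * 8 / 10

for name in ("SDR", "DDR", "QDR"):
    print(name, effective_gbits(name))  # QDR -> 32.0, DDR -> 16.0
```

Which is where the 32 (not 35, and certainly not 40) comes from.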

Regarding pricing: the switches and HCAs are on par with 10 Gbit Ethernet 
in price. The switches are a bit less, about $8k for 36 ports, all full 
duplex, of 32 Gbit (40) Infiniband at sub-microsecond latency. The cards 
are a bit more. The cables are QSFP cables, basically the same cables as 
for 40 Gbit Ethernet.

Re: 7 Gbit/s, etc. As others have said, you're over-running your CPU. IP 
over IB works just fine, but the packet sizes aren't matched up. What 
you really want to do is test it using actual native Infiniband 
protocols. I recommend the Intel MPI Benchmarks (IMB, née Pallas). They 
will test typical MPI operations over Infiniband.

Re: NFS over IB: sure, and in fact there are several native ways to do it. 
You could do NFS over IP over IB, but the native way would be to run 
something like SRP (SCSI RDMA Protocol), or perhaps iSER 
(iSCSI over RDMA). RDMA is where Infiniband really shines. You basically 
tell the remote side about your local memory buffer and let the 
Infiniband card and drivers take care of transferring that region of 
memory to the remote host, or let the remote host access it without 
having to use the CPU for the operation. Many storage vendors are using 
IB on the back end these days: Isilon, TMS, DDN, Ibrix, etc.

Re: PCI - you want PCI Express x8 (note, that's PCI-E, not PCI-X) slots. 
They provide enough bandwidth for a QDR card to run full-out. If you 
have a dual-port card, you'll want an x16 slot. They are usually 
labelled on the motherboard.
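The slot-width advice is the same kind of arithmetic: PCIe 1.x and 2.0 also use 8b/10b encoding, at 2.5 and 5 GT/s per lane respectively. A back-of-envelope Python sketch (the table and function name are mine):

```python
# Usable PCIe slot bandwidth: per-lane transfer rate * lanes * 8/10 encoding.
PCIE_GT_PER_LANE = {1: 2.5, 2: 5.0}  # GT/s; PCIe 1.x and 2.0 both use 8b/10b

def slot_gbits(gen, lanes):
    """Usable Gbit/s for a PCIe slot of the given generation and width."""
    return PCIE_GT_PER_LANE[gen] * lanes * 8 / 10

# A single-port QDR HCA needs ~32 Gbit/s of host bandwidth:
print(slot_gbits(2, 8))   # Gen2 x8  -> 32.0: just enough for QDR
print(slot_gbits(1, 8))   # Gen1 x8  -> 16.0: will bottleneck a QDR card
print(slot_gbits(2, 16))  # Gen2 x16 -> 64.0: headroom for a dual-port card
```

Which is also why a card dropped into the wrong (older or narrower) slot, as described earlier in the thread, quietly runs at a fraction of line rate.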

_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/