Correction:
They are only able to bring the link up to *20 Gb/s* on the *QDR* card
because of the PCIe bus limitations.
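A quick back-of-the-envelope check makes the limitation concrete. The numbers below are the nominal per-lane PCIe rates, assuming an x8 slot and 8b/10b encoding on both generations (illustrative figures, not measured on the X4200):

```shell
# Rough usable-bandwidth estimate (illustrative, nominal figures):
# PCIe 1.x: 2.5 GT/s per lane, 8b/10b encoding -> ~2 Gb/s usable per lane
# PCIe 2.0: 5.0 GT/s per lane, 8b/10b encoding -> ~4 Gb/s usable per lane
lanes=8
echo "PCIe 1.x x${lanes}: $((lanes * 2)) Gb/s"   # prints 16 Gb/s
echo "PCIe 2.0 x${lanes}: $((lanes * 4)) Gb/s"   # prints 32 Gb/s
```

4X QDR signals at 40 Gb/s but carries 32 Gb/s of data after 8b/10b, so without a PCIe 2.0 slot the HCA cannot feed the link at QDR rate and trains down.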

On Sat, Feb 13, 2010 at 11:37 AM, Erik Froese <[email protected]> wrote:
> What kind of machine are you using? If it doesn't have a PCIe 2.0 bus,
> it won't be able to bring the card up to 40 Gb/s.
> I have SunFire X4200s with QDR and DDR cards in them (Lustre routers
> to DDR IB networks).
> They are only able to bring the link up to 20 Gb/s on the DDR card
> because of the PCIe bus limitations.
> Erik
>
> On Fri, Feb 12, 2010 at 5:09 AM, Peter Kjellstrom <[email protected]> wrote:
>> On Thursday 11 February 2010, Jagga Soorma wrote:
>>> I have a QDR IB switch that should support up to 40 Gb/s.  After installing
>>> the kernel-ib and Lustre client RPMs on my SuSE nodes, I see the following:
>>>
>>> hpc102:~ # ibstatus mlx4_0:1
>>> Infiniband device 'mlx4_0' port 1 status:
>>>     default gid:     fe80:0000:0000:0000:0002:c903:0006:de19
>>>     base lid:     0x7
>>>     sm lid:         0x1
>>>     state:         4: ACTIVE
>>>     phys state:     5: LinkUp
>>>     rate:         20 Gb/sec (4X DDR)
>>>
>>> Why is this only picking up 4X DDR at 20 Gb/sec?  Do the Lustre RPMs not
>>> support QDR?  Is there something I need to do on my side to force
>>> 40 Gb/sec on these ports?
>>
>> This is a bit OT, but a 20G rate typically means that you have a problem with
>> one of: switch, HCA, cable. Maybe your HCA is a DDR HCA? Maybe you need to
>> upgrade the HCA firmware?
>>
>> /Peter
>>
>>> Thanks in advance,
>>> -J
>>
>> _______________________________________________
>> Lustre-discuss mailing list
>> [email protected]
>> http://lists.lustre.org/mailman/listinfo/lustre-discuss
>>
>>
>
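Working through Peter's checklist, a minimal sketch for spotting a downtrained link from `ibstatus` output; the sample line is copied verbatim from the report above, and in practice you would parse live output from `ibstatus mlx4_0:1`:

```shell
# Extract the numeric rate from an ibstatus "rate:" line and flag anything below QDR.
# Sample line taken from the report earlier in the thread.
sample='    rate:         20 Gb/sec (4X DDR)'
rate=$(echo "$sample" | awk '{print $2}')   # second field is the number: 20
if [ "$rate" -lt 40 ]; then
    echo "link trained below QDR: ${rate} Gb/sec"
fi
```

Checking the PCIe side with `lspci -vv` (the `LnkSta` line shows the trained speed and width) and the HCA firmware with `ibv_devinfo` (the `fw_ver` field) covers the rest of the list.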
