Hello,
When I set disable_msi=1 on the bnx2 driver (version 1.9.3), the problem is
resolved (on the latest RHEL 5 kernel).
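
For anyone wanting to apply the same workaround persistently, a minimal
sketch for RHEL 5 (assuming the NICs are bound to the bnx2 module and the
stock /etc/modprobe.conf layout):

```shell
# Persist the workaround across reboots (RHEL 5 reads /etc/modprobe.conf)
echo "options bnx2 disable_msi=1" >> /etc/modprobe.conf

# Reload the driver so the parameter takes effect (drops the links briefly)
rmmod bnx2
modprobe bnx2

# Verify the running module picked up the parameter
cat /sys/module/bnx2/parameters/disable_msi
```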

But when I used the latest bnx2 driver (version 1.9.20b, as
recommended on Dell's site), some alarming results occurred!

I can see errors on the interfaces under high full-duplex gigabit load!
The errors can be reproduced easily by saturating the gigabit links in
both directions simultaneously.

No errors can be seen using the RHEL driver!

See the following benchmarks, taken with the iperf utility:
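
For reference, the tests below were of roughly this shape (server hostname
and interface name are placeholders):

```shell
# On the remote box: start an iperf TCP server
iperf -s &

# One-way test: a single 120-second TCP stream toward the server
iperf -c server.example.com -t 120

# Bidirectional test: -d runs send and receive streams simultaneously
iperf -c server.example.com -t 120 -d

# Afterwards, check the RX error counters on the NIC under test
ifconfig eth0 | grep -i errors
```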


RHEL 5.3 BNX2 Driver v 1.9.3
=================================================================================

> One-way test
-----------------------------------------------------------------------

TCP window size: 31.2 KByte 
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-120.0 sec  13.0 GBytes    931 Mbits/sec
[  3]  0.0-120.0 sec  13.0 GBytes    933 Mbits/sec

interface errors:0

> Simultaneous bidirectional test
------------------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-120.0 sec  7.61 GBytes    545 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-120.0 sec  5.80 GBytes    415 Mbits/sec

interface errors:0



RHEL 5.3 BNX2 Driver v 1.9.3 + bnx2 disable_msi=1
=================================================================================

> One-way test
-----------------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-120.0 sec  13.0 GBytes    928 Mbits/sec
[  3]  0.0-120.0 sec  13.0 GBytes    929 Mbits/sec

> Simultaneous bidirectional test
------------------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-120.0 sec  8.29 GBytes    593 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-120.1 sec  4.92 GBytes    352 Mbits/sec

RECOMMENDED DELL BNX2 Driver v 1.9.20b
=================================================================================

> One-way test
-----------------------------------------------------------------------
[  3]  0.0-120.0 sec  12.9 GBytes    921 Mbits/sec
[  3]  0.0-120.0 sec  12.9 GBytes    925 Mbits/sec

errors:0

> Simultaneous bidirectional test
------------------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-120.0 sec  5.53 GBytes    396 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-120.0 sec  8.69 GBytes    622 Mbits/sec

errors:170770 dropped:171519 overruns:0 frame:170770

>> Same test 2nd time

[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-120.0 sec  8.40 GBytes    601 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-120.1 sec  6.55 GBytes    468 Mbits/sec

errors:373751 dropped:375594 overruns:0 frame:373751

>> Same test 3rd time

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-120.0 sec  7.54 GBytes    540 Mbits/sec
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-120.0 sec  7.00 GBytes    501 Mbits/sec

errors:554623 dropped:557521 overruns:0 frame:554623
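
The counters above are the standard ifconfig RX statistics; to narrow down
which driver-level counters are actually incrementing (statistic names vary
by bnx2 version), the per-NIC breakdown from ethtool can be diffed across a
run, e.g.:

```shell
# Snapshot the NIC statistics before and after a bidirectional run
ethtool -S eth0 > /tmp/stats_before.txt
iperf -c server.example.com -t 120 -d
ethtool -S eth0 > /tmp/stats_after.txt

# Show only the counters that changed during the test
diff /tmp/stats_before.txt /tmp/stats_after.txt
```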




On Mon, 2009-10-26 at 12:36 +0530, [email protected] wrote:
> Hello,
> 
> >>>> We are experiencing network problems on i/o and network loaded
> >>>> R710 Poweredge servers. Network connectivity dies after some
> >>>> time. The systems needs to be powered down to bring the NICs
> >>>> back to life.
> >>>
> >>> Supposedly, loading the bnx2 module with disable_msi=1 resolves
> >>> this problem.
> >>>
> >>> There is a version of the netxtreme driver available that is newer
> >>> than the one provided by RHEL/CentOS.  You can get this from Dell's
> >>> site.  In my experience, using this driver resolves the problem.
> >>
> >> I have just taken delivery of a T710 w/BCM5709's, and can confirm
> >> that the disable_msi=1 trick does *not* resolve this problem; I have
> >> had the system hang while idle, 5 minutes after a cold start.
> >> Installing the netxtreme driver *does* fix the issue. Not very
> >> clever, guys.
> >
> > I have a R410 here. I wonder if they have the same driver. I
> > thought the eth NICs on this one were Broadcoms.
> 
> 
> 1. On RHEL 5.3 and RHEL 5.4, issue will not be seen if disable_msi=1
> parameter is passed to the native bnx2 driver(bnx2 driver version
> 1.7.9-1)
> 2. The issue will not be seen if bnx2 driver posted on support.dell.com
> (bnx2 version >=1.8.7b) is used. 
> 
> 
> With regards,
> Narendra K  
> 
> _______________________________________________
> Linux-PowerEdge mailing list
> [email protected]
> https://lists.us.dell.com/mailman/listinfo/linux-poweredge
> Please read the FAQ at http://lists.us.dell.com/faq
-- 
=============================================================================
Evangelos Souglakos           OTE S.A Internet Service Provider 
Unix Systems Engineer         Systems Design Department
Phone: +302106116282          OTE Megaro Building 2A8
=============================================================================

Great minds have purposes, others have wishes.
