Having gone through the drivers at Dell, the only difference I found is that 
their config file is set to not compile in MSI.  The advantage, then, goes to 
putting this line 

options bnx2 disable_msi=1,1,1,1

(Note: this is on an R610 with 4 NICs; adjust the number of 1's to match 
your hardware.) 

into /etc/modprobe.conf.  Now, on boot, the bnx2 module is loaded with MSI 
disabled for each NIC.  The big advantage is that the setting keeps working 
when your kernel is upgraded.  (At least it has for me.)
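For reference, here is the modprobe.conf fragment spelled out with comments (a sketch based on the setup described above; the four-value list matches an R610's four onboard ports and must be adjusted for other hardware):

```shell
# /etc/modprobe.conf -- disable MSI for every bnx2-driven NIC.
# The parameter takes one comma-separated value per port, so a
# four-port R610 gets four 1's; trim or extend the list to match
# your machine.
options bnx2 disable_msi=1,1,1,1
```

To confirm the installed driver actually accepts the parameter before rebooting, `modinfo bnx2 | grep -i disable_msi` should list it; once the module is reloaded with MSI off, the bnx2 interfaces should no longer appear against "MSI" entries in /proc/interrupts.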

James Sparenberg
Sr IT Systems Engineer
Stoke Inc
408-855-2854

----- Original Message -----
From: "Warren Thurber" <[email protected]>
To: "stephan van hienen" <[email protected]>, [email protected], 
[email protected]
Sent: Tuesday, February 9, 2010 11:41:49 AM GMT -08:00 US/Canada Pacific
Subject: RE: Returning Network stability problems on R710 servers and BCM5709

I would recommend passing the following option for the bnx2 driver:

modprobe bnx2 disable_msi=1

Another option is to use the Dell bnx2 driver found at support.dell.com.

Thanks!

Brett

-----Original Message-----
From: linux-poweredge-bounces-Lists On Behalf Of Stephan van Hienen
Sent: Tuesday, February 09, 2010 1:31 PM
To: Stephan van Hienen; 'James Sparenberg'; linux-poweredge-Lists
Subject: RE: Returning Network stability problems on R710 servers and BCM5709

> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of Stephan van Hienen
> Sent: Wednesday, February 3, 2010 12:37
> To: 'James Sparenberg'; [email protected]
> Subject: RE: Returning Network stability problems on R710 servers and BCM5709
> 
> No issues on a new R510 server.
> I have transferred a few TBs over the past few days at full gigabit speed 
> (around 100 MB/s using rsync)

Looks like we also have the issue after we put the server into production this 
week.
The server is being used as a fileserver (with Samba/NFS).
Clients are getting "network path not found" errors, and SSH sessions to and 
from the server are also timing out.

Changes since last week: 
We are now using bonding, with eth0 and eth1 connected to two PowerConnect 5424 
switches.
Last week only eth0 (without bonding) was connected to a PowerConnect 6224.

Today I tried disabling eth0 or eth1, so that only one interface was active, 
but there were still a lot of disconnects.

We also got this error (while eth1 was down for testing):

NETDEV WATCHDOG: eth0: transmit timed out
bnx2: eth0 NIC Copper Link is Down
bonding: bond0: link status definitely down for interface eth0, disabling it
bonding: bond0: Warning: No 802.3ad response from the link partner for any 
adapters in the bond
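
One way to check whether the link partner is answering LACP at all is the bonding status file (a diagnostic sketch; the bond0 name is taken from the log above, and the file only exists while the bonding module is loaded in 802.3ad mode):

```shell
# Show negotiation state for bond0: per-slave "MII Status", aggregator
# IDs, and the partner's 802.3ad details. An all-zero partner MAC
# address suggests the switch is not responding to LACP -- check that
# the PowerConnect ports are actually configured as a dynamic LAG.
cat /proc/net/bonding/bond0
```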

Any hints?
(Maybe I'll just put in an Intel gigabit card.)

Stephan

_______________________________________________
Linux-PowerEdge mailing list
[email protected]
https://lists.us.dell.com/mailman/listinfo/linux-poweredge
Please read the FAQ at http://lists.us.dell.com/faq
