On Tue, Apr  6 at 17:56, Markus Kovero wrote:
>> Our Dell T610 has been working just fine for the last year and
>> a half, without a single network problem.  Do you know if they're
>> using the same integrated part?
>>
>> --eric
>
> Hi. As I should have mentioned, the integrated NICs that cause issues
> use the Broadcom BCM5709 chipset, and these connectivity issues have
> been quite widespread among Linux users too. Red Hat is trying to fix
> this (http://kbase.redhat.com/faq/docs/DOC-26837), but I believe it's
> a firmware problem of some kind; in our tests the 4.6.8-series
> firmware seems to be more stable.
>
> As for workarounds, disabling MSI is bad if it adds latency for
> network/disk controllers, and disabling C-states on Nehalem
> processors is just stupid (no turbo, no power saving, etc.).
>
> Definitely a no-go for storage, IMO.

It seems this issue only occurs when MSI-X interrupts are enabled
for the BCM5709 chips, or am I reading it wrong?

If I run 'echo ::interrupts | mdb -k' and filter for the
network-related entries, I get the following output:


 IRQ  Vect IPL Bus   Trg Type   CPU Share APIC/INT# ISR(s)
 36   0x60 6   PCI   Lvl Fixed  3   1     0x1/0x4   bnx_intr_1lvl
 48   0x61 6   PCI   Lvl Fixed  2   1     0x1/0x10  bnx_intr_1lvl
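For the curious, here's a rough way to pull just the bnx lines out of that
output and show the interrupt type column; the awk field positions are my
assumption based on the table layout above, and in this sketch I feed awk a
captured copy of the output rather than piping from mdb directly:

```shell
# Live form (Solaris, as root):
#   echo ::interrupts | mdb -k | awk '/bnx/ { print "IRQ " $1 ": type=" $6 }'
# Below, the same awk filter runs over the sample output captured above,
# printing the IRQ number ($1) and interrupt type ($6) for each bnx NIC.
awk '/bnx/ { print "IRQ " $1 ": type=" $6 }' <<'EOF'
 IRQ  Vect IPL Bus   Trg Type   CPU Share APIC/INT# ISR(s)
 36   0x60 6   PCI   Lvl Fixed  3   1     0x1/0x4   bnx_intr_1lvl
 48   0x61 6   PCI   Lvl Fixed  2   1     0x1/0x10  bnx_intr_1lvl
EOF
```

Both entries report "Fixed" rather than "MSI" or "MSI-X", which is what the
question below hinges on.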


Does this imply that my system is not in a vulnerable configuration?
Supposedly I'm losing some performance without MSI-X, but I'm not sure
in which environments or workloads we would notice, since the load on
this server is relatively low and the L2ARC serves data at greater
than 100MB/s (wire speed) without stressing much of anything.

The BIOS settings in our T610 are exactly as they arrived from Dell
when we bought it over a year ago.

Thoughts?
--eric

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
