OK, we have now done some testing and probably found the problem.

All tests were done on the same machine, an Intel S5000VSA motherboard
with a Xeon E5420 2.5 GHz processor, running OpenBSD 4.8 amd64 GENERIC
(SP kernel).

We tested the performance with iperf, with two test hosts (an iperf
client and an iperf server) connected through a bridge.
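
The iperf invocation itself was nothing special (from memory, so the
exact options may have differed slightly):

  server$ iperf -s
  client$ iperf -c <server address>

i.e. a plain TCP test, with the client on one side of the bridge and
the server on the other.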

With the dual-port Intel Pro/1000 PCIe (82576) cards, bridging between
two cards (in this case em0 to em2), we got the worst performance:
150 Mbit/s.
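
For completeness, the bridge was set up the usual way, something like
this (from memory, so take the exact commands with a grain of salt):

  # ifconfig em0 up
  # ifconfig em2 up
  # ifconfig bridge0 create
  # ifconfig bridge0 add em0 add em2 up

i.e. just a plain bridge between the two ports.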

While testing this and watching 'systat vmstat', the CPU was 99% busy
handling interrupts. The interrupt rate was about 3000/s on em0 (where
the iperf client was connected) and 1500/s on em2 (the iperf server).
At the same time, 'systat ifs' showed about 10 new livelocks per second.
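
For anyone wanting to reproduce the measurements: the numbers above
came from watching roughly the following while iperf was running (give
or take the exact arguments):

  # systat vmstat 1
  # systat ifs 1
  # vmstat -iz | grep em

The systat views give per-second rates, while vmstat -iz shows a total
and an average rate per interrupt source.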

Next we tested regular PCI Intel Pro/1000 MT (82545GM) cards, and this
time we got the performance we had hoped for in the first place:
910 Mbit/s with 8000 intr/s on both cards at 50% CPU (intr). No livelocks.

We thought perhaps the issue was related to the PCIe bus, so we did one
final test, this time with quad-port Intel Pro/1000 QP (82576) PCIe
cards.

These performed excellently: 940 Mbit/s, 8200 intr/s per card and 60%
CPU (intr).

So, it seems the dual-port PCIe cards suck and we have to replace them.

//Peter


On 2011-03-29 07:40, Peter Hallin wrote:
> I realize now that this measurement is wrong.
> 
> "vmstat -iz" seems to calculate the interrupt rate based a longer
> period, and this measurement was taken just after we started to push
> traffic through the machine again.
> 
> The number of interrupts per second when checking with systat (which
> seems to have a shorter measurement period) at the same time was way
> higher, about 5000 intr/s on em0 and em2.
> 
> Sorry for the wrong data
