On 12/27/06, jamal <[EMAIL PROTECTED]> wrote:
On Wed, 2006-27-12 at 01:28 +0100, Arjan van de Ven wrote:

> current irqbalance accounts for napi by using the number of packets as
> indicator for load, not the number of interrupts. (for network
> interrupts obviously)
>

Sounds a lot more promising.
It is still insufficient in certain cases, though, because not all flows are
equal; as an example, an IPsec flow with 1000 packets bound to one CPU will
likely use more cycles than 5000 packets being plainly forwarded on
another CPU.

I do agree with Jamal that there is a problem here.

My scenario is processing RTP packets in kernel space with a single network
card (both Rx and Tx). The default of the Intel 5000 series chipset is to
affine each network card to a certain CPU. Currently, neither irqbalance nor
in-kernel IRQ balancing (MSI and io-apic both attempted) gives me a way to
balance that IRQ.
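For reference, a minimal sketch of pinning an IRQ by hand on Linux via /proc.
The IRQ number and CPU index below are placeholders; the real IRQ for the
card has to be looked up in /proc/interrupts first:

```shell
# Hypothetical IRQ number and target CPU -- adjust for the actual card.
IRQ=18
CPU=2
# /proc/irq/<n>/smp_affinity takes a hex bitmask where bit N selects CPU N.
MASK=$(printf '%x' $((1 << CPU)))
echo "mask for CPU $CPU is $MASK"
# As root, the actual pinning would be:
#   echo "$MASK" > /proc/irq/$IRQ/smp_affinity
```

Whether the write takes effect depends on the interrupt controller supporting
retargeting; on some setups the value is silently left unchanged.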

Keeping a static CPU affinity for a network card's interrupt is a good
design in general. However, what I see is that CPU0 is idle less than 10%
of the time, whereas the three other cores (two dual-core Intel CPUs) are
doing next to nothing.
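A quick way to confirm that kind of imbalance is to look at the per-CPU
interrupt counts; the interface name eth0 below is a placeholder:

```shell
# Show the CPU header row plus the NIC's row from /proc/interrupts;
# a single hot column confirms all of its interrupts land on one CPU.
IF=eth0
if [ -r /proc/interrupts ]; then
    grep -E "CPU|$IF" /proc/interrupts
else
    echo "/proc/interrupts not available on this system"
fi
```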
There is a real CPU-scaling problem with such a design. One day we may wish
to add a 10 Gbps network card and 16 cores/CPUs, but this will not help us
scale.

Some cards probably have separate Rx and Tx interrupts. Still, scaling
remains an issue.

I will look into PCI-E option, thanks Jamal.


--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://sourceforge.net/projects/curl-loader
A powerful open-source HTTP/S, FTP/S traffic
generating, loading and testing tool.
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html