I am planning a setup with thousands of classes in an HTB qdisc, say from
1:1000 to 1:2000, each with a very small rate and a big ceil, for fair
sharing of a 45mbit link.
I suspect some problems could be lurking in there.
Does anyone have good/bad experience with such a number of classes?
Simon
Tomasz Paszkowski wrote:
On Fri, Aug 27, 2004 at 10:46:59AM +0200, Simon Lodal wrote:
I am planning a setup with thousands of classes in an HTB qdisc, say from
1:1000 to 1:2000, each with a very small rate and a big ceil, for fair
sharing of a 45mbit link.
Consider using HFSC. HTB
Routing, firewalling and shaping run in the kernel and have no PID. Instead you
can get/set /proc flags, and check for the presence of certain data structures.
/proc/sys/net/ipv4/ip_forward is the routing master switch. If 0, the machine
forwards nothing. You can both set and get the value, should
I have similar hardware, load and trouble.
Interrupts are only sent to one CPU, instead of all of them, because that was
only overhead. I think the default was changed somewhere around 2.6.10
or .12, but I have forgotten the url.
There is a CONFIG_IRQBALANCE option in the kernel, but last
If you use HTB, you need to compile it with HTB_HSIZE set to at least 256 (in
sch_htb.c). Otherwise your CPU will be fully loaded even with a few kpps of
traffic.
The problem is how HTB stores the classes; it is not very efficient when there
are thousands of them. I do not know if other qdiscs have the same
and scales better too.
The patch is for 2.6.20-rc6, I have older ones for 2.6.18 and 2.6.19 if anyone
is interested.
Signed-off-by: Simon Lodal [EMAIL PROTECTED]
--- linux-2.6.20-rc6.base/net/sched/sch_htb.c 2007-01-25 03:19:28.0 +0100
+++ linux-2.6.20-rc6/net/sched/sch_htb.c 2007-02-01 05:44
On Thursday 01 February 2007 07:08, Patrick McHardy wrote:
Simon Lodal wrote:
This patch changes HTB's class storage from hash+lists to a two-level
linear array, so it can do constant time (O(1)) class lookup by classid.
It improves scalability for large numbers of classes.
Without
On Monday 05 February 2007 11:16, Jarek Poplawski wrote:
On 01-02-2007 12:30, Andi Kleen wrote:
Simon Lodal [EMAIL PROTECTED] writes:
Memory is generally not an issue, but CPU is, and you can not beat the
CPU efficiency of plain array lookup (always faster, and constant time).
Probably
On Thursday 01 February 2007 12:30, Andi Kleen wrote:
Simon Lodal [EMAIL PROTECTED] writes:
Memory is generally not an issue, but CPU is, and you can not beat the
CPU efficiency of plain array lookup (always faster, and constant time).
Actually that's not true when the array doesn't fit