So more specifically, for 500mbit I can use a calculated burst/cburst of 62500
bytes (1000 µs * 500000 kbit/s / 8000), here’s the change:
default: 320mbit up / 268mbit down, 3ms latency, 8.8ms tcp rtt
burst/cburst 62500: 200mbit up / 480mbit down, 40ms latency, 40ms tcp rtt
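The burst calculation above can be sketched in shell; the device name and class id in the commented tc command are illustrative assumptions, not taken from the actual test setup:

```shell
#!/bin/sh
# Burst sized to cover 1 ms of buffering at the configured rate.
RATE_KBIT=500000                         # 500 Mbit/s expressed in kbit/s
DUR_US=1000                              # 1 ms of buffering
BURST=$(( DUR_US * RATE_KBIT / 8000 ))   # bytes: 1000 * 500000 / 8000 = 62500
echo "burst=${BURST}"

# Applying it to an existing htb class (hypothetical dev/classid):
# tc class change dev eth0 parent 1: classid 1:1 htb \
#     rate ${RATE_KBIT}kbit burst ${BURST} cburst ${BURST}
```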
Aggregate throughput goes from 588 Mbit to 680 Mbit, but both one-way latency
and TCP RTT rise to around 40 ms.
Well, the idea would be to scale the buffer to cover, say, X ms at the
configured bandwidth, so HTB could deal with CPU stalls up to X-Y ms (with
Y << X)... We just switched sqm-scripts to automatically scale the buffering
to 1 ms.
Would be interested to learn whether that would increase HTB's
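The "X ms at the configured bandwidth" scaling described above can be sketched as a small shell helper (the function name and defaults here are mine, not sqm-scripts' actual code):

```shell
#!/bin/sh
# Burst bytes needed to cover dur_ms of traffic at rate_kbit.
# Helper name and interface are hypothetical, for illustration only.
burst_for() {
    rate_kbit=$1
    dur_ms=${2:-1}    # sqm-scripts now scales to 1 ms by default, per the text
    echo $(( rate_kbit * dur_ms * 1000 / 8000 ))
}

burst_for 500000      # 1 ms at 500 Mbit -> 62500 bytes
burst_for 500000 5    # 5 ms at 500 Mbit -> 312500 bytes
```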
The experiments I did with those didn’t yield great results: changing a value
by even one MTU sometimes caused sudden increases in throughput or inter-flow
latency, and the tradeoffs were not clear. I’m afraid admins could easily
cause problems fiddling with these. Fortunately most customer
Hi Pete,
you might want to have a look at htb's burst and cburst parameters, as these
should allow trading increased latency under load for better bandwidth
utilization.
Best Regards
Sebastian
> On Dec 30, 2018, at 21:42, Pete Heist wrote:
>
> It’s a bit more complicated than this. It looks like
It’s a bit more complicated than this. It looks like the htb rate limiter is
different in that, as rates increase, the actual rate starts to deviate from
the specified rate early on, but it handles the “out of CPU” situation rather
gracefully, where it still maintains control of the queue, just
For whatever reason, I’m seeing the rate limiters in cake and hfsc vastly
outperform htb in the one-armed router configuration I described in my previous
thread. To simplify things, I apply the qdiscs with a single class only at
egress of eth0 on apu1a:
apu2a <— default VLAN —> apu1a <—
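For reference, the three single-class egress setups being compared could look roughly like this; the device, rate, and fq_codel leaf are assumptions on my part, since the actual commands aren't shown in this part of the thread:

```shell
#!/bin/sh
# Illustrative single-class shaping at egress of eth0 (rate is a placeholder).

# cake: built-in shaper, one command.
tc qdisc replace dev eth0 root cake bandwidth 500mbit

# htb: one default class with an fq_codel leaf.
tc qdisc replace dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 500mbit
tc qdisc add dev eth0 parent 1:1 fq_codel

# hfsc: one default class with link-share and upper-limit curves.
tc qdisc replace dev eth0 root handle 1: hfsc default 1
tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 500mbit ul m2 500mbit
tc qdisc add dev eth0 parent 1:1 fq_codel
```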