After we found serious out-of-memory issues on smaller embedded
devices (128 MB RAM), we made some benchmarks with different
schedulers, with the result that Cake takes a serious amount of
memory. We use the out-of-tree Cake module, and we use it class-based,
since we have complex ways of doing QoS per interface, per MAC
address, or even per IP/network - so it's not just a simple
Cake-on-a-single-interface setup. Does anybody have a solution for
making that better?
HTB/FQ_CODEL ........ 62M
HTB/SFQ ............. 62M
HTB/PIE ............. 62M
HTB/FQ_CODEL_FAST ... 67M
HTB/CAKE ........... 111M
HFSC/FQ_CODEL_FAST .. 47M
HFSC/PIE ............ 49M
HFSC/SFQ ............ 50M
HFSC/FQ_CODEL ....... 52M
HFSC/CAKE .......... 109M
Note that the benchmark doesn't show the real values: these are
overall system figures and don't account for the memory taken by the
wireless driver, which is about 45 MB of RAM for ath10k.
That makes it all even worse, unfortunately, since there is not that
much RAM left for Cake - maybe about 70 MB.
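
For context, a minimal sketch of the kind of class-based hierarchy we
run (interface name, rates and class layout are illustrative only;
the real rules also classify per MAC and per IP):

    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 40mbit ceil 100mbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 40mbit ceil 100mbit
    # one leaf qdisc per class - with cake, every instance allocates
    # its own buffers, which is where the memory usage multiplies
    tc qdisc add dev eth0 parent 1:10 cake unlimited besteffort
    tc qdisc add dev eth0 parent 1:20 cake unlimited besteffort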
On 08.09.2019 at 19:27, Jonathan Morton wrote:
You could also set it back to 'internet' and progressively reduce the
bandwidth parameter, making the Cake shaper into the actual bottleneck.
This is the correct fix for the problem, and you should notice an
instant improvement as soon as the bandwidth parameter is correct.
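
Concretely, that adjustment is a one-liner (device name and rate are
placeholders here):

    # make the Cake shaper, not the radio, the bottleneck
    tc qdisc change dev eth0 root cake bandwidth 90mbit internet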
Hand tuning this one link is not a problem. I'm searching for a set of settings
that will provide generally good performance across a wide range of devices,
links, and situations.
From what you've indicated so far there's nothing as effective as a correct
bandwidth estimation if we consider the antenna (link) a black box. Expecting
the user to input expected throughput for every link and then managing that
information is essentially a non-starter.
Radio tuning provides some improvement, but until Ubiquiti starts shipping with
Codel on non-router devices I don't think there's a good solution here.
Any way to have the receiving device detect bloat and insert an ECN?
That's what the qdisc itself is supposed to do.
I don't think the time spent in the intermediate device is detectable at the
kernel level, but we keep track of latency for routing decisions and could
detect bloat with some accuracy; the problem is how to respond.
As long as you can detect which link the bloat is on (and in which direction),
you can respond by reducing the bandwidth parameter on that half-link by a
small amount. Since you have a cooperating network, maintaining a time
standard on each node sufficient to observe one-way delays seems feasible, as
is establishing a normal baseline latency for each link.
The characteristics of the bandwidth parameter being too high are easy to
observe. Not only will the one-way delay go up, but the received throughput in
the same direction at the same time will be lower than configured. You might
use the latter as a hint as to how far you need to reduce the shaped bandwidth.
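
As a rough sketch of that response (interface and numbers invented for
illustration): if a half-link is shaped to 50 Mbit/s but sustained
received throughput tops out around 43 Mbit/s while one-way delay is
elevated, the observed throughput is a reasonable new setting:

    # step the shaper down to what the link actually delivered
    tc qdisc change dev wlan0 root cake bandwidth 43mbit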
Deciding when and by how much to *increase* bandwidth, which is presumably
desirable when link conditions improve, is a more difficult problem when the
link hardware doesn't cooperate by informing you of its status. (This is
something you could reasonably ask Ubiquiti to address.)
I would assume that link characteristics will change slowly, and run an
occasional explicit bandwidth probe to see if spare bandwidth is available. If
that probe comes through without exhibiting bloat, *and* the link is otherwise
loaded to capacity, then increase the shaper by an amount within the probe's
capacity of measurement - and schedule a repeat.
A suitable probe might be 100 x 1500-byte packets paced out over a second, bypassing
the shaper. This will occupy just over 1Mbps of bandwidth, and can be expected
to induce 10ms of delay if injected into a saturated 100Mbps link. Observe the
delay experienced by each packet *and* the quantity of other traffic that
appears between them. Only if both are favourable can you safely open the
shaper, by 1Mbps.
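
A crude userspace approximation of such a probe (the target address is
a placeholder, ping measures round-trip rather than one-way delay, and
sub-200 ms intervals need root):

    # 100 x 1500-byte IP packets (1472 B payload + 28 B of headers),
    # paced 10 ms apart: ~1.2 Mbit/s for one second; the per-packet
    # RTTs show whether the probe induced extra queueing
    ping -c 100 -i 0.01 -s 1472 192.0.2.1

The arithmetic matches the figures above: 100 x 1500 B = 150 kB, which
drains from a saturated 100 Mbit/s link in about 12 ms.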
Since wireless links can be expected to change their capacity over time, due to
e.g. weather and tree growth, this seems to be more generally useful than a static guess. You
could deploy a new link with a conservative "guess" of say 10Mbps, and just
probe from there.
- Jonathan Morton
_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake