Hi Mikael,


> On Sep 3, 2020, at 15:10, Mikael Abrahamsson via Bloat 
> <bloat@lists.bufferbloat.net> wrote:
> 
> On Mon, 31 Aug 2020, Toke Høiland-Jørgensen wrote:
> 
>> And what about when you're running CAKE in 'unlimited' mode?
> 
> I tried this:
> 
> # tc qdisc add dev eth0 root cake bandwidth 900mbit

        That still employs cake's shaper, so it is not equivalent to running unlimited, I believe.
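
If you want to compare against genuinely unshaped cake, the "unlimited" keyword disables the shaper entirely; something along these lines (a sketch, untested on your box):

# tc qdisc replace dev eth0 root cake unlimited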

[PEDANT_MODE]

900 Mbps without explicit overhead accounting will result in a typical maximum TCP/IPv4 goodput of

900 * ((1500-20-20)/(1500+14)) = 867.899603699 Mbps

(the +14 because in raw mode cake accounts each packet at the size the kernel reports, which includes the 14 byte ethernet header; the "average network hdr offset: 14" in your stats confirms this). But since the true ethernet overhead is 38 bytes per packet (14 header + 4 FCS + 8 preamble + 12 interframe gap) instead of 14, that goodput actually occupies

(900 * ((1500-20-20)/(1500+14))) * ((1500+38)/(1500-20-20)) = 914.266842801 Mbps on the ethernet link,

which for small packets becomes problematic. For example, 150 byte IP packets:

(900 * ((150-20-20)/(150+14))) * ((150+38)/(150-20-20)) = 1031.70731707 Mbps

gross rate out of the 1000 Mbps that Gigabit ethernet offers.

In fact, the payload terms cancel, so the required gross rate is simply 900 * ((size+38)/(size+14)); at an IP packet size of 202 bytes this reaches the link's capacity, i.e. you have spent all the "credit" you got from reducing the shaper rate to 900 Mbps in the first place, and anything smaller overcommits the link:

(900 * ((202-20-20)/(202+14))) * ((202+38)/(202-20-20)) = 900 * (240/216) = 1000 Mbps
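
If you want to play with these numbers yourself, bc does fine (a minimal sketch, assuming bc is installed; this uses the cancelled form of the formula above):

$ echo "scale=3; 900*(202+38)/(202+14)" | bc
1000.000
$ echo "scale=3; 900*(150+38)/(150+14)" | bc
1031.707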

Maybe tell cake that you run over ethernet by adding the "ethernet" keyword, which will take care of both the per-packet overhead of 38 bytes and the minimum size of 84 bytes that any packet occupies on the wire?
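
Something like this should do (a sketch; as I read the tc-cake man page, "ethernet" is shorthand for "overhead 38 mpu 84"):

# tc qdisc replace dev eth0 root cake bandwidth 900mbit ethernet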

Please note that for throughput this does not matter all that much, but latency under load is not going to be pretty when too many small packets are in flight...
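
If you want to see that effect rather than take my word for it, flent's rrul test is the usual tool around here (assuming flent is installed and you have a netperf server to test against; the hostname below is a placeholder):

$ flent rrul -p all_scaled -l 60 -H <netserver-host> -t "cake 900mbit ethernet" -o rrul.png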

[/PEDANT_MODE]


> 
> This seems fine from a performance point of view (not that high sirq%, around 
> 35%) and does seem to limit my upstream traffic correctly. Not sure it helps 
> though, at these speeds the bufferbloat problem is not that obvious and easy 
> to test over the Internet :)

        Mmmh, how did you measure the sirq percentage? Some top versions show an overall percentage where 100% means all CPUs, so 35% on a quad core could mean one fully maxed-out CPU (25%) plus an additional 10% spread over the other three, or something more benign. Fuller-featured top versions (so not busybox's) and htop can also show the load per CPU, which is helpful for pinpointing hotspots...
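
For example (assuming procps top and/or the sysstat package are available; busybox's applets lack these views):

# top               (press '1' inside top to toggle the per-CPU display)
# mpstat -P ALL 1   (per-CPU breakdown; the %soft column is sirq)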

Best Regards
        Sebastian

> 
> root@OpenWrt:~# tc -s qdisc
> qdisc noqueue 0: dev lo root refcnt 2
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
> qdisc cake 8034: dev eth0 root refcnt 9 bandwidth 900Mbit diffserv3 
> triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0
> Sent 1111772001 bytes 959703 pkt (dropped 134, overlimits 221223 requeues 179)
> backlog 0b 0p requeues 179
> memory used: 2751976b of 15140Kb
> capacity estimate: 900Mbit
> min/max network layer size:           42 /    1514
> min/max overhead-adjusted size:       42 /    1514
> average network hdr offset:           14
> 
>                   Bulk  Best Effort        Voice
>  thresh      56250Kbit      900Mbit      225Mbit
>  target          5.0ms        5.0ms        5.0ms
>  interval      100.0ms      100.0ms      100.0ms
>  pk_delay          0us         22us        232us
>  av_delay          0us          6us          7us
>  sp_delay          0us          4us          5us
>  backlog            0b           0b           0b
>  pkts                0       959747           90
>  bytes               0   1111935437        39440
>  way_inds            0        22964            0
>  way_miss            0          275            2
>  way_cols            0            0            0
>  drops               0          134            0
>  marks               0            0            0
>  ack_drop            0            0            0
>  sp_flows            0            3            1
>  bk_flows            0            1            0
>  un_flows            0            0            0
>  max_len             0        68130         3714
>  quantum          1514         1514         1514
> 
> 
> -- 
> Mikael Abrahamsson    email: swm...@swm.pp.se
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
