I have a simpler setup now to remove some variables, both hosts are APU2 on 
Debian 9.6, kernel 4.9.0-8:

apu2a (iperf3 client) <— default VLAN —>  apu2b (iperf3 server)

Both have cake at 100mbit, egress only, with dual-srchost on the client and 
dual-dsthost on the server (roughly the tc setup sketched after the results 
below). With this setup (and probably previous ones, I just didn’t test it 
this way), bi-directional fairness with these flow counts works (rates here 
and below are Mbit/s):

        IP1 8-flow TCP up: 46.4
        IP2 1-flow TCP up: 47.3
        IP1 8-flow TCP down: 46.8
        IP2 1-flow TCP down: 46.7
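
For reference, the cake setup on each box is roughly this (the interface name 
is just an example, the rest matches what I described above):

        # apu2a (iperf3 client side), egress only
        tc qdisc replace dev enp1s0 root cake bandwidth 100Mbit dual-srchost
        # apu2b (iperf3 server side), egress only
        tc qdisc replace dev enp1s0 root cake bandwidth 100Mbit dual-dsthost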

but with the flow counts from my original report, it’s still imbalanced in 
much the same way as before:

        IP1 8-flow TCP up: 82.9
        IP2 1-flow TCP up: 10.9
        IP1 1-flow TCP down: 10.8
        IP2 8-flow TCP down: 83.3
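
To be explicit about what the test does, it’s just concurrent iperf3 runs 
bound to the two source IPs, roughly like this (addresses and ports are made 
up, and each concurrent run needs its own iperf3 server instance on a 
separate port):

        # IP1, 8 flows up
        iperf3 -c apu2b -p 5201 -B 10.0.0.1 -P 8 -t 60
        # IP2, 1 flow up
        iperf3 -c apu2b -p 5202 -B 10.0.0.2 -P 1 -t 60
        # IP1, 1 flow down (iperf3 reverse mode)
        iperf3 -c apu2b -p 5203 -B 10.0.0.1 -P 1 -t 60 -R
        # IP2, 8 flows down
        iperf3 -c apu2b -p 5204 -B 10.0.0.2 -P 8 -t 60 -R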

and now with ack-filter on both ends (not much change):

        IP1 8-flow TCP up: 82.8
        IP2 1-flow TCP up: 10.9
        IP1 1-flow TCP down: 10.5
        IP2 8-flow TCP down: 83.2
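
(Enabling the ack filter is just the extra keyword on the same qdisc, 
something like

        tc qdisc change dev enp1s0 root cake ack-filter

on both hosts; cake should keep the other parameters on a change.)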

Before I go further, one thing I’m seeing with this rig is that when 
“interplanetary” is used and the number of iperf3 TCP flows goes above the 
number of CPUs minus one (the APU2 has 4 cores), the UDP send rate starts 
dropping. For some reason this only happens with interplanetary, but such as 
it is, I’ve changed my tests to pit 8 UDP flows against 1 TCP flow instead, 
giving the UDP senders more CPU, which seems to work much better. All tests 
except the last are with “interplanetary”.
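
“Interplanetary” is just cake’s longest rtt preset, which in practice pushes 
the codel target out far enough that the AQM never kicks in; toggling it for 
these tests is roughly:

        tc qdisc change dev enp1s0 root cake interplanetary
        tc qdisc change dev enp1s0 root cake internet    # back to the 100ms default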

UDP upload competition (looks good):

        IP1 1-flow TCP up: 48.6
        IP2 8-flow UDP 48-mbit up: 48.2 (0% loss)
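
The UDP senders are plain iperf3 UDP streams; since iperf3’s -b is per 
stream, 8 flows at ~48 Mbit aggregate is roughly:

        iperf3 -c apu2b -B 10.0.0.2 -u -P 8 -b 6M -t 60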

UDP download competition (some imbalance, maybe a difference in how iperf3 
reverse mode works?):

        IP1 8-flow UDP 48-mbit down: 43.1 (0% loss)
        IP2 1-flow TCP down: 53.4 (0% loss)
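
(The download direction is the same command with -R added, e.g.

        iperf3 -c apu2b -B 10.0.0.1 -u -P 8 -b 6M -t 60 -R

so the UDP is then actually generated by the server on apu2b, which is the 
kind of reverse-mode difference I mean.)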

All four at once (looks like the previous two tests combined, with the upload 
and download directions not impacting one another, which is good):

        IP1 1-flow TCP up: 47.7
        IP2 8-flow UDP 48-mbit up: 48.2 (0% loss)
        IP1 8-flow UDP 48-mbit down: 43.3 (0% loss)
        IP2 1-flow TCP down: 52.3

All four at once, up IPs flipped (less fair):

        IP1 8-flow UDP 48-mbit up: 37.7 (0% loss)
        IP2 1-flow TCP up: 57.9
        IP1 8-flow UDP 48-mbit down: 38.9 (0% loss)
        IP2 1-flow TCP down: 56.3

All four at once, with interplanetary off again to double-check it, and yes, 
UDP gets punished in this case:

        IP1 1-flow TCP up: 60.6
        IP2 8-flow UDP 48-mbit up: 6.7 (86% loss)
        IP1 8-flow UDP 48-mbit down: 2.9 (94% loss)
        IP2 1-flow TCP down: 63.1

So have we learned something from this? Yes, fairness improves when the 
8-flow clients use UDP instead of TCP, but since “interplanetary” effectively 
turns the AQM off, we’re also testing a very different scenario, one that’s 
not too realistic. Does this prove the cause of the problem is TCP ack 
traffic?
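
One thing I could check next is cake’s own ack filter stats; with ack-filter 
enabled, the per-tin stats should show how many acks are actually being 
dropped:

        tc -s qdisc show dev enp1s0    # look for the ack_drop counter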

Thanks again for the help on this. After a whole day on it, I’ll have to shift 
gears tomorrow to FreeNet router changes. I’ll show them the progress on Monday 
so of course I’d like to have a great host fairness story for Cake, as this is 
one of the main reasons to use it instead of fq_codel, but perhaps this will 
get sorted out before then. :)

I agree with George that we’ve been through this before, and also with how he 
explained it in his latest email, but there have been many changes to Cake 
since we tested in 2017, so this could be a regression. I’m almost sure I 
tested this exact scenario back then, and I wouldn’t have put 8 up / 8 down 
on one IP and 1 up / 1 down on the other, which for some reason does give 
fairness.

FWIW, I also reproduced it in flent between the same APU2s used above, to be 
sure iperf3 wasn’t somehow causing it:

https://www.heistp.net/downloads/fairness_8_1/
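
(The flent runs were along these lines, with the single-flow and download 
variants done similarly:

        flent tcp_nup -H apu2b -l 60 --test-parameter upload_streams=8

but see the link above for the actual data and parameters.)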
