Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-27 Thread Georgios Amanakis
Thanks for testing, Pete! I should note, though, that the patch is incorrect
in terms of triple-isolate. It can be further improved by differentiating
between srchost and dsthost; the results are the same nevertheless. The same
principle can also be applied to the sparse flows.
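
As a sketch, "differentiating between srchost and dsthost" would mean giving
struct cake_host one bulk counter per direction instead of the single shared
counter in the patch as posted (hypothetical field names, mirroring the
existing refcnt pair):

struct cake_host {
	u32 srchost_tag;
	u32 dsthost_tag;
	u16 srchost_refcnt;
	u16 dsthost_refcnt;
	/* hypothetical: separate per-direction bulk counters, so the
	 * count maintained under cake_dsrc() never mixes with the one
	 * maintained under cake_ddst() */
	u16 srchost_bulk_flow_count;
	u16 dsthost_bulk_flow_count;
};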

However, I completely understand Jonathan when he says that this might
not be the optimal solution, and perhaps a different model of
flow-selection is necessary (e.g. doing exactly what the man page
says: first decide based on host priority, and then based on priority
among the flows of that host).

On Sat, Jan 26, 2019 at 2:35 AM Pete Heist  wrote:
>
> I ran my original iperf3 test with and without the patch, through my 
> one-armed router with hfsc+cake on egress each direction at 100Mbit:
>
> Unpatched:
>
> IP1 1-flow TCP up: 11.3
> IP2 8-flow TCP up: 90.1
> IP1 8-flow TCP down: 89.8
> IP2 1-flow TCP down: 11.3
> Jain’s fairness index, directional: 0.623 up, 0.631 down
> Jain’s fairness index, aggregate: 0.997
>
> Patched:
>
> IP1 1-flow TCP up: 51.0
> IP2 8-flow TCP up: 51.0
> IP1 8-flow TCP down: 50.7
> IP2 1-flow TCP down: 50.6
> Jain’s fairness index, directional: 1.0 up, 0.999 down
> Jain’s fairness index, aggregate: 0.999
>
> So this confirms George’s result. :)
>
> Obviously if we look at _aggregate_ fairness it’s essentially the same in 
> both cases. I think directional fairness is what users would expect though.
>
> Can anyone think of any potentially pathological cases from considering only 
> bulk flows for fairness, that I can test? Otherwise, I’d like to see this 
> idea taken in...
>
> > On Jan 16, 2019, at 4:47 AM, gamana...@gmail.com wrote:
> >
> > Of course I pasted the results for IP1 and IP2 the wrong way. Sorry!
> > These are the correct results, along with the *.flent.gz files.
> >
> > IP1:
> > flent -H 192.168.1.2 tcp_8down &
> > Data file written to ./tcp_8down-2019-01-15T223703.709305.flent.gz.
> > Summary of tcp_8down test run at 2019-01-16 03:37:03.709305:
> >
> > avg   median  # data pts
> > Ping (ms) ICMP   : 0.78 0.72 ms  342
> > TCP download avg : 6.03 5.83 Mbits/s 301
> > TCP download sum :    48.24    46.65 Mbits/s 301
> > TCP download::1  : 6.03 5.83 Mbits/s 298
> > TCP download::2  : 6.03 5.83 Mbits/s 297
> > TCP download::3  : 6.03 5.83 Mbits/s 297
> > TCP download::4  : 6.03 5.83 Mbits/s 298
> > TCP download::5  : 6.03 5.83 Mbits/s 298
> > TCP download::6  : 6.03 5.83 Mbits/s 298
> > TCP download::7  : 6.03 5.83 Mbits/s 297
> > TCP download::8  : 6.03 5.83 Mbits/s 298
> >
> >
> > flent -H 192.168.1.2 tcp_1up &
> > Data file written to ./tcp_1up-2019-01-15T223704.711193.flent.gz.
> > Summary of tcp_1up test run at 2019-01-16 03:37:04.711193:
> >
> >   avg   median  # data pts
> > Ping (ms) ICMP : 0.79 0.73 ms  342
> > TCP upload :    48.12    46.69 Mbits/s 294
> >
> >
> >
> > IP2:
> > flent -H 192.168.1.2 tcp_1down &
> > Data file written to ./tcp_1down-2019-01-15T223705.693550.flent.gz.
> > Summary of tcp_1down test run at 2019-01-16 03:37:05.693550:
> >
> >   avg   median  # data pts
> > Ping (ms) ICMP : 0.77 0.69 ms  341
> > TCP download   :    48.10    46.65 Mbits/s 300
> >
> >
> > flent -H 192.168.1.2 tcp_8up &
> > Data file written to ./tcp_8up-2019-01-15T223706.706614.flent.gz.
> > Summary of tcp_8up test run at 2019-01-16 03:37:06.706614:
> >
> >   avg   median  # data pts
> > Ping (ms) ICMP : 0.74 0.70 ms  341
> > TCP upload avg : 6.03 5.83 Mbits/s 301
> > TCP upload sum :    48.25    46.63 Mbits/s 301
> > TCP upload::1  : 6.04 5.86 Mbits/s 226
> > TCP upload::2  : 6.03 5.86 Mbits/s 226
> > TCP upload::3  : 6.03 5.86 Mbits/s 226
> > TCP upload::4  : 6.03 5.86 Mbits/s 225
> > TCP upload::5  : 6.03 5.86 Mbits/s 226
> > TCP upload::6  : 6.03 5.86 Mbits/s 226
> > TCP upload::7  : 6.03 5.78 Mbits/s 220
> > TCP upload::8  : 6.03 5.88 Mbits/s 277
> >
> >
> > 
>
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-25 Thread Pete Heist
I ran my original iperf3 test with and without the patch, through my one-armed 
router with hfsc+cake on egress each direction at 100Mbit:

Unpatched:

IP1 1-flow TCP up: 11.3
IP2 8-flow TCP up: 90.1
IP1 8-flow TCP down: 89.8
IP2 1-flow TCP down: 11.3
Jain’s fairness index, directional: 0.623 up, 0.631 down
Jain’s fairness index, aggregate: 0.997
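
(For reference, Jain's index is J = (sum x_i)^2 / (n * sum x_i^2); for the
unpatched upload case that's (11.3 + 90.1)^2 / (2 * (11.3^2 + 90.1^2)) ≈ 0.623.)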

Patched:

IP1 1-flow TCP up: 51.0
IP2 8-flow TCP up: 51.0
IP1 8-flow TCP down: 50.7
IP2 1-flow TCP down: 50.6
Jain’s fairness index, directional: 1.0 up, 0.999 down
Jain’s fairness index, aggregate: 0.999

So this confirms George’s result. :)

Obviously if we look at _aggregate_ fairness it’s essentially the same in both 
cases. I think directional fairness is what users would expect though.

Can anyone think of any potentially pathological cases from considering only 
bulk flows for fairness, that I can test? Otherwise, I’d like to see this idea 
taken in...

> On Jan 16, 2019, at 4:47 AM, gamana...@gmail.com wrote:
> 
> Of course I pasted the results for IP1 and IP2 the wrong way. Sorry!
> These are the correct results, along with the *.flent.gz files.
> 
> IP1: 
> flent -H 192.168.1.2 tcp_8down &
> Data file written to ./tcp_8down-2019-01-15T223703.709305.flent.gz.
> Summary of tcp_8down test run at 2019-01-16 03:37:03.709305:
> 
> avg   median  # data pts
> Ping (ms) ICMP   : 0.78 0.72 ms  342
> TCP download avg : 6.03 5.83 Mbits/s 301
> TCP download sum :    48.24    46.65 Mbits/s 301
> TCP download::1  : 6.03 5.83 Mbits/s 298
> TCP download::2  : 6.03 5.83 Mbits/s 297
> TCP download::3  : 6.03 5.83 Mbits/s 297
> TCP download::4  : 6.03 5.83 Mbits/s 298
> TCP download::5  : 6.03 5.83 Mbits/s 298
> TCP download::6  : 6.03 5.83 Mbits/s 298
> TCP download::7  : 6.03 5.83 Mbits/s 297
> TCP download::8  : 6.03 5.83 Mbits/s 298
> 
> 
> flent -H 192.168.1.2 tcp_1up &
> Data file written to ./tcp_1up-2019-01-15T223704.711193.flent.gz.
> Summary of tcp_1up test run at 2019-01-16 03:37:04.711193:
> 
>   avg   median  # data pts
> Ping (ms) ICMP : 0.79 0.73 ms  342
> TCP upload :    48.12    46.69 Mbits/s 294
> 
> 
> 
> IP2:
> flent -H 192.168.1.2 tcp_1down &
> Data file written to ./tcp_1down-2019-01-15T223705.693550.flent.gz.
> Summary of tcp_1down test run at 2019-01-16 03:37:05.693550:
> 
>   avg   median  # data pts
> Ping (ms) ICMP : 0.77 0.69 ms  341
> TCP download   :    48.10    46.65 Mbits/s 300
> 
> 
> flent -H 192.168.1.2 tcp_8up &
> Data file written to ./tcp_8up-2019-01-15T223706.706614.flent.gz.
> Summary of tcp_8up test run at 2019-01-16 03:37:06.706614:
> 
>   avg   median  # data pts
> Ping (ms) ICMP : 0.74 0.70 ms  341
> TCP upload avg : 6.03 5.83 Mbits/s 301
> TCP upload sum :    48.25    46.63 Mbits/s 301
> TCP upload::1  : 6.04 5.86 Mbits/s 226
> TCP upload::2  : 6.03 5.86 Mbits/s 226
> TCP upload::3  : 6.03 5.86 Mbits/s 226
> TCP upload::4  : 6.03 5.86 Mbits/s 225
> TCP upload::5  : 6.03 5.86 Mbits/s 226
> TCP upload::6  : 6.03 5.86 Mbits/s 226
> TCP upload::7  : 6.03 5.78 Mbits/s 220
> TCP upload::8  : 6.03 5.88 Mbits/s 277
> 
> 
> 

___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-18 Thread Toke Høiland-Jørgensen
Jonathan Morton  writes:

>>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>>> others think. I want to see if fairness is preserved in this case with
>>> sparse flows only. Could flent do this?
>> 
>> Well, sparse flows are (by definition) not building a queue, so it
>> doesn't really make sense to talk about fairness for them. How would you
>> measure that?
>> 
>> This is also the reason I agree that they shouldn't be counted for host
>> fairness calculation purposes, BTW...
>
> The trick is that we need to keep fairness of the deficit
> replenishments, which occur for sparse flows as well as bulk ones, but
> in smaller amounts. The number of active flows is presently the
> stand-in for this. It's possible to have a host backlogged with
> hundreds of new flows which are, by definition, sparse.

Right, there's some care needed to ensure we don't get weird behaviour
during transients such as flow startup.

> I'm still trying to get my head around how the modified code works in
> detail.  It's possible that a different implementation would either be
> more concise and readable, or better model what is actually needed.
> But I can't tell until I grok it.

Cool, good to know you are on it; I'm happy to wait until you've had
some time to form an opinion on this :)

-Toke
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-18 Thread Toke Høiland-Jørgensen
Sebastian Moeller  writes:

> Hi Toke,
>
>> On Jan 18, 2019, at 14:33, Toke Høiland-Jørgensen  wrote:
>> 
>> Georgios Amanakis  writes:
>> 
>>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>>> others think. I want to see if fairness is preserved in this case with
>>> sparse flows only. Could flent do this?
>> 
>> Well, sparse flows are (by definition) not building a queue, so it
>> doesn't really make sense to talk about fairness for them. How would you
>> measure that?
>> 
>> This is also the reason I agree that they shouldn't be counted for host
>> fairness calculation purposes, BTW...
>
> That leads to a question (revealing my lack of detailed knowledge): if
> there is a sufficient number of new flows (that should qualify as
> new/sparse) that servicing all of them takes longer than each queue takes
> to accumulate new packets, at what point in time are these flows
> considered "unworthy" of sparse-flow boosting? Or put differently, how is
> cake going to deal with a UDP flood where the 5-tuple hash is different
> for all packets (say, by spoofing ports or randomly picking dst
> addresses)?

Well, what is considered a sparse flow is a function of both the flow
rate itself, *as well as* the link rate, number of competing flows, etc.
So a flow can be sparse on one link, but turn into a bulk flow on
another because that link has less capacity.
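
For a concrete (illustrative) example: a constant 2 Mbit/s flow competing
with eight bulk flows is sparse on a 100 Mbit link (fair share ≈ 11 Mbit/s),
but the same flow becomes bulk on a 10 Mbit link (fair share ≈ 1.1 Mbit/s).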

I explore this in some detail here:
https://doi.org/10.1109/LCOMM.2018.2871457

-Toke
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-18 Thread Jonathan Morton
>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>> others think. I want to see if fairness is preserved in this case with
>> sparse flows only. Could flent do this?
> 
> Well, sparse flows are (by definition) not building a queue, so it
> doesn't really make sense to talk about fairness for them. How would you
> measure that?
> 
> This is also the reason I agree that they shouldn't be counted for host
> fairness calculation purposes, BTW...

The trick is that we need to keep fairness of the deficit replenishments, which 
occur for sparse flows as well as bulk ones, but in smaller amounts.  The 
number of active flows is presently the stand-in for this.  It's possible to 
have a host backlogged with hundreds of new flows which are, by definition, 
sparse.
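
As a sketch of the replenishment arithmetic under discussion (mirroring the
expression visible in the diffs in this thread, where quantum_div[n] is a
fixed-point reciprocal, roughly 65536/n):

/* Hedged sketch, not a patch: each replenishment credits a flow with
 * roughly quantum/host_load bytes.  The (prandom_u32() >> 16) term adds
 * a random 16-bit value before the final shift, dithering away the
 * rounding bias of the fixed-point division. */
static u32 replenish_credit(u32 flow_quantum, u16 host_load)
{
	return (flow_quantum * quantum_div[host_load] +
		(prandom_u32() >> 16)) >> 16;
}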

I'm still trying to get my head around how the modified code works in detail.  
It's possible that a different implementation would either be more concise and 
readable, or better model what is actually needed.  But I can't tell until I 
grok it.

 - Jonathan Morton

___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-18 Thread Sebastian Moeller
Hi Toke,

> On Jan 18, 2019, at 14:33, Toke Høiland-Jørgensen  wrote:
> 
> Georgios Amanakis  writes:
> 
>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>> others think. I want to see if fairness is preserved in this case with
>> sparse flows only. Could flent do this?
> 
> Well, sparse flows are (by definition) not building a queue, so it
> doesn't really make sense to talk about fairness for them. How would you
> measure that?
> 
> This is also the reason I agree that they shouldn't be counted for host
> fairness calculation purposes, BTW...

That leads to a question (revealing my lack of detailed knowledge): if there is
a sufficient number of new flows (that should qualify as new/sparse) that
servicing all of them takes longer than each queue takes to accumulate new
packets, at what point in time are these flows considered "unworthy" of
sparse-flow boosting? Or put differently, how is cake going to deal with a UDP
flood where the 5-tuple hash is different for all packets (say, by spoofing
ports or randomly picking dst addresses)?

Best Regards


> 
> -Toke
> ___
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake

___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-18 Thread Toke Høiland-Jørgensen
Georgios Amanakis  writes:

> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
> others think. I want to see if fairness is preserved in this case with
> sparse flows only. Could flent do this?

Well, sparse flows are (by definition) not building a queue, so it
doesn't really make sense to talk about fairness for them. How would you
measure that?

This is also the reason I agree that they shouldn't be counted for host
fairness calculation purposes, BTW...

-Toke
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-18 Thread Georgios Amanakis
Yes, exactly. Would be interesting to hear what Jonathan, Toke and others
think. I want to see if fairness is preserved in this case with sparse
flows only. Could flent do this?

On Fri, Jan 18, 2019, 5:07 AM Toke Høiland-Jørgensen wrote:
> George Amanakis writes:
>
> > A better version of the patch for testing.
>
> So basically, you're changing the host fairness algorithm to only
> consider bulk flows instead of all active flows to that host, right?
> Seems reasonable to me. Jonathan, any opinion?
>
> -Toke
>
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-18 Thread Toke Høiland-Jørgensen
George Amanakis  writes:

> A better version of the patch for testing.

So basically, you're changing the host fairness algorithm to only
consider bulk flows instead of all active flows to that host, right?
Seems reasonable to me. Jonathan, any opinion?

-Toke
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-15 Thread Pete Heist
Thanks for working on it, looks promising! I’d be interested in hearing some 
more feedback if this is the right approach, but it looks like it from the 
experiments. I should be able to put some more testing time into it in a few 
days...

> On Jan 16, 2019, at 4:47 AM, gamana...@gmail.com wrote:
> 
> Of course I pasted the results for IP1 and IP2 the wrong way. Sorry!
> These are the correct results, along with the *.flent.gz files.
> 
> IP1: 
> flent -H 192.168.1.2 tcp_8down &
> Data file written to ./tcp_8down-2019-01-15T223703.709305.flent.gz.
> Summary of tcp_8down test run at 2019-01-16 03:37:03.709305:
> 
> avg   median  # data pts
> Ping (ms) ICMP   : 0.78 0.72 ms  342
> TCP download avg : 6.03 5.83 Mbits/s 301
> TCP download sum :    48.24    46.65 Mbits/s 301
> TCP download::1  : 6.03 5.83 Mbits/s 298
> TCP download::2  : 6.03 5.83 Mbits/s 297
> TCP download::3  : 6.03 5.83 Mbits/s 297
> TCP download::4  : 6.03 5.83 Mbits/s 298
> TCP download::5  : 6.03 5.83 Mbits/s 298
> TCP download::6  : 6.03 5.83 Mbits/s 298
> TCP download::7  : 6.03 5.83 Mbits/s 297
> TCP download::8  : 6.03 5.83 Mbits/s 298
> 
> 
> flent -H 192.168.1.2 tcp_1up &
> Data file written to ./tcp_1up-2019-01-15T223704.711193.flent.gz.
> Summary of tcp_1up test run at 2019-01-16 03:37:04.711193:
> 
>   avg   median  # data pts
> Ping (ms) ICMP : 0.79 0.73 ms  342
> TCP upload :    48.12    46.69 Mbits/s 294
> 
> 
> 
> IP2:
> flent -H 192.168.1.2 tcp_1down &
> Data file written to ./tcp_1down-2019-01-15T223705.693550.flent.gz.
> Summary of tcp_1down test run at 2019-01-16 03:37:05.693550:
> 
>   avg   median  # data pts
> Ping (ms) ICMP : 0.77 0.69 ms  341
> TCP download   :    48.10    46.65 Mbits/s 300
> 
> 
> flent -H 192.168.1.2 tcp_8up &
> Data file written to ./tcp_8up-2019-01-15T223706.706614.flent.gz.
> Summary of tcp_8up test run at 2019-01-16 03:37:06.706614:
> 
>   avg   median  # data pts
> Ping (ms) ICMP : 0.74 0.70 ms  341
> TCP upload avg : 6.03 5.83 Mbits/s 301
> TCP upload sum :    48.25    46.63 Mbits/s 301
> TCP upload::1  : 6.04 5.86 Mbits/s 226
> TCP upload::2  : 6.03 5.86 Mbits/s 226
> TCP upload::3  : 6.03 5.86 Mbits/s 226
> TCP upload::4  : 6.03 5.86 Mbits/s 225
> TCP upload::5  : 6.03 5.86 Mbits/s 226
> TCP upload::6  : 6.03 5.86 Mbits/s 226
> TCP upload::7  : 6.03 5.78 Mbits/s 220
> TCP upload::8  : 6.03 5.88 Mbits/s 277
> 
> 
> 

___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


[Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-15 Thread George Amanakis
A better version of the patch for testing.

Setup:
IP{1,2}(flent) <> Router <> Server(netserver)

Router:
tc qdisc add dev enp1s0 root cake bandwidth 100mbit dual-srchost besteffort
tc qdisc add dev enp4s0 root cake bandwidth 100mbit dual-dsthost besteffort

IP1:
Data file written to ./tcp_8down-2019-01-15T222742.358874.flent.gz.
Summary of tcp_8down test run at 2019-01-16 03:27:42.358874:

 avg   median  # data pts
 Ping (ms) ICMP   : 0.86 0.78 ms  342
 TCP download avg : 6.16 5.86 Mbits/s 301
 TCP download sum :    49.28    46.90 Mbits/s 301
 TCP download::1  : 6.23 5.86 Mbits/s 297
 TCP download::2  : 6.16 5.87 Mbits/s 297
 TCP download::3  : 6.15 5.87 Mbits/s 297
 TCP download::4  : 6.14 5.87 Mbits/s 297
 TCP download::5  : 6.15 5.87 Mbits/s 297
 TCP download::6  : 6.15 5.87 Mbits/s 297
 TCP download::7  : 6.15 5.87 Mbits/s 297
 TCP download::8  : 6.15 5.87 Mbits/s 297

Data file written to ./tcp_1up-2019-01-15T222743.387906.flent.gz.
Summary of tcp_1up test run at 2019-01-16 03:27:43.387906:

   avg   median  # data pts
 Ping (ms) ICMP : 0.87 0.80 ms  343
 TCP upload :    47.02    46.20 Mbits/s 265


IP2:
Data file written to ./tcp_1up-2019-01-15T222744.371050.flent.gz.
Summary of tcp_1up test run at 2019-01-16 03:27:44.371050:

   avg   median  # data pts
 Ping (ms) ICMP : 0.89 0.77 ms  342
 TCP upload :    46.89    46.36 Mbits/s 293
Data file written to ./tcp_8down-2019-01-15T222745.382941.flent.gz.
Summary of tcp_8down test run at 2019-01-16 03:27:45.382941:

 avg   median  # data pts
 Ping (ms) ICMP   : 0.90 0.81 ms  343
 TCP download avg : 6.15 5.86 Mbits/s 301
 TCP download sum :    49.23    46.91 Mbits/s 301
 TCP download::1  : 6.15 5.87 Mbits/s 297
 TCP download::2  : 6.15 5.87 Mbits/s 297
 TCP download::3  : 6.15 5.87 Mbits/s 296
 TCP download::4  : 6.15 5.87 Mbits/s 297
 TCP download::5  : 6.15 5.87 Mbits/s 297
 TCP download::6  : 6.16 5.87 Mbits/s 297
 TCP download::7  : 6.16 5.87 Mbits/s 297
 TCP download::8  : 6.16 5.87 Mbits/s 297



---
 sch_cake.c | 67 --
 1 file changed, 50 insertions(+), 17 deletions(-)

diff --git a/sch_cake.c b/sch_cake.c
index d434ae0..962a090 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -148,6 +148,7 @@ struct cake_host {
u32 dsthost_tag;
u16 srchost_refcnt;
u16 dsthost_refcnt;
+   u16 bulk_flow_count;
 };
 
 struct cake_heap_entry {
@@ -1921,12 +1922,22 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
flow->deficit = (b->flow_quantum *
 quantum_div[host_load]) >> 16;
} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
+   struct cake_host *srchost = &b->hosts[flow->srchost];
+   struct cake_host *dsthost = &b->hosts[flow->dsthost];
+
/* this flow was empty, accounted as a sparse flow, but actually
 * in the bulk rotation.
 */
flow->set = CAKE_SET_BULK;
b->sparse_flow_count--;
b->bulk_flow_count++;
+
+   if (cake_dsrc(q->flow_mode))
+   srchost->bulk_flow_count++;
+
+   if (cake_ddst(q->flow_mode))
+   dsthost->bulk_flow_count++;
+
}
 
if (q->buffer_used > q->buffer_max_used)
@@ -2097,23 +2108,8 @@ retry:
	srchost = &b->hosts[flow->srchost];
	dsthost = &b->hosts[flow->dsthost];
host_load = 1;
 
-   if (cake_dsrc(q->flow_mode))
-   host_load = max(host_load, srchost->srchost_refcnt);
-
-   if (cake_ddst(q->flow_mode))
-   host_load = max(host_load, dsthost->dsthost_refcnt);
-
-   WARN_ON(host_load > CAKE_QUEUES);
-
/* flow isolation (DRR++) */
if (flow->deficit <= 0) {
-   /* The shifted prandom_u32() is a way to apply dithering to
-* avoid accumulating roundoff errors
-*/
-   flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-                     (prandom_u32() >> 16)) >> 16;
-   list_move_tail(&flow->flowchain, &b->old_flows);
-
/* Keep all flows with deficits out of the sparse and decaying
 * rotations.  No non-empty flow 

Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-15 Thread Georgios Amanakis
The patch I previously sent had the host_load manipulated only when
dual-dsthost is sent, since that was what I was primarily testing. For
dual-srchost to behave the same way line 2107 has to be changed, too. Will
resubmit in case anybody wants to test, later today.

On Tue, Jan 15, 2019, 2:22 PM George Amanakis 
> I think what is happening here is that if a client has flows such as "a
> (bulk upload)" and "b (bulk download)", the incoming ACKs of flow "a"
> compete with the incoming bulk traffic on flow "b". With compete I mean
> in terms of flow selection.
>
> So if we adjust the host_load to be the same with the bulk_flow_count of
> *each* host, the problem seems to be resolved.
> I drafted a patch below.
>
> Pete's setup, tested with the patch (ingress in mbit/s):
> IP1: 8down  49.18mbit/s
> IP1: 1up46.73mbit/s
> IP2: 1down  47.39mbit/s
> IP2: 8up49.21mbit/s
>
>
> ---
>  sch_cake.c | 34 --
>  1 file changed, 28 insertions(+), 6 deletions(-)
>
> diff --git a/sch_cake.c b/sch_cake.c
> index d434ae0..5c0f0e1 100644
> --- a/sch_cake.c
> +++ b/sch_cake.c
> @@ -148,6 +148,7 @@ struct cake_host {
> u32 dsthost_tag;
> u16 srchost_refcnt;
> u16 dsthost_refcnt;
> +   u16 bulk_flow_count;
>  };
>
>  struct cake_heap_entry {
> @@ -1897,10 +1898,10 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
> q->last_packet_time = now;
> }
>
> +   struct cake_host *srchost = &b->hosts[flow->srchost];
> +   struct cake_host *dsthost = &b->hosts[flow->dsthost];
> /* flowchain */
> if (!flow->set || flow->set == CAKE_SET_DECAYING) {
> -   struct cake_host *srchost = &b->hosts[flow->srchost];
> -   struct cake_host *dsthost = &b->hosts[flow->dsthost];
> u16 host_load = 1;
>
> if (!flow->set) {
> @@ -1927,6 +1928,11 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
> flow->set = CAKE_SET_BULK;
> b->sparse_flow_count--;
> b->bulk_flow_count++;
> +   if (cake_dsrc(q->flow_mode))
> +   srchost->bulk_flow_count++;
> +
> +   if (cake_ddst(q->flow_mode))
> +   dsthost->bulk_flow_count++;
> }
>
> if (q->buffer_used > q->buffer_max_used)
> @@ -2101,7 +2107,7 @@ retry:
> host_load = max(host_load, srchost->srchost_refcnt);
>
> if (cake_ddst(q->flow_mode))
> -   host_load = max(host_load, dsthost->dsthost_refcnt);
> +   host_load = max(host_load, dsthost->bulk_flow_count);
>
> WARN_ON(host_load > CAKE_QUEUES);
>
> @@ -2110,8 +2116,6 @@ retry:
> /* The shifted prandom_u32() is a way to apply dithering to
>  * avoid accumulating roundoff errors
>  */
> -   flow->deficit += (b->flow_quantum * quantum_div[host_load] +
> -                     (prandom_u32() >> 16)) >> 16;
> list_move_tail(&flow->flowchain, &b->old_flows);
>
> /* Keep all flows with deficits out of the sparse and decaying
> @@ -2122,6 +2126,11 @@ retry:
> if (flow->head) {
> b->sparse_flow_count--;
> b->bulk_flow_count++;
> +   if (cake_dsrc(q->flow_mode))
> +   srchost->bulk_flow_count++;
> +
> +   if (cake_ddst(q->flow_mode))
> +   dsthost->bulk_flow_count++;
> flow->set = CAKE_SET_BULK;
> } else {
> /* we've moved it to the bulk rotation for
> @@ -2131,6 +2140,8 @@ retry:
> flow->set = CAKE_SET_SPARSE_WAIT;
> }
> }
> +   flow->deficit += (b->flow_quantum * quantum_div[host_load] +
> +                     (prandom_u32() >> 16)) >> 16;
> goto retry;
> }
>
> @@ -2151,6 +2162,11 @@ retry:
>    &b->decaying_flows);
> if (flow->set == CAKE_SET_BULK) {
> b->bulk_flow_count--;
> +   if (cake_dsrc(q->flow_mode))
> +   srchost->bulk_flow_count--;
> +
> +   if (cake_ddst(q->flow_mode))
> +   dsthost->bulk_flow_count--;
> b->decaying_flow_count++;
> } else if (flow->set == CAKE_SET_SPARSE ||
>flow->set ==
> CAKE_SET_SPARSE_WAIT) {
> @@ -2164,8 +2180,14 @@ retry:
> if (flow->set == 

[Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-15 Thread George Amanakis

I think what is happening here is that if a client has flows such as "a
(bulk upload)" and "b (bulk download)", the incoming ACKs of flow "a"
compete with the incoming bulk traffic on flow "b". With compete I mean
in terms of flow selection.

So if we adjust the host_load to be the same with the bulk_flow_count of
*each* host, the problem seems to be resolved.
I drafted a patch below.

Pete's setup, tested with the patch (ingress in mbit/s):
IP1: 8down  49.18mbit/s
IP1: 1up46.73mbit/s
IP2: 1down  47.39mbit/s
IP2: 8up49.21mbit/s


---
 sch_cake.c | 34 --
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/sch_cake.c b/sch_cake.c
index d434ae0..5c0f0e1 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -148,6 +148,7 @@ struct cake_host {
u32 dsthost_tag;
u16 srchost_refcnt;
u16 dsthost_refcnt;
+   u16 bulk_flow_count;
 };
 
 struct cake_heap_entry {
@@ -1897,10 +1898,10 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
q->last_packet_time = now;
}
 
+   struct cake_host *srchost = &b->hosts[flow->srchost];
+   struct cake_host *dsthost = &b->hosts[flow->dsthost];
/* flowchain */
if (!flow->set || flow->set == CAKE_SET_DECAYING) {
-   struct cake_host *srchost = &b->hosts[flow->srchost];
-   struct cake_host *dsthost = &b->hosts[flow->dsthost];
u16 host_load = 1;
 
if (!flow->set) {
@@ -1927,6 +1928,11 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
flow->set = CAKE_SET_BULK;
b->sparse_flow_count--;
b->bulk_flow_count++;
+   if (cake_dsrc(q->flow_mode))
+   srchost->bulk_flow_count++;
+
+   if (cake_ddst(q->flow_mode))
+   dsthost->bulk_flow_count++;
}
 
if (q->buffer_used > q->buffer_max_used)
@@ -2101,7 +2107,7 @@ retry:
host_load = max(host_load, srchost->srchost_refcnt);
 
if (cake_ddst(q->flow_mode))
-   host_load = max(host_load, dsthost->dsthost_refcnt);
+   host_load = max(host_load, dsthost->bulk_flow_count);
 
WARN_ON(host_load > CAKE_QUEUES);
 
@@ -2110,8 +2116,6 @@ retry:
/* The shifted prandom_u32() is a way to apply dithering to
 * avoid accumulating roundoff errors
 */
-   flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-                     (prandom_u32() >> 16)) >> 16;
	list_move_tail(&flow->flowchain, &b->old_flows);
 
/* Keep all flows with deficits out of the sparse and decaying
@@ -2122,6 +2126,11 @@ retry:
if (flow->head) {
b->sparse_flow_count--;
b->bulk_flow_count++;
+   if (cake_dsrc(q->flow_mode))
+   srchost->bulk_flow_count++;
+
+   if (cake_ddst(q->flow_mode))
+   dsthost->bulk_flow_count++;
flow->set = CAKE_SET_BULK;
} else {
/* we've moved it to the bulk rotation for
@@ -2131,6 +2140,8 @@ retry:
flow->set = CAKE_SET_SPARSE_WAIT;
}
}
+   flow->deficit += (b->flow_quantum * quantum_div[host_load] +
+                     (prandom_u32() >> 16)) >> 16;
goto retry;
}
 
@@ -2151,6 +2162,11 @@ retry:
   &b->decaying_flows);
if (flow->set == CAKE_SET_BULK) {
b->bulk_flow_count--;
+   if (cake_dsrc(q->flow_mode))
+   srchost->bulk_flow_count--;
+
+   if (cake_ddst(q->flow_mode))
+   dsthost->bulk_flow_count--;
b->decaying_flow_count++;
} else if (flow->set == CAKE_SET_SPARSE ||
   flow->set == CAKE_SET_SPARSE_WAIT) {
@@ -2164,8 +2180,14 @@ retry:
if (flow->set == CAKE_SET_SPARSE ||
flow->set == CAKE_SET_SPARSE_WAIT)
b->sparse_flow_count--;
-   else if (flow->set == CAKE_SET_BULK)
+   else if (flow->set == CAKE_SET_BULK) {
b->bulk_flow_count--;
+   if (cake_dsrc(q->flow_mode))
+   srchost->bulk_flow_count--;
+
+ 

Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-04 Thread Pete Heist

> On Jan 3, 2019, at 2:02 PM, Pete Heist  wrote:
> 
> I tried iperf3 in UDP mode, but cake is treating these flows aggressively. I 
> get the impression that cake penalizes flows heavily that do not respond to 
> congestion control signals. If I pit one 8 TCP flows against a single UDP 
> flow at 40mbit, the UDP flow goes into a death spiral with increasing drops 
> over time (iperf3 output attached).

Sigh, this spiraling was partly because iperf3 in UDP mode sends 8k buffers by 
default. If I use “-l 1472” with the iperf3 client, the send rates are the 
same, but the packet loss is much lower, without interplanetary. So one more 
result:

IP1 1-flow TCP up: 49 - 59.5
IP2 8-flow UDP 48-mbit up: 48 - 36 (loss 0% - 25%)
IP1 8-flow UDP 48-mbit down: 47.5 - 35.8 (loss 0% - 25%)
IP2 1-flow TCP down: 21.8 - 61.5

I do see the rates and loss gradually change over 60 seconds, so numbers are 
shown at t=0 and t=60 seconds.
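
(For anyone reproducing this: an 8-flow, 48 Mbit/s UDP sender of this shape
would be something like "iperf3 -c <server> -u -P 8 -b 6M -l 1472 -t 60".
That's illustrative rather than my exact command line; note iperf3's -b is
per stream, so 8 x 6M = 48M total.)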

I’ve read that nuttcp does UDP bulk flows better than iperf3, so one day I may 
try that.
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-04 Thread Pete Heist

> On Jan 4, 2019, at 3:08 AM, Georgios Amanakis  wrote:
> 
> On Thu, 2019-01-03 at 23:06 +0100, Pete Heist wrote:
>> Both have cake at 100mbit only on egress, with dual-srchost on client
>> and dual-dsthost on server. With this setup (and probably previous
>> ones, I just didn’t test it this way), bi-directional fairness with
>> these flow counts works:
>> 
>>  IP1 8-flow TCP up: 46.4
>>  IP2 1-flow TCP up: 47.3
>>  IP1 8-flow TCP down: 46.8
>>  IP2 1-flow TCP down: 46.7
>> 
>> but with the original flow counts reported it’s still similarly
>> imbalanced as before:
>> 
>>  IP1 8-flow TCP up: 82.9
>>  IP2 1-flow TCP up: 10.9
>>  IP1 1-flow TCP down: 10.8
>>  IP2 8-flow TCP down: 83.3
> 
> I just tested on archlinux, latest 4.20 on the router, iproute2 4.19.0,
> using flent 1.2.2/netserver in a setup similar to Pete's:
> 
> client 1,2 <> router <> server
> 
> The results are the same as Pete's.

One more scenario to add, IP1: 1 up / 1 down, IP2: 1 up / 8 down. In the graph, 
IP1 = host1, IP2 = host2, sorry for the longer labels, and watch out that the 
position of the hosts changes.

dual keywords: 
https://www.heistp.net/downloads/fairness_1_1_1_8/bar_combine_fairness_1_1_1_8.svg
 


host keywords: 
https://www.heistp.net/downloads/fairness_1_1_1_8_host/bar_combine_fairness_1_1_1_8_host.svg
 


Also not what I’d expect, but host 2’s upload does get slowed down, even 
disproportionately, in response to the extra aggregate download he gets. Up and 
down are more balanced with the “host” keywords, but without flow fairness 
there’s higher inter-flow latency.
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Pete Heist

> On Jan 3, 2019, at 11:06 PM, Pete Heist  wrote:
> 
> I’m almost sure I tested this exact scenario, and would not have put 8 up / 8 
> down on one IP and 1 up / 1 down on the other, which works with fairness for 
> some reason.

I’m going to dial this statement back. I went back through my old tests and in 
my main series of a thousand tests or so, I was splitting the two uploads and 
downloads across four IPs, so that’s different. Then when we were testing 
fairness in combination with rtt keywords, I was in fact testing 2 up / 2 down 
on one IP and 8 up / 8 down on the other, which is a scenario that produces the 
expected results.

So unless I can find some other past tests, or build an old enough version to 
show that the behavior was different, I can’t be sure I ever tested it this 
way, and don’t know if it’s a regression or it just works as designed and I 
never realized it.

On the one hand the IP1=1/8, IP2=8/1 results are “fair” in the sense that one 
client gets his wish for 8 uploads and the other gets his wish for 8 downloads, 
like “hey, I’ll let you drown out my 1 download if you let me drown out your 1 
upload” :) but on the other hand, when Jon says there should be a difference 
between the triple-isolate and dual modes, that’s not what we’re seeing here.
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Georgios Amanakis
On Thu, 2019-01-03 at 23:06 +0100, Pete Heist wrote:
> Both have cake at 100mbit only on egress, with dual-srchost on client
> and dual-dsthost on server. With this setup (and probably previous
> ones, I just didn’t test it this way), bi-directional fairness with
> these flow counts works:
> 
>   IP1 8-flow TCP up: 46.4
>   IP2 1-flow TCP up: 47.3
>   IP1 8-flow TCP down: 46.8
>   IP2 1-flow TCP down: 46.7
> 
> but with the original flow counts reported it’s still similarly
> imbalanced as before:
> 
>   IP1 8-flow TCP up: 82.9
>   IP2 1-flow TCP up: 10.9
>   IP1 1-flow TCP down: 10.8
>   IP2 8-flow TCP down: 83.3

I just tested on archlinux, latest 4.20 on the router, iproute2 4.19.0,
using flent 1.2.2/netserver in a setup similar to Pete's:

client 1,2 <> router <> server

The results are the same as Pete's.




___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Pete Heist
I have a simpler setup now to remove some variables, both hosts are APU2 on 
Debian 9.6, kernel 4.9.0-8:

apu2a (iperf3 client) <— default VLAN —>  apu2b (iperf3 server)

Both have cake at 100mbit only on egress, with dual-srchost on client and 
dual-dsthost on server. With this setup (and probably previous ones, I just 
didn’t test it this way), bi-directional fairness with these flow counts works:

IP1 8-flow TCP up: 46.4
IP2 1-flow TCP up: 47.3
IP1 8-flow TCP down: 46.8
IP2 1-flow TCP down: 46.7

but with the original flow counts reported it’s still similarly imbalanced as 
before:

IP1 8-flow TCP up: 82.9
IP2 1-flow TCP up: 10.9
IP1 1-flow TCP down: 10.8
IP2 8-flow TCP down: 83.3

and now with ack-filter on both ends (not much change):

IP1 8-flow TCP up: 82.8
IP2 1-flow TCP up: 10.9
IP1 1-flow TCP down: 10.5
IP2 8-flow TCP down: 83.2

Before I go further, what I’m seeing with this rig is that when 
“interplanetary” is used and the number of iperf3 TCP flows goes above the 
number of CPUs minus one (in my case, 4 cores), the UDP send rate starts 
dropping. This only happens with interplanetary for some reason, but such as it 
is, I changed my tests to pit 8 UDP flows against 1 TCP flow instead, giving 
the UDP senders more CPU, as this seems to work much better. All tests except 
the last are with “interplanetary”.

UDP upload competition (looks good):

IP1 1-flow TCP up: 48.6
IP2 8-flow UDP 48-mbit up: 48.2 (0% loss)

UDP download competition (some imbalance, maybe a difference in how iperf3 
reverse mode works?):

IP1 8-flow UDP 48-mbit down: 43.1 (0% loss)
IP2 1-flow TCP down: 53.4 (0% loss)

All four at once (looks similar to previous two tests not impacting one 
another, which is good):

IP1 1-flow TCP up: 47.7
IP2 8-flow UDP 48-mbit up: 48.2 (0% loss)
IP1 8-flow UDP 48-mbit down: 43.3 (0% loss)
IP2 1-flow TCP down: 52.3

All four at once, up IPs flipped (less fair):

IP1 8-flow UDP 48-mbit up: 37.7 (0% loss)
IP2 1-flow TCP up: 57.9
IP1 8-flow UDP 48-mbit down: 38.9 (0% loss)
IP2 1-flow TCP down: 56.3

All four at once, interplanetary off again, to double check it, and yes, UDP 
gets punished in this case:

IP1 1-flow TCP up: 60.6
IP2 8-flow UDP 48-mbit up: 6.7 (86% loss)
IP1 8-flow UDP 48-mbit down: 2.9 (94% loss)
IP2 1-flow TCP down: 63.1

So have we learned something from this? Yes, fairness is improved when using 
UDP instead of TCP for the 8-flow clients, but by turning AQM off we’re also 
testing a very different scenario, one that’s not too realistic. Does this 
prove the cause of the problem is TCP ack traffic?

Thanks again for the help on this. After a whole day on it, I’ll have to shift 
gears tomorrow to FreeNet router changes. I’ll show them the progress on Monday 
so of course I’d like to have a great host fairness story for Cake, as this is 
one of the main reasons to use it instead of fq_codel, but perhaps this will 
get sorted out before then. :)

I agree with George that we’ve been through this before, and also with how he 
explained it in his latest email, but there have been many changes to Cake 
since we tested in 2017, so this could be a regression. I’m almost sure I 
tested this exact scenario, and would not have put 8 up / 8 down on one IP and 
1 up / 1 down on the other, which works with fairness for some reason.

FWIW, I also reproduced it in flent between the same APU2s used above, to be 
sure iperf3 wasn’t somehow causing it:

https://www.heistp.net/downloads/fairness_8_1/ 


___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Georgios Amanakis
In my previous test the clients communicated to different flent
servers (flent-newark, flent-newark.bufferbloat.net). Iproute2 was
iproute2-ss4.18.0-4-openwrt. I will try to test on latest 4.20, will
take some time though.

I have the feeling we have discussed a similar issue in the past
(https://lists.bufferbloat.net/pipermail/cake/2017-November/002985.html).
I understand what Jonathan says. However I cannot explain why
*without* bidirectional traffic the "dual- host" mode behaves like
"src/dst-host", but *with* bidirectional traffic it behaves like
"triple-isolate".

The cake instances on the two interfaces are separate, right? So what
happens on one interface should not influence the other. Even with
bidirectional traffic the "dual- host" mode should still behave like
the "src/dst-host" mode in terms of host fairness, or not? At least
this is what I would intuitively expect.


On Thu, Jan 3, 2019 at 11:35 AM Pete Heist  wrote:
>
>
> > On Jan 3, 2019, at 2:20 PM, Toke Høiland-Jørgensen  wrote:
> >
> > Pete Heist  writes:
> >
> >> I’m not sure there’d be any way I can test fairness with iperf3 in UDP
> >> mode. We’d need something that has some congestion control feedback,
> >> right? Otherwise, I don’t think there are any rates I can choose to
> >> both reach saturation and not be severely punished. And if it has
> >> congestion control feedback, it has the ACK-like traffic we’re trying
> >> to avoid for the test. :)
> >
> > Try setting cake to 'interplanetary' - that should basically turn off
> > the AQM dropping...
>
> Ok, so long as we know that we’re not testing any possible interactions 
> between AQM and host fairness, but we may learn more from it anyway. I’m 
> using my client to server rig here (two APU2s on kernel 4.9.0-8), not the 
> APU1 one-armed router middle box.
>
> So, basic single client rig tests (OK):
>
> IP1 8-flow TCP up: 95.8
> IP2 1-flow 48mbit UDP up: 48.0 (0% loss)
> IP1 8-flow x 6mbit/flow = 48mbit UDP down: 48.0 (0% loss)
> IP2 1-flow TCP down: 96.0
>
> Competition up (OK):
>
> IP1 8-flow TCP up: 59.5
> IP2 1-flow 48mbit UDP up: 36.7 (0% loss)
> Note: I don’t know why the UDP send rate slowed down here. 
> It’s probably not the CPU, as it occurs at lower rates also. I’ll forge on.
>
> Competition down (not OK, high UDP loss):
>
> IP1 1-flow TCP down: 53.3
> IP2 8-flow x 6mbit/flow 48mbit UDP down: 8.6 (82% loss)
> Note: I have no idea what happened with the UDP loss rate 
> here, so I’ll go back to a single IP1 UDP test.
>
> Back to single client (weird, still seeing loss):
>
> IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (5.6% loss)
>
> Ok, I know that was working with no loss before. Stop and restart cake, then 
> (loss stops after restart):
>
> IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (0% loss)
>
> That’s better, now stop and restart cake and try the "competition down" test 
> again (second trial):
>
> IP1 1-flow TCP down: 55.3
> IP2 8-flow x 6mbit/flow 48mbit UDP down: 5.8 (88% loss)
> Note: I have no idea what happened with the UDP loss rate 
> here, so I’ll go back to a single IP1 UDP test.
>
> Since this rig hasn’t passed the two-host uni-directional test because of the 
> high loss rate on the “competition down” test, I’m not going to go any 
> further. I’ll rather go back to my one-armed router rig and send those 
> results in a separate email.
>
> However, I consider it strange that I still see UDP loss after the 
> "competition down” test has run and is completed, then it stops happening 
> after restarting cake. That’s another issue I don’t have time to explore at 
> the moment, unless someone has a good idea of what’s going on there.
>
> ___
> Cake mailing list
> Cake@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cake
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Pete Heist

> On Jan 3, 2019, at 2:20 PM, Toke Høiland-Jørgensen  wrote:
> 
> Pete Heist  writes:
> 
>> I’m not sure there’d be any way I can test fairness with iperf3 in UDP
>> mode. We’d need something that has some congestion control feedback,
>> right? Otherwise, I don’t think there are any rates I can choose to
>> both reach saturation and not be severely punished. And if it has
>> congestion control feedback, it has the ACK-like traffic we’re trying
>> to avoid for the test. :)
> 
> Try setting cake to 'interplanetary' - that should basically turn off
> the AQM dropping...

Ok, so long as we know that we’re not testing any possible interactions between 
AQM and host fairness, but we may learn more from it anyway. I’m using my 
client to server rig here (two APU2s on kernel 4.9.0-8), not the APU1 one-armed 
router middle box.

So, basic single client rig tests (OK):

IP1 8-flow TCP up: 95.8
IP2 1-flow 48mbit UDP up: 48.0 (0% loss)
IP1 8-flow x 6mbit/flow = 48mbit UDP down: 48.0 (0% loss)
IP2 1-flow TCP down: 96.0

Competition up (OK):

IP1 8-flow TCP up: 59.5
IP2 1-flow 48mbit UDP up: 36.7 (0% loss)
Note: I don’t know why the UDP send rate slowed down here. It’s 
probably not the CPU, as it occurs at lower rates also. I’ll forge on.

Competition down (not OK, high UDP loss):

IP1 1-flow TCP down: 53.3
IP2 8-flow x 6mbit/flow 48mbit UDP down: 8.6 (82% loss)
Note: I have no idea what happened with the UDP loss rate here, 
so I’ll go back to a single IP1 UDP test.

Back to single client (weird, still seeing loss):

IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (5.6% loss)

Ok, I know that was working with no loss before. Stop and restart cake, then 
(loss stops after restart):

IP2 8-flow x 6mbit/flow 48mbit UDP down: 48.0 (0% loss)

That’s better, now stop and restart cake and try the "competition down" test 
again (second trial):

IP1 1-flow TCP down: 55.3
IP2 8-flow x 6mbit/flow 48mbit UDP down: 5.8 (88% loss)
Note: I have no idea what happened with the UDP loss rate here, 
so I’ll go back to a single IP1 UDP test.

Since this rig hasn’t passed the two-host uni-directional test because of the 
high loss rate on the “competition down” test, I’m not going to go any further. 
I’ll rather go back to my one-armed router rig and send those results in a 
separate email.

However, I consider it strange that I still see UDP loss after the "competition 
down” test has run and is completed, then it stops happening after restarting 
cake. That’s another issue I don’t have time to explore at the moment, unless 
someone has a good idea of what’s going on there.

___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

>> On Jan 3, 2019, at 12:03 PM, Toke Høiland-Jørgensen  wrote:
>> 
>>> Jon, is there anything I can check by instrumenting the code somewhere
>>> specific?
>> 
>> Is there any way you could test with a bulk UDP flow? I'm wondering
>> whether this is a second-order effect where TCP ACKs are limited in a
>> way that cause the imbalance? Are you using ACK compression?
>
>
> Not using ack-filter, if that’s what’s meant by ACK compression. I
> thought about the TCP ACK traffic, but would be very surprised if that
> amount of ACK traffic could cause that large of an imbalance, although
> it’s worth trying to find out.
>
> I tried iperf3 in UDP mode, but cake is treating these flows
> aggressively. I get the impression that cake penalizes flows heavily
> that do not respond to congestion control signals. If I pit one 8 TCP
> flows against a single UDP flow at 40mbit, the UDP flow goes into a
> death spiral with increasing drops over time (iperf3 output attached).
>
> I’m not sure there’d be any way I can test fairness with iperf3 in UDP
> mode. We’d need something that has some congestion control feedback,
> right? Otherwise, I don’t think there are any rates I can choose to
> both reach saturation and not be severely punished. And if it has
> congestion control feedback, it has the ACK-like traffic we’re trying
> to avoid for the test. :)

Try setting cake to 'interplanetary' - that should basically turn off
the AQM dropping...
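
(Illustratively, on a config like the ones in this thread, that would be
something along the lines of "tc qdisc replace dev eth0 root cake bandwidth
100mbit dual-dsthost besteffort interplanetary".)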

-Toke
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Pete Heist

> On Jan 3, 2019, at 12:03 PM, Toke Høiland-Jørgensen  wrote:
> 
>> Jon, is there anything I can check by instrumenting the code somewhere
>> specific?
> 
> Is there any way you could test with a bulk UDP flow? I'm wondering
> whether this is a second-order effect where TCP ACKs are limited in a
> way that cause the imbalance? Are you using ACK compression?


Not using ack-filter, if that’s what’s meant by ACK compression. I thought 
about the TCP ACK traffic, but would be very surprised if that amount of ACK 
traffic could cause that large of an imbalance, although it’s worth trying to 
find out.

I tried iperf3 in UDP mode, but cake is treating these flows aggressively. I 
get the impression that cake penalizes flows heavily that do not respond to 
congestion control signals. If I pit one 8 TCP flows against a single UDP flow 
at 40mbit, the UDP flow goes into a death spiral with increasing drops over 
time (iperf3 output attached).

I’m not sure there’d be any way I can test fairness with iperf3 in UDP mode. 
We’d need something that has some congestion control feedback, right? 
Otherwise, I don’t think there are any rates I can choose to both reach 
saturation and not be severely punished. And if it has congestion control 
feedback, it has the ACK-like traffic we’re trying to avoid for the test. :)

As another test, I took out the one-armed router and just tried from a client 
to a server, no VLANs. Same result. So, still stumped. Thank you for the help...

---
Server listening on 5202
---
Accepted connection from 10.0.0.239, port 48289
[  5] local 10.0.0.231 port 5202 connected to 10.72.0.239 port 38334
[ ID] Interval       Transfer     Bandwidth       Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  4.20 MBytes  35.3 Mbits/sec  0.467 ms  21/559 (3.8%)  
[  5]   1.00-2.00   sec  4.27 MBytes  35.8 Mbits/sec  0.555 ms  43/589 (7.3%)  
[  5]   2.00-3.00   sec  4.48 MBytes  37.6 Mbits/sec  0.482 ms  69/642 (11%)  
[  5]   3.00-4.00   sec  3.90 MBytes  32.7 Mbits/sec  0.461 ms  87/586 (15%)  
[  5]   4.00-5.00   sec  3.84 MBytes  32.2 Mbits/sec  0.490 ms  111/603 (18%)  
[  5]   5.00-6.00   sec  3.94 MBytes  33.0 Mbits/sec  0.341 ms  130/634 (21%)  
[  5]   6.00-7.00   sec  3.63 MBytes  30.5 Mbits/sec  0.539 ms  144/609 (24%)  
[  5]   7.00-8.00   sec  3.59 MBytes  30.1 Mbits/sec  0.451 ms  159/618 (26%)  
[  5]   8.00-9.00   sec  3.21 MBytes  26.9 Mbits/sec  0.987 ms  181/592 (31%)  
[  5]   9.00-10.00  sec  3.23 MBytes  27.1 Mbits/sec  0.224 ms  225/639 (35%)  
[  5]  10.00-11.00  sec  3.11 MBytes  26.1 Mbits/sec  0.204 ms  214/612 (35%)  
[  5]  11.00-12.00  sec  2.80 MBytes  23.5 Mbits/sec  0.371 ms  229/587 (39%)  
[  5]  12.00-13.00  sec  2.66 MBytes  22.3 Mbits/sec  0.543 ms  254/594 (43%)  
[  5]  13.00-14.00  sec  2.73 MBytes  22.9 Mbits/sec  0.386 ms  292/642 (45%)  
[  5]  14.00-15.00  sec  2.49 MBytes  20.9 Mbits/sec  0.399 ms  298/617 (48%)  
[  5]  15.00-16.00  sec  2.40 MBytes  20.1 Mbits/sec  0.216 ms  288/595 (48%)  
[  5]  16.00-17.00  sec  2.20 MBytes  18.5 Mbits/sec  0.486 ms  327/609 (54%)  
[  5]  17.00-18.00  sec  2.19 MBytes  18.3 Mbits/sec  0.538 ms  344/624 (55%)  
[  5]  18.00-19.00  sec  2.00 MBytes  16.8 Mbits/sec  0.519 ms  321/577 (56%)  
[  5]  19.00-20.00  sec  1.95 MBytes  16.4 Mbits/sec  0.930 ms  369/619 (60%)  
[  5]  20.00-21.00  sec  1.93 MBytes  16.2 Mbits/sec  0.526 ms  377/624 (60%)  
[  5]  21.00-22.00  sec  1.66 MBytes  13.9 Mbits/sec  0.543 ms  374/586 (64%)  
[  5]  22.00-23.00  sec  1.70 MBytes  14.2 Mbits/sec  0.833 ms  412/629 (66%)  
[  5]  23.00-24.00  sec  1.66 MBytes  13.9 Mbits/sec  0.340 ms  402/614 (65%)  
[  5]  24.00-25.00  sec  1.52 MBytes  12.7 Mbits/sec  0.693 ms  431/625 (69%)  
[  5]  25.00-26.00  sec  1.40 MBytes  11.7 Mbits/sec  0.491 ms  404/583 (69%)  
[  5]  26.00-27.00  sec  1.32 MBytes  11.1 Mbits/sec  1.028 ms  456/625 (73%)  
[  5]  27.00-28.00  sec  1.25 MBytes  10.5 Mbits/sec  0.870 ms  427/587 (73%)  
[  5]  28.00-29.00  sec  1.20 MBytes  10.1 Mbits/sec  0.660 ms  479/633 (76%)  
[  5]  29.00-30.00  sec  1.19 MBytes  9.96 Mbits/sec  0.773 ms  466/618 (75%)  
[  5]  30.00-31.00  sec  1.05 MBytes  8.85 Mbits/sec  1.103 ms  455/590 (77%)  
[  5]  31.00-32.00  sec  1.03 MBytes  8.65 Mbits/sec  0.559 ms  488/620 (79%)  
[  5]  32.00-33.00  sec   888 KBytes  7.27 Mbits/sec  0.415 ms  494/605 (82%)  
[  5]  33.00-34.00  sec   896 KBytes  7.34 Mbits/sec  1.023 ms  489/601 (81%)  
[  5]  34.00-35.00  sec   880 KBytes  7.21 Mbits/sec  0.986 ms  519/629 (83%)  
[  5]  35.00-36.00  sec   776 KBytes  6.36 Mbits/sec  0.414 ms  493/590 (84%)  
[  5]  36.00-37.00  sec   800 KBytes  6.55 Mbits/sec  0.845 ms  506/606 (83%)  
[  5]  37.00-38.00  sec   832 KBytes  6.82 Mbits/sec  1.124 ms  536/640 (84%)  
[  5]  38.00-39.00  sec   768 KBytes  6.29 Mbits/sec  0.577 ms  515/611 (84%)  
[  5]  39.00-40.00  sec   728 KBytes  5.96 

Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Toke Høiland-Jørgensen
> Jon, is there anything I can check by instrumenting the code somewhere
> specific?

Is there any way you could test with a bulk UDP flow? I'm wondering
whether this is a second-order effect where TCP ACKs are limited in a
way that cause the imbalance? Are you using ACK compression?

-Toke
___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-03 Thread Pete Heist

> On Jan 3, 2019, at 6:18 AM, Jonathan Morton  wrote:
> 
>> On 3 Jan, 2019, at 6:15 am, Georgios Amanakis  wrote:
>> 
>> It seems if both clients are having bidirectional traffic, dual-
>> {dst,src}host has the same effect as triple-isolate (on both lan and
>> wan interfaces) on their bandwidth.

Exactly what I’m seeing- thanks for testing George...

> I'm left wondering whether the sense of src and dst has got accidentally 
> reversed at some point, or if the dual modes are being misinterpreted as 
> triple-isolate.  To figure that out, I'd need to look carefully at several 
> related parts of the code.  Can anyone reproduce it from the latest kernels' 
> upstream code, or is it only in the module?  And precisely which version of 
> iproute2 is everyone using?

It will be a while before I can try this on 4.19+, but: iproute2/oldstable,now 
3.16.0-2 i386. I compile tc-adv from HEAD.

Here are more bi-directional tests with 8 up / 1 down on IP1 and 1 up / 8 down 
on IP2:

dual-srchost/dual-dsthost:
IP1: 83.1 / 10.9, IP2: 10.7 / 83.0
dual-dsthost/dual-srchost (sense flipped):
IP1: 83.0 / 10.5, IP2: 10.7 / 82.9
triple-isolate:
IP1: 83.1 / 10.5, IP2: 10.7 / 82.9
srchost/dsthost (sanity check):
IP1: 47.6 / 43.8, IP2: 44.2 / 47.4
dsthost/srchost (sanity check, sense flipped):
IP1: 81.3 / 9.79, IP2: 11.0 / 80.7
flows:
IP1: 83.0 / 10.4, IP2: 10.5 / 82.9

I also tried testing shaping on eth0.3300 and ingress of eth0.3300 instead of 
egress of both eth0 and eth0.3300, because that’s more like what I tested 
before. There was no significant change from the above results.

I managed to compile versions all the way back to July 15, 2018 
(1e2473f702cf253f8f5ade4d622c6e4ba661a09d) and still see the same result. I’ll 
try to go earlier.

As far as the code goes, the easy stuff:
- flow_mode values in cake_hash are 5 for dual-srchost, 6 for dual-dsthost and 
7 for triple-isolate (see the enum excerpt after this list)
- the values from cake_dsrc(flow_mode) and cake_ddst(flow_mode) are as expected 
in all three cases
- flow_override and host_override are both 0
- looks correct: !(flow_mode & CAKE_FLOW_FLOWS) == 0
- this looks normal to me (shows reply packets on eth0):
   IP1 ping: dsthost_idx = 450, reduced_hash = 129
   IP1 irtt: dsthost_idx = 450, reduced_hash = 158
   IP2 ping: dsthost_idx = 301, reduced_hash = 78
   IP2 irtt: dsthost_idx = 301, reduced_hash = 399
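
For context, those flow_mode values line up with the flag encoding used by
cake (as in the upstream uapi header; excerpted here), where the dual and
triple modes are unions of the host and flow bits, and cake_dsrc() /
cake_ddst() test the SRC_IP|FLOWS and DST_IP|FLOWS combinations:

enum {
	CAKE_FLOW_NONE = 0,
	CAKE_FLOW_SRC_IP,	/* 1 */
	CAKE_FLOW_DST_IP,	/* 2 */
	CAKE_FLOW_HOSTS,	/* 3 = SRC_IP | DST_IP */
	CAKE_FLOW_FLOWS,	/* 4 */
	CAKE_FLOW_DUAL_SRC,	/* 5 = SRC_IP | FLOWS */
	CAKE_FLOW_DUAL_DST,	/* 6 = DST_IP | FLOWS */
	CAKE_FLOW_TRIPLE,	/* 7 = HOSTS  | FLOWS */
	CAKE_FLOW_MAX,
};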

Jon, is there anything I can check by instrumenting the code somewhere specific?

___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-02 Thread Jonathan Morton
> On 3 Jan, 2019, at 6:15 am, Georgios Amanakis  wrote:
> 
> It seems if both clients are having bidirectional traffic, dual-
> {dst,src}host has the same effect as triple-isolate (on both lan and
> wan interfaces) on their bandwidth.

> This shouldn't happen though, or am I wrong?

If both clients are communicating with the same single server IP, then there 
*should* be a difference between triple-isolate and the dual modes.  In that 
case triple-isolate would behave like plain flow isolation, because it takes 
the maximum flow-load of the src and dst hosts to determine which dual mode it 
should behave most like.

Conversely, if the clients are communicating with a different server IP for 
each flow, or are each sending all their flows to one server IP that's unique 
to them, then triple-isolate should behave the same as the appropriate dual 
modes.  This is the use-case that triple-isolate assumes in its design.

It's also possible for triple-isolation to behave differently from either of 
the dual modes, if there's a sufficiently complex pattern of traffic flows.  I 
think those cases would be relatively unusual in practice, but they certainly 
can occur.
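
For reference, the host-isolation logic in cake's dequeue path (the same
lines visible in the diffs in this thread) is what produces this: in
triple-isolate mode both branches are taken, so the larger of the two
per-host flow counts ends up dividing the flow's quantum.

	host_load = 1;

	if (cake_dsrc(q->flow_mode))
		host_load = max(host_load, srchost->srchost_refcnt);

	if (cake_ddst(q->flow_mode))
		host_load = max(host_load, dsthost->dsthost_refcnt);

	/* the flow's deficit is then replenished with
	 * (b->flow_quantum * quantum_div[host_load] +
	 *  (prandom_u32() >> 16)) >> 16
	 */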

I'm left wondering whether the sense of src and dst has got accidentally 
reversed at some point, or if the dual modes are being misinterpreted as 
triple-isolate.  To figure that out, I'd need to look carefully at several 
related parts of the code.  Can anyone reproduce it from the latest kernels' 
upstream code, or is it only in the module?  And precisely which version of 
iproute2 is everyone using?

 - Jonathan Morton

___
Cake mailing list
Cake@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cake


Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-02 Thread Georgios Amanakis
It seems if both clients are having bidirectional traffic, dual-
{dst,src}host has the same effect as triple-isolate (on both lan and
wan interfaces) on their bandwidth.
This shouldn't happen though, or am I wrong?


On Wed, 2019-01-02 at 22:57 -0500, Georgios Amanakis wrote:
> I can reproduce this one to my surprise, too. 
> I tested on my Comcast connection, with a WRT1900ACS, running openwrt
> (r8082-95b3f8ec8d, 4.14.70), with two interfaces br-lan and
> eth0(wan).
> 
> IP1=1 up / 8 down    IP2=4 up / 4 down
>   src/dst, bidir: IP1=0.88 /  8.44, IP2=0.66 / 7.75 (ok)
> dualsrc/dualdst, bidir: IP1=0.27 / 10.56, IP2=1.41 / 6.42 (unfair)
> 
> No VLANs, no other schedulers on eth0 and br-lan apart from cake.
> 
> 
> 
> On Wed, 2019-01-02 at 00:04 +0100, Pete Heist wrote:
> > In my one-armed router setup I’m seeing host fairness work perfectly
> > with srchost or dsthost, but with dual-srchost or dual-dsthost, host
> > fairness deviates from the ideal, _only_ when there's bi-directional
> > traffic. The deviation is then dependent on the number of flows. Is
> > this expected?
> > 
> > I had thought that dual-src/dsthost worked the same as src/dsthost
> > (fairness between hosts) with the exception that there is also
> > fairness of flows within each host.
> > 
> > Here are some results (all rates aggregate throughput in Mbit):
> > 
> > IP1=8 up / 1 down   IP2=1 up / 8 down (post-test tc stats attached):
> > srchost/dsthost, upload only: IP1=48.1, IP2=47.9  (OK)
> > srchost/dsthost, download only: IP1=47.8, IP2=47.8  (OK)
> > srchost/dsthost, bi-directional: IP1=47.5 up / 43.9 down,
> > IP2=44.7 up / 46.7 down  (OK)
> > 
> > dual-srchost/dual-dsthost, upload only: IP1=48.1, IP2=48.0  (OK)
> > dual-srchost/dual-dsthost, download only: IP1=47.9, IP2=47.9  (OK)
> > dual-srchost/dual-dsthost, bi-directional: IP1=83.0 up / 10.7 down,
> > IP2=10.6 up / 83.0 down (*** asymmetric ***)
> > 
> > Dual-srchost/dual-dsthost, bi-directional tests with different flow
> > counts:
> > 
> > IP1=4 up / 1 down   IP2=1 up / 4 down:
> > IP1=74.8 up / 18.8 down, IP2=18.8 up / 74.8 down
> > 
> > IP1=2 up / 1 down   IP2=1 up / 2 down:
> > IP1=62.4 up / 31.3 down, IP2=31.3 up / 62.4 down
> > 
> > IP1=4 up / 1 down   IP2=1 up / 8 down:
> > IP1=81.8 up / 11.5 down, IP2=17.4 up / 76.3 down
> > 
> > IP1=2 up / 1 down   IP2=1 up / 8 down:
> > IP1=79.9 up / 13.5 down, IP2=25.7 up / 68.1 down
> > 
> > The setup:
> > 
> > apu2a (kernel 4.9)  <— default VLAN —>  apu1a (kernel 3.16.7)
> > <— VLAN 3300 —>  apu2b (kernel 4.9)
> > 
> > - apu1a is the router, and has cake only on egress of both eth0 and
> > eth0.3300, rate limited to 100mbit for both
> > - it has no trouble shaping at 100mbit up and down simultaneously, so
> > that should not be a problem
> > - the same problem occurs at 25mbit or 50mbit
> > - since apu2a is the client, [dual-]dsthost is used on eth0 and
> > [dual-]srchost is used on eth0.3300
> > - the fairness test setup seems correct, based on the results of most
> > of the tests, at least
> > - note in the qdisc stats attached there is a prio qdisc on eth0 for
> > filtering out VLAN traffic so it isn’t shaped twice
> > - I also get the exact same results with an htb or hfsc hierarchy on
> > eth0 instead of adding a qdisc to eth0.3300
> > - printk’s in sch_cake.c show the values of flow_mode, srchost_hash
> > and dsthost_hash as expected
> > - I also see it going into allocate_src and allocate_dst as expected,
> > and later ending up in found_src and found_dst
> > 
> > I’m stumped. I know I’ve tested fairness of dual-src/dsthost
> > before,
> > but that was from the egress of client and server, and it was on a
> > recent kernel. Time to sleep on it...
> > 



Re: [Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-02 Thread Georgios Amanakis
I can reproduce this one, to my surprise, too.
I tested on my Comcast connection, with a WRT1900ACS, running openwrt
(r8082-95b3f8ec8d, 4.14.70), with two interfaces br-lan and eth0(wan).

IP1=1 up / 8 down   IP2=4 up / 4 down
src/dst, bidir: IP1=0.88 /  8.44, IP2=0.66 / 7.75 (ok)
dualsrc/dualdst, bidir: IP1=0.27 / 10.56, IP2=1.41 / 6.42 (unfair)

No VLANs, no other schedulers on eth0 and br-lan apart from cake.



On Wed, 2019-01-02 at 00:04 +0100, Pete Heist wrote:
> In my one-armed router setup I’m seeing host fairness work perfectly
> with srchost or dsthost, but with dual-srchost or dual-dsthost, host
> fairness deviates from the ideal, _only_ when there's bi-directional
> traffic. The deviation is then dependent on the number of flows. Is
> this expected?
> 
> I had thought that dual-src/dsthost worked the same as src/dsthost
> (fairness between hosts) with the exception that there is also
> fairness of flows within each host.
> 
> Here are some results (all rates aggregate throughput in Mbit):
> 
> IP1=8 up / 1 down   IP2=1 up / 8 down (post-test tc stats attached):
>   srchost/dsthost, upload only: IP1=48.1, IP2=47.9  (OK)
>   srchost/dsthost, download only: IP1=47.8, IP2=47.8  (OK)
>   srchost/dsthost, bi-directional: IP1=47.5 up / 43.9 down,
> IP2=44.7 up / 46.7 down  (OK)
> 
>   dual-srchost/dual-dsthost, upload only: IP1=48.1, IP2=48.0  (OK)
>   dual-srchost/dual-dsthost, download only: IP1=47.9, IP2=47.9  (OK)
>   dual-srchost/dual-dsthost, bi-directional: IP1=83.0 up / 10.7 down,
> IP2=10.6 up / 83.0 down (*** asymmetric ***)
> 
> Dual-srchost/dual-dsthost, bi-directional tests with different flow
> counts:
> 
> IP1=4 up / 1 down   IP2=1 up / 4 down:
>   IP1=74.8 up / 18.8 down, IP2=18.8 up / 74.8 down
> 
> IP1=2 up / 1 down   IP2=1 up / 2 down:
>   IP1=62.4 up / 31.3 down, IP2=31.3 up / 62.4 down
> 
> IP1=4 up / 1 down   IP2=1 up / 8 down:
>   IP1=81.8 up / 11.5 down, IP2=17.4 up / 76.3 down
> 
> IP1=2 up / 1 down   IP2=1 up / 8 down:
>   IP1=79.9 up / 13.5 down, IP2=25.7 up / 68.1 down
> 
> The setup:
> 
>   apu2a (kernel 4.9)  <— default VLAN —>  apu1a (kernel 3.16.7)
>   <— VLAN 3300 —>  apu2b (kernel 4.9)
> 
> - apu1a is the router, and has cake only on egress of both eth0 and
> eth0.3300, rate limited to 100mbit for both
> - it has no trouble shaping at 100mbit up and down simultaneously, so
> that should not be a problem
> - the same problem occurs at 25mbit or 50mbit
> - since apu2a is the client, [dual-]dsthost is used on eth0 and
> [dual-]srchost is used on eth0.3300
> - the fairness test setup seems correct, based on the results of most
> of the tests, at least
> - note in the qdisc stats attached there is a prio qdisc on eth0 for
> filtering out VLAN traffic so it isn’t shaped twice
> - I also get the exact same results with an htb or hfsc hierarchy on
> eth0 instead of adding a qdisc to eth0.3300
> - printk’s in sch_cake.c show the values of flow_mode, srchost_hash and
> dsthost_hash as expected
> - I also see it going into allocate_src and allocate_dst as expected,
> and later ending up in found_src and found_dst
> 
> I’m stumped. I know I’ve tested fairness of dual-src/dsthost before,
> but that was from the egress of client and server, and it was on a
> recent kernel. Time to sleep on it...
> 



[Cake] dual-src/dsthost unfairness, only with bi-directional traffic

2019-01-01 Thread Pete Heist
In my one-armed router setup I’m seeing host fairness work perfectly with 
srchost or dsthost, but with dual-srchost or dual-dsthost, host fairness 
deviates from the ideal, _only_ when there's bi-directional traffic. The 
deviation is then dependent on the number of flows. Is this expected?

I had thought that dual-src/dsthost worked the same as src/dsthost (fairness 
between hosts) with the exception that there is also fairness of flows within 
each host.

Here are some results (all rates aggregate throughput in Mbit):

IP1=8 up / 1 down   IP2=1 up / 8 down (post-test tc stats attached):
srchost/dsthost, upload only: IP1=48.1, IP2=47.9  (OK)
srchost/dsthost, download only: IP1=47.8, IP2=47.8  (OK)
srchost/dsthost, bi-directional: IP1=47.5 up / 43.9 down, IP2=44.7 up / 
46.7 down  (OK)

dual-srchost/dual-dsthost, upload only: IP1=48.1, IP2=48.0  (OK)
dual-srchost/dual-dsthost, download only: IP1=47.9, IP2=47.9  (OK)
dual-srchost/dual-dsthost, bi-directional: IP1=83.0 up / 10.7 down, 
IP2=10.6 up / 83.0 down (*** asymmetric ***)

Dual-srchost/dual-dsthost, bi-directional tests with different flow counts:

IP1=4 up / 1 down   IP2=1 up / 4 down:
IP1=74.8 up / 18.8 down, IP2=18.8 up / 74.8 down

IP1=2 up / 1 down   IP2=1 up / 2 down:
IP1=62.4 up / 31.3 down, IP2=31.3 up / 62.4 down

IP1=4 up / 1 down   IP2=1 up / 8 down:
IP1=81.8 up / 11.5 down, IP2=17.4 up / 76.3 down

IP1=2 up / 1 down   IP2=1 up / 8 down:
IP1=79.9 up / 13.5 down, IP2=25.7 up / 68.1 down

The setup:

apu2a (kernel 4.9)  <— default VLAN —>  apu1a (kernel 3.16.7)  <— VLAN 3300 —>
apu2b (kernel 4.9)

- apu1a is the router, and has cake only on egress of both eth0 and eth0.3300, 
rate limited to 100mbit for both (roughly the config sketched after this list)
- it has no trouble shaping at 100mbit up and down simultaneously, so that 
should not be a problem
- the same problem occurs at 25mbit or 50mbit
- since apu2a is the client, [dual-]dsthost is used on eth0 and [dual-]srchost 
is used on eth0.3300
- the fairness test setup seems correct, based on the results of most of the 
tests, at least.
- note in the qdisc stats attached there is a prio qdisc on eth0 for filtering 
out VLAN traffic so it isn’t shaped twice
- I also get the exact same results with an htb or hfsc hierarchy on eth0 
instead of adding a qdisc to eth0.3300
- printk’s in sch_cake.c show the values of flow_mode, srchost_hash and 
dsthost_hash as expected
- I also see it going into allocate_src and allocate_dst as expected, and later 
ending up in found_src and found_dst
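
For anyone trying to reproduce this, the shaping config amounts to roughly the
following (a sketch based on the description above; it omits the prio qdisc
used to keep VLAN traffic from being shaped twice, under which cake actually
sits as a child on eth0):

# toward the clients: fairness by destination host, then by flow
tc qdisc add dev eth0 root cake bandwidth 100Mbit dual-dsthost

# from the clients: fairness by source host, then by flow
tc qdisc add dev eth0.3300 root cake bandwidth 100Mbit dual-srchost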

I’m stumped. I know I’ve tested fairness of dual-src/dsthost before, but that 
was from the egress of client and server, and it was on a recent kernel. Time 
to sleep on it...

qdisc prio 1: dev eth0 root refcnt 2 bands 2 priomap  1 1 1 1 1 1 1 1 1 1 1 1 1 
1 1 1
 Sent 1502062182 bytes 1525226 pkt (dropped 204373, overlimits 0 requeues 0) 
 backlog 0b 4294550547p requeues 0
qdisc cake 10: dev eth0 parent 1:1 bandwidth 100Mbit besteffort dual-dsthost 
nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0 
 Sent 751856301 bytes 762993 pkt (dropped 8512, overlimits 2414269 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 361600b of 500b
 capacity estimate: 100Mbit
 min/max network layer size:   42 /1514
 min/max overhead-adjusted size:   42 /1514
 average network hdr offset:   14

  Tin 0
  thresh100Mbit
  target  5.0ms
  interval  100.0ms
  pk_delay3.9ms
  av_delay2.4ms
  sp_delay 60us
  backlog0b
  pkts   771505
  bytes   764743469
  way_inds6
  way_miss   26
  way_cols0
  drops8512
  marks   0
  ack_drop0
  sp_flows9
  bk_flows4
  un_flows0
  max_len 18168
  quantum  1514

qdisc cake 8060: dev eth0.3300 root refcnt 2 bandwidth 100Mbit besteffort 
dual-srchost nonat nowash no-ack-filter split-gso rtt 100.0ms raw overhead 0 
 Sent 750205881 bytes 762233 pkt (dropped 8542, overlimits 904221 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 279744b of 500b
 capacity estimate: 100Mbit
 min/max network layer size:   42 /1514
 min/max overhead-adjusted size:   42 /1514
 average network hdr offset:   14

  Tin 0
  thresh100Mbit
  target  5.0ms
  interval  100.0ms
  pk_delay157us
  av_delay 83us
  sp_delay  1us
  backlog0b
  pkts   770775
  bytes   763138469
  way_inds31166
  way_miss   23
  way_cols0
  drops8542
  marks   0
  ack_drop0
  sp_flows   17
  bk_flows1
  un_flows0
  max_len 30280
  quantum  1514