Thanks for testing, Pete! I should note, though, that the patch is incorrect
in terms of triple-isolate. It can be further improved by
differentiating between srchost and dsthost. The results are the same
nevertheless. The same principle can also be applied to the sparse
flows.
However, I completely
I ran my original iperf3 test with and without the patch, through my one-armed
router with hfsc+cake on egress in each direction at 100Mbit:
Unpatched:
IP1 1-flow TCP up: 11.3
IP2 8-flow TCP up: 90.1
IP1 8-flow TCP down: 89.8
IP2 1-flow TCP down: 11.3
Jain’s fairness index, directional: 0.623 up,
Jonathan Morton writes:
>>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>>> others think. I want to see if fairness is preserved in this case with
>>> sparse flows only. Could flent do this?
>>
>> Well, sparse flows are (by definition) not building a queue, so it
>>
Sebastian Moeller writes:
> Hi Toke,
>
>> On Jan 18, 2019, at 14:33, Toke Høiland-Jørgensen wrote:
>>
>> Georgios Amanakis writes:
>>
>>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>>> others think. I want to see if fairness is preserved in this case with
>>> sparse
>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>> others think. I want to see if fairness is preserved in this case with
>> sparse flows only. Could flent do this?
>
> Well, sparse flows are (by definition) not building a queue, so it
> doesn't really make sense to talk
Hi Toke,
> On Jan 18, 2019, at 14:33, Toke Høiland-Jørgensen wrote:
>
> Georgios Amanakis writes:
>
>> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
>> others think. I want to see if fairness is preserved in this case with
>> sparse flows only. Could flent do this?
>
>
Georgios Amanakis writes:
> Yes, exactly. Would be interesting to hear what Jonathan, Toke and
> others think. I want to see if fairness is preserved in this case with
> sparse flows only. Could flent do this?
Well, sparse flows are (by definition) not building a queue, so it
doesn't really
Yes, exactly. Would be interesting to hear what Jonathan, Toke and others
think. I want to see if fairness is preserved in this case with sparse
flows only. Could flent do this?
On Fri, Jan 18, 2019, 5:07 AM Toke Høiland-Jørgensen wrote:
> George Amanakis writes:
>
> > A better version of the patch for
George Amanakis writes:
> A better version of the patch for testing.
So basically, you're changing the host fairness algorithm to only
consider bulk flows instead of all active flows to that host, right?
Seems reasonable to me. Jonathan, any opinion?
-Toke
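The idea Toke summarizes here (count only bulk flows toward host_load, not all active flows) can be illustrated with a toy model. This is not the actual sch_cake code or the patch itself, just a sketch of why the distinction matters when a host's reverse-direction ACK stream shows up as an extra active flow:

```python
# Toy model (not sch_cake itself): under per-host fairness, each of a
# host's bulk flows is effectively weighted by 1/host_load. The question
# is what counts toward host_load: all active flows, or only bulk flows.

def bulk_flow_weight(host_flows, count_bulk_only):
    """host_flows: list of 'bulk'/'sparse' tags for one host's flows.
    Returns the weight applied to each of that host's bulk flows."""
    if count_bulk_only:
        load = max(1, sum(1 for f in host_flows if f == "bulk"))
    else:
        load = max(1, len(host_flows))
    return 1.0 / load

# Host with one bulk download plus the sparse ACK flow of its own upload:
# counting all active flows halves the bulk flow's weight, while counting
# only bulk flows keeps it equal to a host with no reverse traffic.
print(bulk_flow_weight(["bulk", "sparse"], count_bulk_only=False))  # 0.5
print(bulk_flow_weight(["bulk", "sparse"], count_bulk_only=True))   # 1.0
```

In this model the ACK flow no longer dilutes the host's share of bulk capacity, which matches the bidirectional-fairness improvement reported in the tests.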
Thanks for working on it, looks promising! I’d be interested in hearing some
more feedback on whether this is the right approach, but it looks like it from
the experiments. I should be able to put some more testing time into it in a few
days...
> On Jan 16, 2019, at 4:47 AM,
> wrote:
>
> Of course
A better version of the patch for testing.
Setup:
IP{1,2}(flent) <> Router <> Server(netserver)
Router:
tc qdisc add dev enp1s0 root cake bandwidth 100mbit dual-srchost besteffort
tc qdisc add dev enp4s0 root cake bandwidth 100mbit dual-dsthost besteffort
IP1:
Data file written to
The patch I previously sent manipulated host_load only when
dual-dsthost is set, since that was what I was primarily testing. For
dual-srchost to behave the same way, line 2107 has to be changed, too. I will
resubmit later today in case anybody wants to test.
On Tue, Jan 15, 2019, 2:22 PM
I think what is happening here is that if a client has flows such as "a
(bulk upload)" and "b (bulk download)", the incoming ACKs of flow "a"
compete with the incoming bulk traffic of flow "b". By "compete" I mean
in terms of flow selection.
So if we adjust the host_load to be the same with the
> On Jan 3, 2019, at 2:02 PM, Pete Heist wrote:
>
> I tried iperf3 in UDP mode, but cake is treating these flows aggressively. I
> get the impression that cake heavily penalizes flows that do not respond to
> congestion control signals. If I pit 8 TCP flows against a single UDP
> flow at
> On Jan 4, 2019, at 3:08 AM, Georgios Amanakis wrote:
>
> On Thu, 2019-01-03 at 23:06 +0100, Pete Heist wrote:
>> Both have cake at 100mbit only on egress, with dual-srchost on client
>> and dual-dsthost on server. With this setup (and probably previous
>> ones, I just didn’t test it this
> On Jan 3, 2019, at 11:06 PM, Pete Heist wrote:
>
> I’m almost sure I tested this exact scenario, and would not have put 8 up / 8
> down on one IP and 1 up / 1 down on the other, which works with fairness for
> some reason.
I’m going to dial this statement back. I went back through my old
On Thu, 2019-01-03 at 23:06 +0100, Pete Heist wrote:
> Both have cake at 100mbit only on egress, with dual-srchost on client
> and dual-dsthost on server. With this setup (and probably previous
> ones, I just didn’t test it this way), bi-directional fairness with
> these flow counts works:
>
>
I have a simpler setup now to remove some variables, both hosts are APU2 on
Debian 9.6, kernel 4.9.0-8:
apu2a (iperf3 client) <-- default VLAN --> apu2b (iperf3 server)
Both have cake at 100mbit only on egress, with dual-srchost on client and
dual-dsthost on server. With this setup (and
In my previous test the clients communicated with different flent
servers (flent-newark, flent-newark.bufferbloat.net). Iproute2 was
iproute2-ss4.18.0-4-openwrt. I will try to test on the latest 4.20; it will
take some time, though.
I have the feeling we have discussed a similar issue in the past
> On Jan 3, 2019, at 2:20 PM, Toke Høiland-Jørgensen wrote:
>
> Pete Heist writes:
>
>> I’m not sure there’d be any way I can test fairness with iperf3 in UDP
>> mode. We’d need something that has some congestion control feedback,
>> right? Otherwise, I don’t think there are any rates I can
Pete Heist writes:
>> On Jan 3, 2019, at 12:03 PM, Toke Høiland-Jørgensen wrote:
>>
>>> Jon, is there anything I can check by instrumenting the code somewhere
>>> specific?
>>
>> Is there any way you could test with a bulk UDP flow? I'm wondering
>> whether this is a second-order effect where
> On Jan 3, 2019, at 12:03 PM, Toke Høiland-Jørgensen wrote:
>
>> Jon, is there anything I can check by instrumenting the code somewhere
>> specific?
>
> Is there any way you could test with a bulk UDP flow? I'm wondering
> whether this is a second-order effect where TCP ACKs are limited in a
> Jon, is there anything I can check by instrumenting the code somewhere
> specific?
Is there any way you could test with a bulk UDP flow? I'm wondering
whether this is a second-order effect where TCP ACKs are limited in a
way that causes the imbalance? Are you using ACK compression?
-Toke
> On Jan 3, 2019, at 6:18 AM, Jonathan Morton wrote:
>
>> On 3 Jan, 2019, at 6:15 am, Georgios Amanakis wrote:
>>
>> It seems if both clients are having bidirectional traffic, dual-
>> {dst,src}host has the same effect as triple-isolate (on both lan and
>> wan interfaces) on their bandwidth.
> On 3 Jan, 2019, at 6:15 am, Georgios Amanakis wrote:
>
> It seems if both clients are having bidirectional traffic, dual-
> {dst,src}host has the same effect as triple-isolate (on both lan and
> wan interfaces) on their bandwidth.
> This shouldn't happen though, or am I wrong?
If both
It seems that if both clients have bidirectional traffic, dual-
{dst,src}host has the same effect as triple-isolate (on both lan and
wan interfaces) on their bandwidth.
This shouldn't happen though, or am I wrong?
On Wed, 2019-01-02 at 22:57 -0500, Georgios Amanakis wrote:
> I can reproduce
I can reproduce this one to my surprise, too.
I tested on my Comcast connection, with a WRT1900ACS, running openwrt
(r8082-95b3f8ec8d, 4.14.70), with two interfaces br-lan and eth0(wan).
IP1=1 up / 8 down
IP2=4 up / 4 down
src/dst, bidir: IP1=0.88 / 8.44, IP2=0.66 / 7.75 (ok)
In my one-armed router setup I’m seeing host fairness work perfectly with
srchost or dsthost, but with dual-srchost or dual-dsthost, host fairness
deviates from the ideal, _only_ when there's bi-directional traffic. The
deviation is then dependent on the number of flows. Is this expected?
I