I don't think that this feature really hurts TCP.
TCP is robust to that in any case, even if there is an increase in average RTT
and in RTT standard deviation.
And, I agree that what is more important is the performance of sparse
flows, which is not affected by this feature.
There is one little thing that might
>>> your solution significantly hurts performance in the common case
>>
>> I'm sorry - did someone actually describe such a case? I must have
>> missed it.
>
> I started this whole thread by pointing out that this behaviour results
> in the delay of the TCP flows scaling with the number of active flows.
> If you turn off the AQM entirely for the first four packets, it is going
> to activate when the fifth packet arrives, resulting in a tail loss
> and... an RTO!
That isn't what happens.
First of all, Cake explicitly guards against tail loss by exempting the last
packet in each queue from being dropped.
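For a feel of what that guard amounts to, here is a rough sketch in C (assuming
a simplified per-flow queue; the names are illustrative and not the real
sch_cake identifiers): the drop decision simply refuses to touch the only
packet left in a flow's queue, so the flow always delivers something and the
sender can recover from ACK feedback rather than waiting for an RTO.

    #include <stdbool.h>
    #include <stddef.h>

    /* Simplified per-flow queue state (illustrative, not sch_cake's structs). */
    struct flow {
        size_t backlog_pkts;   /* packets currently queued for this flow */
    };

    /* Stand-in for the AQM verdict (COBALT in Cake), computed elsewhere. */
    static bool aqm_wants_drop(const struct flow *f)
    {
        (void)f;
        return true;           /* assume the AQM has signalled "drop" */
    }

    /*
     * Tail-loss guard: never drop the last packet remaining in a flow's
     * queue.  The final packet always goes out, so the receiver keeps
     * producing ACKs and the sender does not have to fall back on a
     * retransmission timeout to notice the loss.
     */
    static bool should_drop(const struct flow *f)
    {
        if (f->backlog_pkts <= 1)
            return false;      /* exempt the last packet */
        return aqm_wants_drop(f);
    }

    int main(void)
    {
        struct flow f = { .backlog_pkts = 1 };
        return should_drop(&f) ? 1 : 0;   /* returns 0: the lone packet is kept */
    }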
Jonathan Morton writes:
>> your solution significantly hurts performance in the common case
>
> I'm sorry - did someone actually describe such a case? I must have
> missed it.
I started this whole thread by pointing out that this behaviour results
in the delay of the TCP flows scaling with the
> your solution significantly hurts performance in the common case
I'm sorry - did someone actually describe such a case? I must have missed it.
- Jonathan Morton
> On 18 Apr 2018, at 19:16, Kevin Darbyshire-Bryant via Cake
> wrote:
>
> I know this can be writted betterrer but I think this is the sort of thing
> we’re pondering over?
>
> https://github.com/ldir-EDB0/sch_cake/commit/334ae4308961e51eb6ad0d08450cdcba558ef4e3
>
> Wa
> This is why I think that any fix that tries to solve this problem in the
> queueing system should be avoided. It does not solve the real problem
> (overload) and introduces latency.
Most people, myself included, prefer systems that degrade gracefully instead of
simply failing or rejecting new
I think that this discussion is about trying to solve an almost impossible
problem.
When the link is overloaded, and this is the case here, there is nothing one
can do with flow queuing or AQM.
It is simply too late to do anything useful.
Overload means that the number of active backlogged flows is
>> I'm saying that there's a tradeoff between intra-flow induced latency and
>> packet loss, and I've chosen 4 MTUs as the operating point.
>
> Is there a reason for picking 4 MTUs vs 2 MTUs vs 2 packets, etc?
To be more precise, I'm using a sojourn time equivalent to 4 MTU-sized packets
per bu
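To get a feel for that operating point in time terms (assuming 1514-byte
MTU-sized packets, purely for illustration): 4 * 1514 B * 8 ~= 48 kbit, which
corresponds to a sojourn time of roughly 4.8 ms at 10 Mbit/s and roughly 48 ms
at 1 Mbit/s, so the same 4-MTU floor is far more visible at low shaped rates.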
On Wed, 18 Apr 2018, Jonathan Morton wrote:
I'm saying that there's a tradeoff between intra-flow induced latency and
packet loss, and I've chosen 4 MTUs as the operating point.
Is there a reason for picking 4 MTUs vs 2 MTUs vs 2 packets, etc?
Dave, in the thread referenced earlier that led to this change you said:
"The loss of throughput here compared to non-ingress mode is a blocker for
mainlining and for that matter, wedging this into lede."
I'm curious, what would the latency be in Toke's experiment with
non-ingress mode and with t
Jonathan:
I think you are wrong. What we care about is keeping packets in flight
across the network, with a queue length as close to 1 packet as
possible.
If it breaks ingress mode, so be it.
On Wed, Apr 18, 2018 at 9:54 AM, Jonathan Morton wrote:
>> On 18 Apr, 2018, at 7:11 pm, Toke Høiland-Jø
> On 18 Apr, 2018, at 7:11 pm, Toke Høiland-Jørgensen wrote:
>
> What you're saying here is that you basically don't believe there are
> any applications where a bulk TCP flow would also want low queueing
> latency? :)
I'm saying that there's a tradeoff between intra-flow induced latency and
packet loss, and I've chosen 4 MTUs as the operating point.
Would making it active only for the 'ingress' mode be an option?
Otherwise it has to be documented that when using ingress mode with lots of
bulk flows on <20mbit/s the actual goodput is going to be less than the set
one (eg for 9.8mbit/s set, 5.3 mbit/s actual).
On Wed, Apr 18, 2018, 12:25 PM Da
I would like to revert this change.
On Wed, Apr 18, 2018 at 9:11 AM, Toke Høiland-Jørgensen wrote:
> Jonathan Morton writes:
>
>>> On 18 Apr, 2018, at 6:17 pm, Sebastian Moeller wrote:
>>>
>>> Just a thought, in egress mode in the typical deployment we expect,
>>> the bandwidth leading into cak
> On 18 Apr, 2018, at 6:17 pm, Sebastian Moeller wrote:
>
> Just a thought, in egress mode in the typical deployment we expect, the
> bandwidth leading into cake will be >> than the bandwidth out of cake, so I
> would argue that the package droppage might be acceptable on egress as there
> is
> On Apr 18, 2018, at 17:03, Jonathan Morton wrote:
> [...]
>>> Without that, we can end up with very high drop rates which, in
>>> ingress mode, don't actually improve congestion on the bottleneck link
>>> because TCP can't reduce its window below 4 MTUs, and it's having to
>>> retransmit all t
>>> So if there is one active bulk flow, we allow each flow to queue four
>>> packets. But if there are ten active bulk flows, we allow *each* flow to
>>> queue *40* packets.
>>
>> No - because the drain rate per flow scales inversely with the number
>> of flows, we have to wait for 40 MTUs' seria
> On 18 Apr, 2018, at 2:25 pm, Toke Høiland-Jørgensen wrote:
>
> So if there is one active bulk flow, we allow each flow to queue four
> packets. But if there are ten active bulk flows, we allow *each* flow to
> queue *40* packets.
No - because the drain rate per flow scales inversely with the number of
flows, we have to wait for 40 MTUs' serialisation.
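To put rough numbers on that (taking the 10 Mbit/s rate mentioned elsewhere in
the thread and 1514-byte packets, purely for illustration): with ten bulk flows
each flow drains at about 1 Mbit/s, so 4 MTUs per flow is 4 * 1514 B * 8 /
1 Mbit/s ~= 48 ms of intra-flow queueing; equivalently, 40 MTUs take about
48 ms to serialise at the full 10 Mbit/s link rate. Either way, the intra-flow
delay grows with the number of active bulk flows.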
I will check that later, still unsure.
First guess: the quantum component should influence only how close to a
fluid bit-wise approximation you are.
So cake gets closer by automatic adjustment.
The computation of the correction factor should be done by computing the
probability that a packet
of a
Luca Muscariello writes:
> I'm not sure that the quantum correction factor is correct.
No, you're right, there's an off-by-one error. It should be:
R_s < R / ((L/L_s)(N+1) + 1)
-Toke
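To put rough numbers on the corrected formula (packet sizes chosen purely for
illustration): with N = 32 bulk flows of L = 1514 bytes and a sparse flow
sending L_s = 200-byte packets, R_s < R / ((1514/200)(32+1) + 1) ~= R / 251,
which on a 10 Mbit/s link is about 40 kbit/s. Under these assumptions a flow
has to stay below roughly 40 kbit/s to keep getting the sparse-flow treatment.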
> On 17 Apr, 2018, at 12:42 pm, Toke Høiland-Jørgensen wrote:
>
> - The TCP RTT of the 32 flows is *way* higher for Cake. FQ-CoDel
> controls TCP flow latency to around 65 ms, while for Cake it is all
> the way up around the 180ms mark. Is the Codel version in Cake too
> lenient, or what is go
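As a rough cross-check of those numbers (assuming 1514-byte packets and the
10 Mbit/s, 32-flow setup discussed elsewhere in the thread): 32 flows with a
4-MTU floor each is 128 packets of standing queue, and 128 * 1514 B * 8 /
10 Mbit/s ~= 155 ms of queueing delay on top of the base RTT, which is in the
right ballpark for the ~180 ms observed with Cake.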
I'm not sure that the quantum correction factor is correct.
On Tue, Apr 17, 2018 at 2:22 PM, Toke Høiland-Jørgensen
wrote:
> Y via Cake writes:
>
> > From: Y
> > Subject: Re: [Cake] A few puzzling Cake results
> > To: cake@lists.bufferbloat.net
> > Da
Search for the max-min rates calculations.
On Tue, Apr 17, 2018 at 2:22 PM, Toke Høiland-Jørgensen
wrote:
> Y via Cake writes:
>
> > From: Y
> > Subject: Re: [Cake] A few puzzling Cake results
> > To: cake@lists.bufferbloat.net
> > Date: Tue, 17 Apr 2018 21:05:12 +0900
Y via Cake writes:
> From: Y
> Subject: Re: [Cake] A few puzzling Cake results
> To: cake@lists.bufferbloat.net
> Date: Tue, 17 Apr 2018 21:05:12 +0900
>
> Hi.
>
> Any certain formula of fq_codel flow number?
Well, given N active bulk flows with packet size L, and
Hi.
Any certain formula of fq_codel flow number?
yutaka.
On Tue, 17 Apr 2018 12:38:45 +0200
Toke Høiland-Jørgensen wrote:
> Luca Muscariello writes:
>
> > 10Mbps/32 ~= 300kbps
> >
> > Does the VoIP stream use more than that 300kbps?
> > In the ideal case as long as the s
10Mbps/32 ~= 300kbps
Does the VoIP stream use more than that 300kbps?
In the ideal case as long as the sparse flow has a rate which is lower than
the fair rate
the optimization should work. Otherwise the optimization might not be as close
to ideal as possible.
Luca
On Tue, Apr 17, 2018 at 11:42 A
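For reference, a typical G.711 VoIP stream carries 64 kbit/s of payload and,
with 20 ms packetisation, comes to on the order of 90-100 kbit/s on the wire
once RTP/UDP/IP and link-layer overhead are added, so it should sit comfortably
below a ~300 kbit/s fair share and stay in the sparse-flow regime.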