I don't think that this feature really hurts TCP.
TCP is robust to that in any case, even with an increase in both the average
RTT and the RTT standard deviation.
And I agree that what is more important is the performance of sparse
flows, which is not affected by this feature.
There is one little thing that might
Jonathan Morton writes:
>>> your solution significantly hurts performance in the common case
>>
>> I'm sorry - did someone actually describe such a case? I must have
>> missed it.
>
> I started this whole thread by pointing out that this behaviour results
> in the delay of the TCP flows scaling with the number of
> If you turn off the AQM entirely for the
> first four packets, it is going to activate when the fifth packet
> arrives, resulting in a tail loss and... an RTO!

That isn't what happens.
First of all, Cake explicitly guards against tail loss by exempting the last
packet in each queue from being
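The guard described above can be sketched as a toy drop decision (my illustration only, not Cake's actual source; I'm assuming the truncated sentence ends with the last packet being exempt from AQM dropping):

```python
from collections import deque

def aqm_should_drop(queue: deque, aqm_signals_drop: bool) -> bool:
    """Toy drop decision with a tail-loss guard: the last remaining
    packet in a queue is never dropped by the AQM, so the flow keeps a
    packet that can elicit an ACK instead of suffering a tail loss
    followed by an RTO."""
    if len(queue) <= 1:          # last packet in this queue: exempt
        return False
    return aqm_signals_drop      # otherwise obey the AQM

# The fifth-packet scenario from the quote: even if the AQM fires,
# a one-packet queue is left alone.
q = deque(["pkt5"])
print(aqm_should_drop(q, aqm_signals_drop=True))  # False
```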
> your solution significantly hurts performance in the common case
I'm sorry - did someone actually describe such a case? I must have missed it.
- Jonathan Morton
___
Cake mailing list
Cake@lists.bufferbloat.net
Jonathan Morton writes:
>>> I'm saying that there's a tradeoff between intra-flow induced latency and
>>> packet loss, and I've chosen 4 MTUs as the operating point.
>>
>> Is there a reason for picking 4 MTUs vs 2 MTUs vs 2 packets, etc?
>
> To be more precise, I'm using a sojourn time equivalent to 4 MTU-sized
> packets per
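Reading "a sojourn time equivalent to 4 MTU-sized packets" as a time target derived from the serialization delay of four MTUs at the shaped rate, a quick sketch (my interpretation of the thread, not Cake's code):

```python
def sojourn_target_s(mtu_bytes: int, rate_bps: float, mtus: int = 4) -> float:
    """Serialization time of `mtus` MTU-sized packets at the link rate.
    Under this reading, the AQM only starts signalling once a packet's
    sojourn time exceeds this value."""
    return mtus * mtu_bytes * 8 / rate_bps

# Example: 1500-byte MTU at 10 Mbit/s -> 4 * 1.2 ms = 4.8 ms
print(f"{sojourn_target_s(1500, 10e6) * 1e3:.1f} ms")  # 4.8 ms
```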
Jonathan Morton writes:
> This is why I think that any fix that tries to solve this problem in the
> queueing system should be avoided. It does not solve the real problem
> (overload) and introduces latency.

Most people, myself included, prefer systems that degrade gracefully instead of
simply failing or rejecting
I think this discussion is about trying to solve an almost impossible
problem.
When the link is overloaded, and this is the case here, there is nothing one
can do with flow queuing or AQM.
It is just too late to do anything useful.
Overload means that the number of active backlogged flows is
On Wed, 18 Apr 2018, Jonathan Morton wrote:
> I'm saying that there's a tradeoff between intra-flow induced latency and
> packet loss, and I've chosen 4 MTUs as the operating point.

Is there a reason for picking 4 MTUs vs 2 MTUs vs 2 packets, etc?
> On 18 Apr, 2018, at 6:17 pm, Sebastian Moeller wrote:
>
> Just a thought, in egress mode in the typical deployment we expect, the
> bandwidth leading into cake will be >> than the bandwidth out of cake, so I
> would argue that the packet droppage might be acceptable on
>>> So if there is one active bulk flow, we allow each flow to queue four
>>> packets. But if there are ten active bulk flows, we allow *each* flow to
>>> queue *40* packets.
>>
>> No - because the drain rate per flow scales inversely with the number
>> of flows, we have to wait for 40 MTUs'
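The scaling in this exchange can be made concrete. Assuming a fixed time target worth 4 MTUs at the full link rate, and a per-flow drain rate of R/N under fair queuing, each flow can hold N x 4 MTUs before the target is exceeded (a sketch of the argument, not of any actual implementation):

```python
def mtus_before_target(n_flows: int, target_mtus: int = 4) -> int:
    """With a time target worth `target_mtus` MTUs at the full rate R,
    and each of n_flows draining at R / n_flows, a single flow can
    queue n_flows * target_mtus MTUs before its sojourn time exceeds
    the target."""
    return n_flows * target_mtus

print(mtus_before_target(1))   # 4  -> one bulk flow queues 4 packets
print(mtus_before_target(10))  # 40 -> ten bulk flows queue 40 packets each
```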
Jonas Mårtensson writes:
> On Wed, Apr 18, 2018 at 1:25 PM, Toke Høiland-Jørgensen
> wrote:
>
>> Toke Høiland-Jørgensen writes:
>>
>>> Jonathan Morton writes:
>>>
>>>>> On 17 Apr, 2018, at 12:42 pm, Toke Høiland-Jørgensen wrote:
>>>>>
>>>>> - The TCP RTT of
I will check that later; I am still unsure.
First guess: the quantum component should influence only how close you are to
a fluid bit-wise approximation.
So Cake gets closer by automatic adjustment.
The computation of the correction factor should be done by computing the
probability that a packet of
Luca Muscariello writes:
> I'm not sure that the quantum correction factor is correct.
No, you're right, there's an off-by-one error. It should be:
R_s < R / ((L/L_s)(N+1) + 1)
-Toke
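Plugging numbers into the corrected inequality (a sketch; the variable meanings are my reading of the thread: R the link rate, L the bulk/MTU packet size, L_s the sparse-flow packet size, N the number of backlogged bulk flows):

```python
def sparse_rate_bound(R: float, L: float, L_s: float, N: int) -> float:
    """Upper bound on the rate R_s at which a flow can send and still
    be scheduled as sparse:  R_s < R / ((L / L_s) * (N + 1) + 1)."""
    return R / ((L / L_s) * (N + 1) + 1)

# Example: 10 Mbit/s link, 1500-byte bulk packets, 100-byte sparse
# packets, 10 bulk flows -> roughly 60 kbit/s
print(f"{sparse_rate_bound(10e6, 1500, 100, 10):.0f} bit/s")
```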
Jonathan Morton writes:
>> On 17 Apr, 2018, at 12:42 pm, Toke Høiland-Jørgensen wrote:
>>
>> - The TCP RTT of the 32 flows is *way* higher for Cake. FQ-CoDel
>> controls TCP flow latency to around 65 ms, while for Cake it is all
>> the way up around the
I'm not sure that the quantum correction factor is correct.
Search max-min rates calculations.

On Tue, Apr 17, 2018 at 2:22 PM, Toke Høiland-Jørgensen <t...@toke.dk>
wrote:
> Y via Cake <cake@lists.bufferbloat.net> writes:
>
> > From: Y <intruder_t...@yahoo.fr>
> > Subject: Re: [Cake] A few puzzling Cake results
> > To: cake@lists.bufferbloat.net
Y via Cake <cake@lists.bufferbloat.net> writes:
> From: Y <intruder_t...@yahoo.fr>
> Subject: Re: [Cake] A few puzzling Cake results
> To: cake@lists.bufferbloat.net
> Date: Tue, 17 Apr 2018 21:05:12 +0900
>
> Hi.
>
> > Any certain formula of fq_codel flow numb