LGTM!!! I hope you plan to submit this somewhere; USENIX, perhaps?

On Wed, Oct 12, 2022 at 10:35 AM Maximilian Bachl
<[email protected]> wrote:
>
> Building upon the ideas and advice I received, I simplified the whole concept 
> and updated the preprint (https://arxiv.org/abs/2206.10561). The new approach 
> is somewhat similar to what you propose in point 3). The true negative rate 
> (correctly detecting the absence of FQ) is now >99%; the true positive rate 
> (correctly detecting the presence of FQ, i.e. fq_codel and fq) is >95%. It can also 
> detect if the bottleneck link changes during a flow from FQ to non-FQ and 
> vice versa.
>
> A new concept is that, if there's FQ, each application can independently 
> choose its maximum allowed delay. A cloud gaming application might choose not to 
> allow more than 5 ms to keep latency minimal, while a video chat application 
> might allow 25 ms to achieve higher throughput. Thus, each application can 
> choose its own tradeoff between throughput and delay. Also, applications can 
> measure how large the base delay is and, if the base delay is very low 
> (because the other host is close by), they can allow more queuing delay. For 
> example, if the base delay between two hosts is just 5 ms, it could be ok to 
> add another 45 ms of queuing to have a combined delay of 50 ms. Because the 
> allowed queuing delay is quite high, throughput is maximized.
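The delay-budget arithmetic described above can be sketched in a few lines. This is only an illustration of the idea from the mail; the function name, the 5 ms floor, and the numbers are my own, not from the preprint:

```python
def allowed_queuing_delay_ms(base_delay_ms: float,
                             total_delay_target_ms: float,
                             floor_ms: float = 5.0) -> float:
    """Queuing-delay budget an application could allow under FQ:
    total-delay target minus the measured base delay, with a small
    floor so some queuing (and thus throughput) is always permitted."""
    return max(total_delay_target_ms - base_delay_ms, floor_ms)

# Cloud gaming: keep latency minimal even to a nearby host.
print(allowed_queuing_delay_ms(base_delay_ms=5, total_delay_target_ms=10))  # 5.0
# Nearby hosts, 50 ms total acceptable: allow 45 ms of queuing,
# maximizing throughput (the example from the mail above).
print(allowed_queuing_delay_ms(base_delay_ms=5, total_delay_target_ms=50))  # 45.0
```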
>
>
>
> On Sun, Jul 3, 2022 at 4:49 PM Dave Taht <[email protected]> wrote:
>>
>> Hey, good start to my saturday!
>>
>> 1) Apple's fq_"codel" implementation did not actually implement the
>> codel portion of the algorithm when I last checked last year. Doesn't
>> matter what you set the target to.
>>
>> 2) fq_codel has a detectable (IMHO; I have not tried it) phase where the
>> "sparse flow optimization" allows non-queue-building flows to bypass the
>> queue-building flows entirely. See attached. fq-pie has this, too. Cake
>> also has it, but with the addition of per-host FQ.
>>
>> However, detecting it requires sending packets on an interval smaller
>> than the codel quantum. Most (all!?) TCP implementations, even the
>> paced ones, send two 1514-byte packets back to back, so you get an ack
>> back on servicing either the first or the second one. Sending individual
>> TCP packets paced, and bunching them up selectively, should also oscillate
>> around the queue width (width = the number of queue-building flows;
>> depth = the depth of the queue). The codel quantum defaults to 1514
>> bytes but is frequently autoscaled to less at low bandwidths.
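One way to read the quantum constraint above (my interpretation, not anything from the mail): at a given bottleneck rate, one quantum of bytes takes a fixed time to serialize, and probe spacing has to be compared against that time. A hypothetical helper:

```python
def quantum_serialization_us(quantum_bytes: int, rate_mbps: float) -> float:
    """Microseconds needed to transmit one codel quantum at the
    bottleneck rate; bytes * 8 bits / (Mbit/s) conveniently yields us."""
    return quantum_bytes * 8 / rate_mbps

# Default 1514-byte quantum at a 100 Mbit/s bottleneck:
print(quantum_serialization_us(1514, 100.0))      # 121.12
# Two back-to-back 1514-byte packets exceed one quantum, which is why
# standard TCP's 2-packet bursts can't resolve the sparse-flow phase.
print(quantum_serialization_us(2 * 1514, 100.0))  # 242.24
```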
>>
>> 3) It is also possible, (IMHO), to send a small secondary flow
>> isochronously as a "clock" and observe the width and depth of the
>> queue that way.
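A sketch of what the analysis side of such a "clock" flow could look like: timestamped probes go out at a fixed interval, and the queuing-delay component is read off the receive times. Mapping the resulting delay plateaus to queue width and depth is my reading of point 3); the code below only extracts the delay series:

```python
def probe_queuing_delays(send_times_ms, recv_times_ms):
    """One-way delay of each probe minus the minimum observed delay,
    i.e. the queuing-delay component relative to the base delay."""
    raw = [r - s for s, r in zip(send_times_ms, recv_times_ms)]
    base = min(raw)
    return [d - base for d in raw]

# Synthetic example: probes every 1 ms over a 10 ms base-delay path while
# a competing flow gradually builds 0, 2, then 4 ms of queue.
send = [float(i) for i in range(6)]
recv = [s + 10 + q for s, q in zip(send, [0, 0, 2, 2, 4, 4])]
print(probe_queuing_delays(send, recv))  # [0.0, 0.0, 2.0, 2.0, 4.0, 4.0]
```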
>>
>> 4) You can use an RFC 3168-compliant fq_codel implementation to send
>> back a CE, which is (presently) a fairly reliable signal of fq_codel
>> on the path. A reduction in *pacing* different from the RFC 3168
>> behavior (a reduction by half) would be interesting.
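One conceivable fingerprint to go with the CE signal in point 4) (my sketch, not something the mail proposes): CoDel spaces consecutive drops/marks at interval/sqrt(count) while the queue stays above target (RFC 8289), so the gaps between CE marks should shrink along that curve. The 100 ms default interval is CoDel's:

```python
import math

def codel_mark_gaps_ms(n_marks: int, interval_ms: float = 100.0):
    """Gap before the k-th consecutive CoDel drop/mark while the queue
    stays above target: interval / sqrt(k), per CoDel's control law."""
    return [interval_ms / math.sqrt(k) for k in range(1, n_marks + 1)]

print([round(g, 1) for g in codel_mark_gaps_ms(4)])  # [100.0, 70.7, 57.7, 50.0]
```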
>>
>> Thx for this today! A principal observation of the BBR paper was that
>> you cannot measure latency and bandwidth *at the same time* in a
>> single flow, and your showing, in an FQ'd environment, that you can is
>> something I don't remember seeing elsewhere (but I'm sure someone will
>> correct me).
>>
>> On Sun, Jul 3, 2022 at 7:16 AM Maximilian Bachl via Bloat
>> <[email protected]> wrote:
>> >
>> > Hi Sebastian,
>> >
>> > Thank you for your suggestions.
>> >
>> > Regarding
>> > a) I slightly modified the algorithm to make it work better with the small 
>> > 5 ms threshold. I updated the paper on arXiv; it should be online by 
>> > Tuesday morning Central European Time. Detection accuracy for Linux's 
>> > fq_codel is quite high (high 90s) but it doesn't work that well with small 
>> > bandwidths (<=10 Mbit/s).
>> > b) that's a good suggestion. I'm thinking about how best to do it, since 
>> > every experiment with every RTT/bandwidth combination was repeated, and 
>> > I'm not sure how to make a CDF that includes the RTTs/bandwidths and the 
>> > repetitions.
>> > c) I guess for every experiment with pfifo, the resulting accuracy is a 
>> > true negative rate, while for every experiment with fq* the resulting 
>> > accuracy is a true positive rate. I updated the paper to include these 
>> > terms to make it clearer. Summarizing, the true negative rate is 100%, the 
>> > true positive rate for fq is >= 95% and for fq_codel it's also in that 
>> > range except for low bandwidths.
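The bookkeeping in point c) in miniature; the run counts below are made up purely to illustrate how the two rates are computed, they are not the paper's data:

```python
def rate(correct_flags):
    """Fraction of experiments where the detector was right."""
    return sum(correct_flags) / len(correct_flags)

# pfifo experiments contribute to the true negative rate (no FQ present);
# fq/fq_codel experiments contribute to the true positive rate.
pfifo_runs = [True] * 20              # all correct -> TNR = 1.0
fq_runs    = [True] * 19 + [False]    # one miss    -> TPR = 0.95
print(rate(pfifo_runs), rate(fq_runs))  # 1.0 0.95
```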
>> >
>> > In case you're interested in reliable FQ detection but not in the 
>> > combination of FQ detection and congestion control, I co-authored another 
>> > paper that uses a different FQ detection method, one that is more robust 
>> > but has the disadvantage of causing packet loss (Detecting Fair Queuing 
>> > for Better Congestion Control, https://arxiv.org/abs/2010.08362).
>> >
>> > Regards,
>> > Max
>> > _______________________________________________
>> > Bloat mailing list
>> > [email protected]
>> > https://lists.bufferbloat.net/listinfo/bloat
>>
>>
>>
>> --
>> FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
>> Dave Täht CEO, TekLibre, LLC



-- 
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC
