That technique seems interesting, but are they addressing the right
/problem/?
They note that delay-based congestion control keeps queues shorter, but
that the older loss-based control out-competes it when the two share a
queue, so they propose falling back to loss-based control in that case.
Logically, I'd prefer delay-based control to detect when the other end
is transmitting aggressively, try to signal it to slow down using its
preferred signalling, and if that fails, beat the aggressor over the
head with packet drops until it starts to behave itself (;-))
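Roughly, in sketch form (a hypothetical escalation policy of my own, with
made-up thresholds and flow state; nothing from the paper):

from dataclasses import dataclass

@dataclass
class Flow:
    queue_delay_ms: float   # smoothed queueing delay attributed to this flow
    ecn_capable: bool       # the sender advertised ECN
    already_marked: bool    # we already asked it, via ECN, to back off

DELAY_TARGET_MS = 5.0       # invented target, in the spirit of CoDel's 5 ms

def police(flow: Flow) -> str:
    """Decide what to do with the next packet of this flow."""
    if flow.queue_delay_ms <= DELAY_TARGET_MS:
        return "forward"    # behaving itself
    if flow.ecn_capable and not flow.already_marked:
        return "ecn-mark"   # ask politely, using its preferred signalling
    return "drop"           # the blunt instrument, until it backs off

print(police(Flow(queue_delay_ms=12.0, ecn_capable=True, already_marked=False)))  # ecn-mark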
--dave
On 2020-10-25 11:06 a.m., Toke Høiland-Jørgensen via Bloat wrote:
This popped up in my Google Scholar mentions:
https://arxiv.org/pdf/2010.08362
It proposes using a delay-based CC when FQ is present and a loss-based
one when it isn't. It has a fairly straightforward mechanism for detecting
an FQ bottleneck: start two flows, one sending at 2x the rate of the
other, keep increasing their sending rates until both suffer losses, and
observe the goodput ratio at that point. If it's ~1:1 there's FQ,
otherwise there isn't.
They cite 98% detection accuracy using netns-based tests and sch_fq.
-Toke
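For concreteness, a minimal sketch of the decision rule described above
(my own illustration; the function name, tolerance and goodput numbers
are invented, not the paper's code):

def fq_detected(goodput_a: float, goodput_b: float, tolerance: float = 0.15) -> bool:
    """The two probe flows were sent at a 2:1 rate ratio and ramped up
    until both saw losses; goodput_a and goodput_b are their goodputs at
    that point. Under an FQ bottleneck the scheduler should have
    equalized them (~1:1); under a shared FIFO the ~2:1 ratio persists."""
    ratio = max(goodput_a, goodput_b) / min(goodput_a, goodput_b)
    return ratio <= 1.0 + tolerance

# Example numbers only:
print(fq_detected(9.9e6, 10.2e6))   # ~1:1  -> True, FQ bottleneck
print(fq_detected(13.4e6, 6.7e6))   # ~2:1  -> False, plain FIFO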
--
David Collier-Brown, | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
[email protected] | -- Mark Twain
_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat