Hi Michael,

See below.

On 24.07.22 at 12:12 Michael Welzl wrote:
On Jul 24, 2022, at 3:03 AM, Toerless Eckert <[email protected]> wrote:

Inline

On Sat, Jul 23, 2022 at 08:10:43PM -0400, Stuart Cheshire wrote:
I feel that in retrospect the name “congestion control” was a poor choice. Too 
often when I talk to people building their own home-grown transport protocol on 
top of UDP, and I ask them what congestion control algorithm they use, they 
smile smugly and say, “We don’t need congestion control.” They explain that 
their protocol won’t be used on congested networks.

I agree SO strongly !!!!!
The main problem of congestion control these days appears to be that networks 
are mostly underutilized (see the thread on ICCRG I started by pointing at our 
ComMag paper) - the issue is to increase the rate as quickly as possible, 
without producing congestion.

Now, what is congestion in this context? Packet loss or too high queuing delay?

It should really be called “rate control”.
It’s about a sending rate - whether that is indirectly achieved by controlling 
a window or explicitly by changing a rate in bits per second doesn’t really 
matter.

Yes, one would assume that, but technically it does matter.
Typically, you want to control both the sending rate _and_ the amount
of inflight data. Window-based approaches have the advantage that they
are somewhat self-stabilizing, due to the implicit rate feedback from
the growing effective RTT (induced by queueing delay).
Moreover, if your congestion window is estimated too large, you just
get a constant amount of excess data in the bottleneck queue, whereas
if your sending rate is too large, the amount of inflight data grows
over time, and with it the excess data in the bottleneck queue.
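This difference can be seen in a toy fluid model of a single bottleneck (the capacity, BDP, and 20-packet error are illustrative numbers of mine, not from the thread):

```python
# Toy fluid model: one bottleneck, compared over a number of RTTs.
C = 100      # bottleneck capacity: packets drained per RTT
BDP = 100    # bandwidth-delay product in packets
RTTS = 10

# Window-based sender whose cwnd is estimated 20 packets too large:
# in steady state exactly (cwnd - BDP) packets sit in the queue,
# every RTT, no matter how long the flow runs.
cwnd = BDP + 20
window_queue = [cwnd - BDP for _ in range(RTTS)]

# Rate-based sender sending 20 packets per RTT above capacity:
# the excess accumulates in the queue without bound.
rate = C + 20
rate_queue, backlog = [], 0
for _ in range(RTTS):
    backlog += rate - C
    rate_queue.append(backlog)

print(window_queue)  # constant excess: [20, 20, ..., 20]
print(rate_queue)    # growing excess:  [20, 40, ..., 200]
```

The same 20-unit estimation error is bounded in the window case but compounds every RTT in the rate case.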
Even if a rate-based algorithm knew and picked the perfect rate, it
could still cause instability at the bottleneck (cf. queueing theory
at rho = 1). Controlling the sending rate is thus harder and more
fragile.
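The rho = 1 point can be made concrete with the standard M/M/1 result, where the mean number in the system is rho / (1 - rho) and so diverges as the offered load approaches capacity:

```python
# Mean number in an M/M/1 system: L = rho / (1 - rho).
# As rho -> 1, L diverges: even a "perfect" rate equal to capacity
# is unstable once arrivals are stochastic rather than fluid.
def mm1_mean_in_system(rho: float) -> float:
    assert 0 <= rho < 1, "M/M/1 is only stable for rho < 1"
    return rho / (1 - rho)

for rho in (0.5, 0.9, 0.99, 0.999):
    print(rho, mm1_mean_in_system(rho))  # 1, 9, 99, 999 packets on average
```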
Therefore, I'm a strong proponent of a window-based approach that
additionally uses pacing to avoid micro-bursts and to cope with
unsteady/distorted ACK feedback. The other way around would also
work, and that's how BBRv2 currently approaches the problem: a
rate-based sender that also limits the amount of inflight data.
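A minimal sketch of what "window plus pacing" means (the function, its numbers, and the pacing-gain parameter are my own illustration, loosely modeled on how pacing rates are commonly derived from cwnd/SRTT):

```python
# Hypothetical sketch: pacing spreads the congestion window evenly
# over the RTT instead of bursting it out ack-by-ack.
def pacing_interval(cwnd_bytes: int, srtt_s: float,
                    mss: int = 1460, gain: float = 1.0) -> float:
    """Seconds between MSS-sized packets so that roughly cwnd_bytes
    are sent per srtt_s, scaled by a pacing gain."""
    pacing_rate = gain * cwnd_bytes / srtt_s  # bytes per second
    return mss / pacing_rate

# cwnd = 14600 bytes (10 MSS), SRTT = 100 ms:
# 10 packets paced evenly => one packet every 10 ms
interval = pacing_interval(14600, 0.1)
print(round(interval * 1000, 1))  # 10.0 (ms)
```

The window still bounds inflight data, while the pacer decouples the send pattern from the (possibly bursty or stretched) ACK arrivals.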

Regards,
 Roland
