Hi Stuart,

I tend to agree with your point of view, and I'd like to point out that
I initiated a discussion along these lines in 2019:
https://mailarchive.ietf.org/arch/msg/iccrg/TrCq8Qs-eOd8g__8sOtZxxVDtbA/

Indeed, congestion control tries to let the bottleneck work in a state
of congestion. On the one hand, as soon as there are packets in the
bottleneck queue we have, strictly speaking, an overload condition. On
the other hand, this state is desirable, because otherwise the bottleneck
is not saturated (assuming the senders are not application-limited) and
capacity is wasted. That's why we argued that the amount of queueing
delay is an indicator of the severity of congestion. So yes, current
congestion control and avoidance algorithms deliberately overload the
bottleneck to a certain extent, and thus do create congestion, while
trying to prevent more severe overload situations that would mean even
more congestion and, eventually, performance degradation.

Personally, I would even prefer to waste 1-2% of the bottleneck capacity
rather than suffer a perceptible queueing delay. Many of the earlier
approaches always tried to optimize for throughput, e.g., by making
tail-drop buffers large, whereas nowadays latency is much more important
in many cases. So "throughput optimizer" as a notion might also be too
narrow; "performance optimizer" would capture the queueing-delay aspect
as well. Typically, the algorithms need to balance three goals: keeping
the bottleneck utilization high, keeping the queueing delay low, and
achieving fairness. All three are inter-dependent.
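The idea of using queueing delay as a severity indicator could be sketched as follows. This is a hypothetical illustration, not code from any actual algorithm: the names, the 15 ms target, and the three-level classification are all assumptions made here for the example; the one real assumption it encodes is that RTT above the propagation ("base") RTT approximates standing queueing delay at the bottleneck.

```python
# Hypothetical sketch: treating queueing delay as an indicator of
# congestion severity. "base_rtt_ms" approximates the propagation delay;
# anything measured above it is attributed to queueing at the bottleneck.
# The 15 ms target and the labels are illustrative choices, not a spec.

def congestion_severity(rtt_sample_ms: float, base_rtt_ms: float,
                        target_ms: float = 15.0) -> str:
    """Classify congestion severity from the estimated queueing delay."""
    queueing_delay = max(0.0, rtt_sample_ms - base_rtt_ms)
    if queueing_delay == 0.0:
        return "idle"    # no standing queue: the link may be underutilized
    if queueing_delay <= target_ms:
        return "mild"    # a queue exists, so the bottleneck is saturated
    return "severe"      # queue well beyond target: back off harder

print(congestion_severity(25.0, 20.0))  # -> mild (5 ms of queueing delay)
```

Note how "idle" is not automatically the best state: per the argument above, an empty queue can mean wasted capacity, while a small bounded queue means the link is fully used.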

Regards,
 Roland

On 24.07.22 at 02:10 Stuart Cheshire wrote:
On 30 Jun 2022, at 18:49, Martin Duke <[email protected]> wrote:

* The Transport ADs are going to consider chartering a new IETF working group 
to update the administrative framework for congestion control standardization, 
and potentially adopt any proposals that are sufficiently mature for the 
standards track. We've formulated a proposed charter. Please consider it a 
starting point for discussion.
https://github.com/martinduke/congestion-control-charter/
I ask you to review this document before attending. It answers many of the 
questions you may already have.

I support this work.

Congestion control algorithms apply to *any* flow of packets through a 
connectionless datagram network, not just TCP connections.

While we’re considering starting this new work, I’m wondering if we should 
consider a new name too.

I feel that in retrospect the name “congestion control” was a poor choice. Too 
often when I talk to people building their own home-grown transport protocol on 
top of UDP, and I ask them what congestion control algorithm they use, they 
smile smugly and say, “We don’t need congestion control.” They explain that 
their protocol won’t be used on congested networks.

This is the problem with the terminology “congestion control”. When most people 
hear the term “network congestion” they assume they know what that means, by 
analogy to other kinds of congestion, like rush-hour traffic on the roads or a 
busy travel weekend at the airport. They assume, by analogy, that network 
congestion is a rare event that occurs on an occasional Friday night when 
everybody is watching streaming video, and there’s nothing they can do about 
that. They assume that congestion is the network’s fault and the network should 
fix it by having more capacity.

In reality, in the networking context, congestion control means sending data 
exactly as fast as the bottleneck link can carry it. If you send slower than 
the bottleneck link then you leave capacity unused, which is wasteful. If you 
send faster than the bottleneck link then the excess packets have to be 
discarded, which is wasteful. And the way that Reno or CUBIC do this is to 
occasionally send a little too fast, and then respond to the packet drops (or 
ECN marks) by slowing down. If we define “congestion” as the bottleneck link 
being at 100% utilization, then the job of Reno or CUBIC “congestion control” 
is to ensure that they create “congestion” in the network. If you upload a 
video to social media from your smartphone and your upstream Internet service 
is 10Mb/s, then you expect your smartphone to send at 10Mb/s. If you upgrade to 
20Mb/s, then you expect your smartphone to send at 20Mb/s. If you upgrade to 
30Mb/s, then you expect your smartphone to send at 30Mb/s. You expect your 
smartphone to drive your upstream link to the point of congestion and keep it 
there until the data transfer is complete. If it didn’t, you’d complain that 
you’re not getting the rate that you’re paying for.
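The probe-and-back-off behaviour described above can be sketched, under simplified assumptions, as a bare AIMD (additive-increase, multiplicative-decrease) loop. This is roughly the shape of Reno's congestion avoidance; it is not Reno itself (no slow start, no real CUBIC growth curve), and the function name and parameters here are made up for illustration.

```python
# Hypothetical AIMD sketch of the behaviour described above: keep sending
# a little faster each round trip until a drop (or ECN mark) shows the
# bottleneck was briefly overdriven, then back off multiplicatively.
# Simplified: window in MSS units, one decision per round trip.

def aimd_step(cwnd: float, congestion_signal: bool,
              mss: float = 1.0, beta: float = 0.5) -> float:
    """One round trip of additive-increase / multiplicative-decrease."""
    if congestion_signal:              # a packet drop or ECN mark was seen
        return max(mss, cwnd * beta)   # multiplicative decrease
    return cwnd + mss                  # additive increase: probe for capacity

cwnd = 10.0
for signal in [False, False, False, True, False]:
    cwnd = aimd_step(cwnd, signal)
print(cwnd)  # -> 7.5
```

The point of the sketch is the one made in the text: the sender *must* periodically send slightly too fast, because the loss or mark it provokes is the only way it learns where the bottleneck rate is.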

Expressed that way, who wouldn’t want a transport protocol that works out the 
best rate to send, to maximize the useful throughput it achieves?

I would argue that “congested” is the desired state of a network (using the 
term “congested” with its technical meaning in this context). Anything less 
than “congested” means that data is not moving as fast as it could, and 
capacity is being wasted.

Given this, instead of continually having to explain to people that in the 
networking context the word “congestion” has a particular technical meaning, 
maybe it would be easier to pick new terminology that is more easily 
understood. Other terms that describe equally well what a congestion control 
algorithm does might be “rate management algorithm”, “throughput optimizer”, or 
“throughput maximizer”.

When we talk to people designing a new transport protocol, it’s easy for them 
to dismiss congestion control as uninteresting and unimportant, but it’s harder 
for them to say, “We don’t have any rate management algorithm,” or, “We don’t 
care about optimizing our throughput.”

Stuart Cheshire