On Sat, Jul 23, 2022 at 8:11 PM Stuart Cheshire <cheshire=
[email protected]> wrote:

> On 30 Jun 2022, at 18:49, Martin Duke <[email protected]> wrote:
>
> > * The Transport ADs are going to consider chartering a new IETF working
> group to update the administrative framework for congestion control
> standardization, and potentially adopt any proposals that are sufficiently
> mature for the standards track. We've formulated a proposed charter. Please
> consider it a starting point for discussion.
> > https://github.com/martinduke/congestion-control-charter/
> > I ask you to review this document before attending. It answers many of
> the questions you may already have.
>
> I support this work.
>

I support this work as well.


> Congestion control algorithms apply to *any* flow of packets through a
> connectionless datagram network, not just TCP connections.
>

Indeed, well said.


> While we’re considering starting this new work, I’m wondering if we should
> also consider a new name too.
>
> I feel that in retrospect the name “congestion control” was a poor choice.
> Too often when I talk to people building their own home-grown transport
> protocol on top of UDP, and I ask them what congestion control algorithm
> they use, they smile smugly and say, “We don’t need congestion control.”
> They explain that their protocol won’t be used on congested networks.
>
> This is the problem with the terminology “congestion control”. When most
> people hear the term “network congestion” they assume they know what that
> means, by analogy to other kinds of congestion, like rush-hour traffic on
> the roads or a busy travel weekend at the airport. They assume, by analogy,
> that network congestion is a rare event that occurs on an occasional Friday
> night when everybody is watching streaming video, and there’s nothing they
> can do about that. They assume that congestion is the network’s fault and
> the network should fix it by having more capacity.
>
> In reality, in the networking context, congestion control means sending
> data exactly as fast as the bottleneck link can carry it. If you send
> slower than the bottleneck link then you leave capacity unused, which is
> wasteful. If you send faster than the bottleneck link then the excess
> packets have to be discarded, which is wasteful. And the way that Reno or
> CUBIC do this is to occasionally send a little too fast, and then respond
> to the packet drops (or ECN marks) by slowing down. If we define
> “congestion” as the bottleneck link being at 100% utilization, then the job
> of Reno or CUBIC “congestion control” is to ensure that they create
> “congestion” in the network. If you upload a video to social media from
> your smartphone and your upstream Internet service is 10Mb/s, then you
> expect your smartphone to send at 10Mb/s. If you upgrade to 20Mb/s, then
> you expect your smartphone to send at 20Mb/s. If you upgrade to 30Mb/s,
> then you expect your smartphone to send at 30Mb/s. You expect your
> smartphone to drive your upstream link to the point of congestion and keep
> it there until the data transfer is complete. If it didn’t, you’d complain
> that you’re not getting the rate that you’re paying for.
>
> Expressed that way, who wouldn’t want a transport protocol that works out
> the best rate to send, to maximize the useful throughput it achieves?
>
> I would argue that “congested” is the desired state of a network (using
> the term “congested” with its technical meaning in this context). Anything
> less than “congested” means that data is not moving as fast as it could,
> and capacity is being wasted.
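
The probing behavior described above ("occasionally send a little too fast,
and then respond to the packet drops or ECN marks by slowing down") is,
roughly, additive-increase/multiplicative-decrease. A minimal sketch, where
the constants and the fixed-capacity bottleneck are illustrative assumptions
rather than anything taken from the actual Reno or CUBIC specs:

```python
def aimd_update(cwnd, loss_detected, mss=1.0, beta=0.5):
    """One round trip of additive-increase/multiplicative-decrease.

    Illustrative only: real Reno/CUBIC behavior is far more involved.
    """
    if loss_detected:
        # Multiplicative decrease: back off after a drop or ECN mark.
        return max(mss, cwnd * beta)
    # Additive increase: keep probing for more capacity, one MSS per RTT.
    return cwnd + mss

# Pretend the bottleneck path can hold 40 units; anything more is "too fast".
CAPACITY = 40.0
cwnd = 10.0
history = []
for _ in range(100):
    loss = cwnd > CAPACITY  # crude stand-in for a drop from a full queue
    cwnd = aimd_update(cwnd, loss)
    history.append(cwnd)

# The sender repeatedly drives the link to (brief) congestion and backs
# off, oscillating around the bottleneck capacity: the classic sawtooth.
```

The point the sketch makes is exactly Stuart's: the algorithm's steady state
is to keep creating brief "congestion" at the bottleneck, because that is
the only way it learns how fast it can send.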
>

I would argue that we don't want to define network “congestion” as "the
bottleneck link being at 100% utilization". AFAIK the common definitions of
congestion have negative connotations and involve "excessive accumulation"
or "overload". However, "the bottleneck link being at 100% utilization" is
not necessarily "excessive accumulation" or "overload". Rather, if the
bottleneck queue is sufficiently short then "the bottleneck link being at
100% utilization" is a very desirable state that should have a name with a
positive connotation.

I would argue that "congested" is something closer to "excessive queuing
delay and/or loss". And "excessive queuing delay and/or loss" can happen
even at low link utilization if the buffer is sufficiently shallow or the
traffic is sufficiently bursty, so it really is not very closely tied to
"the bottleneck link being at 100% utilization". It can also happen with a
low volume of data in flight relative to the BDP, under the same conditions
of shallow buffers or bursty traffic, so AFAICT congestion is not tied to
the volume of data in flight relative to the BDP, either.
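
To put a toy number on that: a short back-to-back burst can overflow a
shallow buffer even though average utilization stays very low. A small
sketch, where the 10 Mb/s rate, 8-packet buffer, and burst size are all
made-up illustrative numbers:

```python
# Toy model: a back-to-back burst arriving at an empty, shallow FIFO
# buffer, with arrival assumed much faster than the drain rate.
# All numbers are illustrative, not measurements.

LINK_RATE = 10e6 / 8      # 10 Mb/s bottleneck, in bytes per second
PACKET = 1500             # bytes per full-size packet
BUFFER = 8 * PACKET       # shallow buffer: room for only 8 packets

def burst_drops(burst_packets):
    """Packets dropped from a burst that arrives (near-)instantaneously."""
    overflow = burst_packets * PACKET - BUFFER
    return max(0, overflow) // PACKET

# One 20-packet burst per second is only ~2.4% average utilization...
avg_utilization = (20 * PACKET) / LINK_RATE   # 0.024
# ...yet more than half the burst is lost to the shallow buffer.
dropped = burst_drops(20)                     # 12 of 20 packets dropped
# And the surviving packets see up to ~9.6 ms of queuing delay.
max_queue_delay = BUFFER / LINK_RATE          # 0.0096 seconds
```

So by the "delay and/or loss" definition this path is congested during the
burst, while by the "100% utilization" definition it almost never is.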


> Given this, instead of continually having to explain to people that in the
> networking context the word “congestion” has a particular technical
> meaning, maybe it would be easier to pick new terminology, that is more
> easily understood. Other terms that describe equally well what a congestion
> control algorithm does might be “rate management algorithm”, “throughput
> optimizer”, or “throughput maximizer”.
>
> When we talk to people designing a new transport protocol, it’s easy for
> them to dismiss congestion control as uninteresting and unimportant, but
> it’s harder for them to say, “We don’t have any rate management algorithm,”
> or, “We don’t care about optimizing our throughput.”


I agree it would be great to find a better term, and the phrase “rate
management algorithm” seems like a great replacement for "congestion
control".

Thanks for raising this question of nomenclature, and making some nice
proposals for better names.

best regards,
neal
