Hi Michael,

So I re-read your paper and stewed on it a bit.

I don't think I buy some of your premises.

For example, you write:

"We will now examine two factors that make the the present situation 
particularly worrisome. First, the way the infrastructure has been evolving 
gives TCP an increasingly large operational space in which it does not see any 
feedback at all. Second, most TCP connections are extremely short. As a result, 
it is quite rare for a TCP connection to even see a single congestion 
notification during its lifetime."

And you seem to see a problem in flows being able to finish their data 
transfer business while still in slow start. I see the same data, but no 
problem. Unless we have an oracle that tells each sender (over a shared 
bottleneck) exactly how much to send at any given point in time, different 
control loops will interact at those intermediary nodes. I might be limited 
in my depth of thought here, but having each flow probe for capacity seems 
exactly the right approach... and doubling CWND or rate every RTT is pretty 
aggressive already. Making slow start shorter, i.e. reaching capacity faster 
within the slow-start framework, requires either starting from a higher 
initial value (which is what increasing IW tries to achieve?) or using an 
increase factor larger than 2 per RTT; I consider an increased IW the milder 
of the two. And once one accepts that gradually increasing the rate is the 
way forward, it follows logically that some flows will finish before they 
reach steady-state capacity, especially if a flow's available capacity is 
large. So what exactly is the problem with short flows not reaching capacity, 
and what alternative exists that does not lead to carnage when more 
aggressive start-up phases drive the bottleneck load into emergency-drop 
territory?

And as an aside, a PEP (performance enhancing proxy) that does not enhance 
performance is useless at best and likely harmful (rather a PDP, a 
performance degrading proxy). So far the network has done reasonably well by 
putting more protocol smarts at the ends than in the parts in between. I 
have witnessed the arguments in the "L4S wars" about how little processing 
one can ask the more central network nodes to perform; e.g. flow queueing, 
which would solve a lot of the issues (a hyper-aggressive slow-start flow 
would mostly hurt itself if it overshot its capacity), seems to be a 
complete no-go.
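
For what it's worth, a minimal sketch of why flow queueing contains the 
damage (Python; the class, the per-flow limit and the round-robin service 
are a toy illustration of mine, not any deployed qdisc):

    from collections import deque

    class FlowQueues:
        def __init__(self, per_flow_limit=100):
            self.queues = {}   # flow_id -> deque, kept in service order
            self.limit = per_flow_limit

        def enqueue(self, flow_id, pkt):
            q = self.queues.setdefault(flow_id, deque())
            if len(q) >= self.limit:
                return False   # drop: the overshooting flow hurts only itself
            q.append(pkt)
            return True

        def dequeue(self):
            # serve the first flow that has a packet queued, then rotate it
            # to the back, so each flow gets one packet per round no matter
            # how aggressively it fills its own queue
            for flow_id in list(self.queues):
                q = self.queues[flow_id]
                if q:
                    self.queues[flow_id] = self.queues.pop(flow_id)
                    return q.popleft()
            return None

A hyper-aggressive slow-start flow can overflow its own queue here, but it 
cannot crowd the other flows out of the link.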

I personally think what we should do is have the network supply more 
information to the end points so that they can control their behavior 
better. E.g., if we mandated a max_queue_fill_percentage field in a protocol 
header and had each node write max(current_value_of_the_field, 
queue_fill_percentage_of_the_current_node) into every packet, end points 
could estimate how close to congestion the path is (e.g. by looking at the 
rate at which the queueing percentage changes) and tailor their 
growth/shrinkage rates accordingly, both during slow start and during 
congestion avoidance. But alas, we seem to be going down the path of a 
relatively dumb 1-bit signal that gives us an under-defined queue-fill state 
instead; to estimate the relative queue-fill dynamics from that we need many 
samples (so literally too little, too late, or L3T2), but I digress.
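
In sketch form (Python; the field name, the 0-100 scale and the mapping from 
fill level to growth factor are all made up by me to illustrate the idea, 
not anything standardized):

    from dataclasses import dataclass

    @dataclass
    class Packet:
        max_queue_fill: int = 0  # 0..100, worst queue fill seen on the path

    def forward(pkt, local_fill):
        # each node stamps its own queue-fill percentage if it is worse
        # than what the packet already carries
        pkt.max_queue_fill = max(pkt.max_queue_fill, local_fill)

    def growth_per_rtt(samples):
        # end point: derive a per-RTT CWND growth factor from the last two
        # reported fill levels (needs at least two samples)
        latest, trend = samples[-1], samples[-1] - samples[-2]
        if latest + trend >= 100:  # queue about to saturate: shrink instead
            return 0.5
        headroom = (100 - latest) / 100.0
        return 1.0 + headroom      # 2.0 on an idle path, ~1.0 near full

On an idle path this degenerates to classic slow start (factor 2), and the 
closer the reported fill gets to 100% the gentler the probing becomes, in 
slow start and congestion avoidance alike.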

Regards
        Sebastian


> On Jun 20, 2022, at 14:58, Michael Welzl <[email protected]> wrote:
> 
> 
> 
>> On Jun 19, 2022, at 6:53 PM, Sebastian Moeller via Bloat 
>> <[email protected]> wrote:
>> 
>> I might be out to lunch here, but why not accept a "total" speed limit per 
>> TCP flow and simply expect bulk transfers to employ more parallel streams, 
>> which is what I think download manager apps have been doing for a long 
>> time already?
>> 
>> And if we accept an upper ceiling per TCP flow we should be able to select a 
>> reasonable upper bound for the initial window as well, no?
> 
> Using multiple flows is a way to do it, albeit not a very good way (better to 
> use a better congestion control than just run multiple instances - but of 
> course, one works with what one can - a download manager is on the receiver 
> side and can achieve this there). This is not related to the IW issue, which 
> is relevant for short flows, which are the most common type of traffic by far 
> (a point that our paper makes, along with many prior publications).
> 
> 
>>> On Jun 15, 2022, at 19:49, Dave Taht via Bloat 
>>> <[email protected]> wrote:
>>> 
>>> ---------- Forwarded message ---------
>>> From: Michael Welzl <[email protected]>
>>> Date: Wed, Jun 15, 2022 at 1:02 AM
>>> Subject: [iccrg] Musings on the future of Internet Congestion Control
>>> To: <[email protected]>
>>> Cc: Peyman Teymoori <[email protected]>, Md Safiqul Islam
>>> <[email protected]>, Hutchison, David <[email protected]>,
>>> Stein Gjessing <[email protected]>
>>> 
>>> 
>>> Dear ICCRGers,
>>> 
>>> We just got a paper accepted that I wanted to share:
>>> Michael Welzl, Peyman Teymoori, Safiqul Islam, David Hutchison, Stein
>>> Gjessing: "Future Internet Congestion Control: The Diminishing
>>> Feedback Problem", accepted for publication in IEEE Communications
>>> Magazine, 2022.
>>> 
>>> The preprint is available at:
>>> https://arxiv.org/abs/2206.06642
>>> I thought that it could provoke an interesting discussion in this group.
>>> 
>>> Figures 4 and 5 in this paper show that, across the world, network
>>> links do not just become "faster": the range between the low end and
>>> the high end grows too.
>>> This, I think, is problematic for a global end-to-end standard - e.g.,
>>> it means that we cannot simply keep scaling IW up forever (or, if
>>> we do, utilization will decline more and more).
>>> 
>>> So, we ask: what is the way ahead? Should congestion control really
>>> stay end-to-end?
>> 
>>      Do we really have any other option? It is the sender that decides how 
>> much to dump into the network, after all. Sure, the network could help by 
>> giving some information back as a hint (say, a 4-bit value encoding the 
>> maximum relative queue-fill level measured along the full one-way path), 
>> but in the end, unless the network is willing to police its idea of 
>> acceptable send behavior, it is still the sender's decision what to send 
>> when, no?
> 
> In a scenario where a connection-splitting PEP is installed before a 
> lower-capacity downstream path segment, this PEP can already ask for more 
> data today.  It’s doing it in an ugly way, by “cheating” TCP, which yields 
> various disadvantages… so I’d say that this is part of the problem. PEPs 
> exist, yet have to do things poorly because they are treated as if they 
> shouldn’t exist, and so they become unpopular for, well, having done things 
> poorly...
> 
> 
>> Given the discussion about L4S and FQ it seems clear that the "network" is 
>> not prepared to implement anything close to what is required to move 
>> congestion control into the network... I have a feeling though that I am 
>> missing your point and am barking up the wrong tree ;)
> 
> I guess you are. This is about middleboxes doing much “heavier” stuff.
> 
> Cheers,
> Michael

_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat
