On Fri, Mar 31, 2017 at 10:24 AM, Neal Cardwell <ncardw...@google.com> wrote:
> I would agree that bufferbloat is susceptible to mitigation by
> congestion control at the sender.
>
> One relevant alternative I have not seen mentioned in this thread is
> BBR congestion control. (For BBR info, including the CACM paper and
> links to the BBR talks at the last 2 ICCRG sessions, see:
> https://groups.google.com/d/forum/bbr-dev )
>
> Defeating bufferbloat was one of the core goals of BBR, and our
> experience deploying it for YouTube shows it achieves this goal. BBR
> flows sharing with other BBR flows avoid filling bloated buffers, and
> instead maintain a reasonable-sized queue. But when BBR flows share a
> bottleneck with a loss-based CC like Reno or CUBIC, the model of the
> path used by the BBR flows adapts to the standing queue created by the
> loss-based flows, scaling up cwnd so that the BBR flows are not
> starved (as delay-based schemes like Vegas are starved in such
> scenarios). If the loss-based flows leave the bottleneck, BBR will
> quickly converge back to its preferred operating regime of small
> queues.
>
> BBR is deployed for YouTube and Google TCP traffic, and available in
> Linux TCP and QUIC, and work is under way at Netflix to implement it
> for FreeBSD TCP. Folks concerned about bufferbloat should keep it in
> mind as an available alternative.
>
> neal
>
> ps: AQMs are also good.

There are other issues that people need to keep in mind:

Packet scheduling is *really* important.  You really don't want a
burst of packets monopolizing your bottleneck link, particularly
when that bottleneck is low-bandwidth.
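A toy sketch (my own illustration, not from the thread) of why scheduling matters: with a FIFO queue, a single packet from a latency-sensitive flow waits behind an entire burst from a bulk flow, while even a simplified per-flow round-robin serves it in the first round. Flow names and the one-packet-per-tick link model are assumptions for illustration only.

```python
from collections import deque

# Toy model: packets are (flow_id, seq) tuples; the link sends one packet per tick.

def fifo_drain(arrivals):
    """Serve packets strictly in arrival order (a dumb tail queue)."""
    return list(arrivals)

def round_robin_drain(arrivals):
    """Serve one packet per flow per round (crude stand-in for fair queuing)."""
    queues = {}
    for pkt in arrivals:
        queues.setdefault(pkt[0], deque()).append(pkt)
    out = []
    while queues:
        for flow in list(queues):
            out.append(queues[flow].popleft())
            if not queues[flow]:
                del queues[flow]
    return out

# A 5-packet burst from flow "bulk" arrives just ahead of one "game" packet.
arrivals = [("bulk", i) for i in range(5)] + [("game", 0)]
print(fifo_drain(arrivals).index(("game", 0)))         # 5: waits behind the burst
print(round_robin_drain(arrivals).index(("game", 0)))  # 1: served in round one
```

On the DSL line described below, the difference between position 5 and position 1 is several packet serialization times, which is exactly the human-perceptible delay at issue.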

For example, my brother suffers on a DSL line, with something like
384K up and a megabit or two down.

One full-size packet @ 1 Mbps == roughly 13 milliseconds.  Two of those
packets already add human-perceptible latency for some applications
(and gamers and stock traders will tell you that even a single 13 ms
delay to their application is a significant disadvantage). On his
line, even one packet upstream is of order 40 ms.
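The arithmetic above is just packet size over link rate; a back-of-the-envelope sketch (function name mine) for a 1500-byte MTU packet. Note the raw figure for the 384 kbit/s uplink comes out near 31 ms; with the PPPoE/ATM framing overhead common on DSL, the effective number is higher, closer to the ~40 ms quoted.

```python
def serialization_delay_ms(packet_bytes, link_bps):
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# 1500-byte packet on a 1 Mbit/s downlink:
print(round(serialization_delay_ms(1500, 1_000_000), 1))  # 12.0
# Same packet on a 384 kbit/s uplink (before DSL framing overhead):
print(round(serialization_delay_ms(1500, 384_000), 1))    # 31.2
```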

So while congestion avoidance is key to keeping TCP's behavior sane,
it is *not* sufficient to keep latency where it needs to be for
many applications that share the bottleneck. Wonderful as BBR is (and
I'm a great fan), it can still induce more than a BDP of queue: over
continental paths that's of order 100 ms, and more over
intercontinental paths.
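The "more than a BDP of queue" point has a neat invariant behind it: one extra bandwidth-delay product of standing queue always adds one extra RTT of delay, whatever the link speed, because the queue drains at exactly the bottleneck rate. A small sketch (function names and the example bandwidths are mine):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def queue_delay_ms(queued_bytes, bandwidth_bps):
    """Extra delay from a standing queue draining at the bottleneck rate."""
    return queued_bytes * 8 / bandwidth_bps * 1000

# One BDP of queue adds one RTT of delay, independent of bandwidth:
# ~100 ms continental RTT, ~150 ms intercontinental, both at 50 Mbit/s.
for bw, rtt in [(50_000_000, 0.100), (50_000_000, 0.150)]:
    q = bdp_bytes(bw, rtt)
    print(queue_delay_ms(q, bw))  # 100.0 then 150.0
```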

And even one flow of something other than BBR can ruin your whole
day.  Getting everyone to convert to a delay-based congestion control
scheme ain't ever going to happen, as delay-based schemes don't
compete properly, so finally having an algorithm like BBR that can be
deployed is wonderful.

In short, BBR will reduce the fraction of time that latencies are
completely out of control, but no TCP congestion control system by
itself can get latencies to where they need to be for many
applications.

Jim


> On Fri, Mar 31, 2017 at 10:02 AM, Michael Welzl <mich...@ifi.uio.no> wrote:
>> Hi Roland,
>>
>> I agree with everything, except in your email below, everywhere:
>>
>> s/delay/delay or ECN/
>>
>> I state this because it’s my opinion but also to hopefully prevent a flood 
>> of long emails related to ECN…
>>
>> Cheers,
>> Michael
>>
>>
>>> On Mar 31, 2017, at 8:41 AM, Bless, Roland (TM) <roland.bl...@kit.edu> 
>>> wrote:
>>>
>>> Hi,
>>>
>>> thanks to the session recordings I just listened to the very
>>> interesting ICCRG related discussion in the tsvarea meeting from
>>> Monday.
>>>
>>> Lars Eggert said something like:
>>> "Bufferbloat is something that a congestion controller
>>> can't really do anything about anyway...
>>> it's not necessarily a congestion control problem."
>>>
>>> I believe that this statement is not correct.
>>> A large buffer will not be filled completely by a delay-based
>>> congestion control (e.g., Vegas was a very early
>>> one). Thus even if the buffer is large, the queuing
>>> delay doesn't increase dramatically and we wouldn't
>>> see that bufferbloat effect so prominently (as in slide 3 of
>>> https://www.ietf.org/proceedings/98/slides/slides-98-tsvarea-sessb-reflections-on-congestion-control-praveen-balasubramanian-00.pdf).
>>>
>>> So if _everyone_ used such a delay-based variant, we wouldn't
>>> have the bufferbloat problem.
>>>
>>> However, since the deployed congestion control variants are mostly
>>> loss-based, and these fill a tail-drop buffer completely, the
>>> delay-based variants get suppressed and cannot get a reasonable share
>>> of the bottleneck bandwidth. Therefore, several variants have a
>>> fallback to a more aggressive mode in order to get a reasonable
>>> share (I think Christian Huitema also referred to this by the different
>>> modes). What could help is some kind of co-existence mechanism that
>>> separates queue-filling TCP CC variants from delay-based variants
>>> (e.g., see http://ieeexplore.ieee.org/document/7796842/ for details).
>>> So IMHO getting rid of the queue filling variants would be useful,
>>> but is probably hard in practice, i.e., replacing them isn't easy
>>> due to the co-existence problem.
>>>
>>> Consequently, fighting against bufferbloat could be done in the network
>>> using AQMs (enforcing shorter queues) or by using a different
>>> congestion control algorithm (trying to limit inflicted queuing delay),
>>> or even by a combination of both. However, the delay and throughput
>>> performance that can be achieved with AQMs alone is limited, so better
>>> performance in both metrics eventually requires a better congestion
>>> control algorithm.
>>>
>>> Regards,
>>> Roland
>>>
>>> _______________________________________________
>>> iccrg mailing list
>>> ic...@irtf.org
>>> https://www.irtf.org/mailman/listinfo/iccrg
>>
>
