On Fri, Mar 31, 2017 at 1:20 PM, Neal Cardwell <ncardw...@google.com> wrote:
> On Fri, Mar 31, 2017 at 10:47 AM, Jim Gettys <j...@freedesktop.org> wrote:
>> There are other issues that people need to keep in mind:
>>
>> Packet scheduling is *really* important.  You really don't want a
>> burst of packets to monopolize your bottleneck link, particularly
>> when that bottleneck has little bandwidth.
>>
>> For example, my brother suffers on a DSL line, with something like
>> 384K up and a megabit or two down.
>>
>> One full-size packet at 1 Mbps is about 13 milliseconds.  Two of
>> those packets already add human-perceptible latency for some
>> applications (and gamers and stock traders will tell you that even a
>> single 13ms delay to their application is a significant
>> disadvantage).  On his line, even one packet upbound is on the order
>> of 40ms.
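
Jim's per-packet arithmetic is easy to reproduce. A minimal sketch,
assuming a 1500-byte MTU and ignoring the PPPoE/ATM framing overhead
common on DSL (which is why it lands nearer 12ms than the 13ms quoted):

```python
# Serialization delay: the time one packet occupies a link while being
# clocked onto the wire. Illustrative only; assumes a 1500-byte MTU and
# ignores link-layer framing overhead.

def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Milliseconds for one packet of packet_bytes to serialize at link_bps."""
    return packet_bytes * 8 / link_bps * 1000

print(serialization_delay_ms(1500, 1e6))     # ~12 ms on a 1 Mbit/s downlink
print(serialization_delay_ms(1500, 384e3))   # ~31 ms on a 384 kbit/s uplink
```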
>
> Yes, this is a great point. In the BBR team we are very conscious of
> the latency issues in low-bandwidth links, and this is something we
> focus on. For example, this consideration you mention is exactly why
> Linux TCP BBR chooses a TSO burst size of 1 packet (instead of the
> Linux TCP default of 2 packets) for bandwidths below 1.2Mbps.
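
A hypothetical sketch of that sizing rule (function name and the 20ms
burst budget are illustrative assumptions, not the actual Linux code):
1.2 Mbps is the rate at which a 2-packet, 1500-byte burst takes exactly
20ms to serialize, so slower links get 1-packet bursts.

```python
# Hypothetical burst-sizing rule: allow the default 2-packet TSO burst
# only when it serializes within a ~20 ms budget; otherwise fall back to
# 1 packet. A minimal illustration, not the actual Linux TCP code.

def tso_burst_packets(bw_bps: float, mtu_bytes: int = 1500,
                      burst_budget_ms: float = 20.0) -> int:
    """Packets per burst: floor of budget / per-packet time, clamped to [1, 2]."""
    per_packet_ms = mtu_bytes * 8 / bw_bps * 1000
    return max(1, min(2, int(burst_budget_ms // per_packet_ms)))

print(tso_burst_packets(1.0e6))   # slow DSL-class link: 1-packet bursts
print(tso_burst_packets(10e6))    # faster link: the default 2-packet burst
```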
>
>> So while congestion avoidance is key to keeping TCP's behavior sane,
>> it is *not* sufficient to keep latency to where it needs to be for
>> many applications that share the bottleneck.  Wonderful as BBR is
>> (and I'm a great fan), it can still induce more than a BDP of queue:
>> over continental paths that's on the order of 100ms, and more over
>> intercontinental paths.
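
The connection between queue size and added latency is direct: a
standing queue drains at the bottleneck rate, so one BDP of queue adds
one RTT of extra delay. A minimal sketch with illustrative numbers:

```python
# One BDP (bandwidth-delay product) of standing queue adds one RTT of
# queuing delay, since the queue drains at the bottleneck rate.
# Numbers are illustrative only.

def bdp_bits(bottleneck_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product of the path, in bits."""
    return bottleneck_bps * rtt_s

def queue_delay_ms(queue_bits: float, bottleneck_bps: float) -> float:
    """Time for a standing queue to drain at the bottleneck rate."""
    return queue_bits / bottleneck_bps * 1000

bw, rtt = 10e6, 0.100    # 10 Mbit/s bottleneck, 100 ms continental RTT
print(queue_delay_ms(bdp_bits(bw, rtt), bw))   # one BDP -> ~100 ms extra
```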
>
> The queue levels you mention can indeed be seen when there are
> multiple flows sharing a bottleneck and using the current version of
> BBR. There is nothing inherent about the BBR approach or framework
> that requires that particular level of queue, so one of our big
> focuses right now is testing some tweaks to the algorithm to reduce
> the degree of queuing in such scenarios. We discussed some of the
> experiments in pages 14-19 of this week's BBR talk:
>
>   
> https://www.ietf.org/proceedings/98/slides/slides-98-iccrg-an-update-on-bbr-congestion-control-00.pdf
>
>> And even one flow of something other than BBR can ruin your whole
>> day completely.  Getting everyone to convert to a delay-based
>> congestion control system ain't ever going to happen, as they don't
>> compete properly, so finally having an algorithm like BBR that can
>> be deployed is wonderful.
>
> Thanks. It is also our sense that BBR provides performance and
> behavior that makes it deployable on today's Internet.
>
>> In short, BBR will reduce the fraction of time that latencies are
>> completely out of control, but no TCP congestion control system by
>> itself can get latencies to where they need to be for many
>> applications.
>
> Indeed, some very demanding latency-sensitive apps may need changes in
> the network to work well. Until those changes are widely deployed, and
> even afterward, my sense is that it's worth pushing congestion control
> to be as good as it can be.

I think we have to push from all directions to drain this global swamp.

We're now getting some fq_codel uptake in commercial consumer devices,
and for the first time we have running code to deal with WiFi (out the
door in the new LEDE release: try it, you'll like it!). It's time to
put market forces to work: low latency can be a major competitive
advantage, and playing one vendor off against another is an effective
way to get motion in the market.

Very early in tilting at the bufferbloat windmill, Dave Clark told me:
"Don't yell fire in the theater until the exits are marked".  I think
it is time for us all to yell "fire".

"FIRE!"
                                           - Jim

>
> thanks,
> neal
>
> _______________________________________________
> iccrg mailing list
> ic...@irtf.org
> https://www.irtf.org/mailman/listinfo/iccrg
