Re: [Bloat] [bbr-dev] Re: "BBR" TCP patches submitted to linux kernel

2016-11-02 Thread Dave Täht


On 11/2/16 11:21 AM, Klatsky, Carl wrote:
>> On Tue, 1 Nov 2016, Yuchung Cheng wrote:
>>
>>> We are curious why you chose the single-queued AQM. Is it just for
>>> the sake of testing?
>>
>> Non-flow-aware AQM is the most commonly deployed "queue
>> management" on the Internet today. Most of them are just stupid FIFOs
>> with taildrop, and the buffer size can be anywhere from super small to huge
>> depending on equipment used and how it's configured.
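For concreteness, the "stupid FIFO with taildrop" described above fits in a
few lines, and its worst-case queueing delay is simply buffer size divided by
drain rate. A minimal sketch in Python; the buffer size and rate here are
illustrative, not taken from the thread:

    # Taildrop FIFO: accept packets until the buffer is full, then drop.
    from collections import deque

    BUF_BYTES = 256 * 1024   # illustrative buffer size
    RATE_BPS = 5_000_000     # illustrative drain rate (5 Mbit/s)

    queue, queued_bytes = deque(), 0

    def enqueue(pkt_bytes):
        global queued_bytes
        if queued_bytes + pkt_bytes > BUF_BYTES:
            return False     # taildrop: the arriving packet is discarded
        queue.append(pkt_bytes)
        queued_bytes += pkt_bytes
        return True

    # A full 256 KB buffer at 5 Mbit/s holds ~420 ms of delay:
    print(BUF_BYTES * 8 / RATE_BPS)  # -> ~0.42 seconds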
>>
>> Any proposed TCP congestion avoidance algorithm to be deployed on the
>> wider Internet has to some degree be able to handle this deployment
>> scenario without killing everything else it's sharing capacity with.
>>
>> Dave Täht's test case, where BBR just kills CUBIC, makes me very concerned.
> 
> If I am understanding BBR correctly, that is working in the sender to
> receiver direction.  In Dave's test running TCP BBR & TCP CUBIC with a
> single queue AQM, CUBIC gets crushed.  Silly question, but the single
> queue AQM was also operating in the sender to receiver direction for
> this test, yes?

The scenario as I constructed it was emulating a sender on the "home" side
of the link, using BBR and cubic through an emulated cablemodem running pie.
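The shape of that emulation, for anyone wanting to reproduce something like
it, is just a rate limiter with pie as its child qdisc on the middlebox. A
rough sketch, not Dave's actual scripts; the device name is hypothetical and
the 5 Mbit rate is borrowed from the 20 Mbit/5 Mbit emulation he describes
elsewhere in the thread:

    # Emulate a cablemodem upstream: HTB rate limit with PIE as leaf qdisc.
    import subprocess

    def tc(args):
        subprocess.run(["tc"] + args.split(), check=True)

    DEV = "eth1"  # hypothetical egress toward the emulated "ISP"
    tc(f"qdisc replace dev {DEV} root handle 1: htb default 10")
    tc(f"class add dev {DEV} parent 1: classid 1:10 htb rate 5mbit")
    tc(f"qdisc add dev {DEV} parent 1:10 pie")  # single-queue AQM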


Re: [Bloat] [bbr-dev] Re: "BBR" TCP patches submitted to linux kernel

2016-11-02 Thread Klatsky, Carl
> On Tue, 1 Nov 2016, Yuchung Cheng wrote:
> 
> > We are curious why you chose the single-queued AQM. Is it just for
> > the sake of testing?
> 
> Non-flow-aware AQM is the most commonly deployed "queue
> management" on the Internet today. Most of them are just stupid FIFOs
> with taildrop, and the buffer size can be anywhere from super small to huge
> depending on equipment used and how it's configured.
> 
> Any proposed TCP congestion avoidance algorithm to be deployed on the
> wider Internet has to some degree be able to handle this deployment
> scenario without killing everything else it's sharing capacity with.
> 
> Dave Täht's test case, where BBR just kills CUBIC, makes me very concerned.

If I am understanding BBR correctly, that is working in the sender to receiver 
direction.  In Dave's test running TCP BBR & TCP CUBIC with a single queue AQM, 
CUBIC gets crushed.  Silly question, but the single queue AQM was also 
operating in the sender to receiver direction for this test, yes? 


Re: [Bloat] [bbr-dev] Re: "BBR" TCP patches submitted to linux kernel

2016-11-01 Thread Jonathan Morton

> On 2 Nov, 2016, at 01:13, 'Yuchung Cheng' via BBR Development wrote:
> 
> We are curious why you chose the single-queued AQM. Is it just for the sake 
> of testing?

Not to speak for them, but single-queue AQM is the most likely implementation 
for high-speed hardware.  DOCSIS 3.1, for example, specifies PIE for client 
upstream, not any form of FQ-PIE.  It therefore remains an important use-case 
for interop testing.
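The core of PIE (RFC 8033) is small enough to sketch: a proportional-integral
controller that turns queueing delay into a drop (or ECN-mark) probability.
This is much simplified; the real algorithm adds auto-tuning of the gains, a
burst allowance, and derandomized drops, and the gains below are only
illustrative:

    # PIE's periodic probability update, after RFC 8033 (simplified).
    ALPHA, BETA = 0.125, 1.25  # controller gains, illustrative values
    TARGET = 0.010             # 10 ms latency target, as in DOCSIS-PIE

    drop_prob, qdelay_old = 0.0, 0.0

    def update(qdelay):
        """Run every update interval with the current queueing delay (s)."""
        global drop_prob, qdelay_old
        drop_prob += ALPHA * (qdelay - TARGET) + BETA * (qdelay - qdelay_old)
        drop_prob = min(max(drop_prob, 0.0), 1.0)
        qdelay_old = qdelay

Each arriving packet is then dropped (or marked, if ECN-capable) with
probability drop_prob; no per-flow state is needed, which is part of why it
suits high-speed hardware.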

 - Jonathan Morton



Re: [Bloat] [bbr-dev] Re: "BBR" TCP patches submitted to linux kernel

2016-10-27 Thread Yuchung Cheng
On Thu, Oct 27, 2016 at 11:14 AM, Dave Taht  wrote:
> On Thu, Oct 27, 2016 at 10:57 AM, Yuchung Cheng  wrote:
>> On Thu, Oct 27, 2016 at 10:33 AM, Dave Taht  wrote:
>>> On Thu, Oct 27, 2016 at 10:04 AM, Steinar H. Gunderson wrote:
>>>> On Fri, Oct 21, 2016 at 10:47:26AM +0200, Steinar H. Gunderson wrote:
>>>>> As a random data point, I tried a single flow from my main server in .no
>>>>> to my backup server in .nl and compared CUBIC (with sch_fq) to BBR
>>>>> (naturally also in sch_fq) on the sender side. The results were quite
>>>>> consistent across runs:
>>>>
>>>> Another datapoint: A friend of mine had a different, worse path (of about
>>>> 40 ms) and tested with iperf.
>>>>
>>>> CUBIC delivered 20.1 Mbit/sec (highly varying). BBR delivered 485 Mbit/sec.
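Those numbers are roughly what the classic loss-throughput relation predicts
for a lightly lossy path: per the Mathis et al. model for Reno-style
congestion control, rate ~ (MSS/RTT) * (C/sqrt(p)), a few hundredths of a
percent of random loss is enough to pin a loss-based sender near 20 Mbit/sec
at 40 ms, while BBR, which does not treat loss as a congestion signal, keeps
the pipe full. A back-of-envelope check (CUBIC is not Reno, so this is only
indicative):

    # Loss rate that would hold a Reno-model sender at ~20 Mbit/s over 40 ms.
    # Mathis et al.: rate = (MSS / RTT) * (C / sqrt(p)), with C ~= 1.22.
    MSS = 1460 * 8   # bits (assumed 1460-byte segments)
    RTT = 0.040      # seconds
    C = 1.22
    rate = 20.1e6    # the CUBIC result reported above, in bits/s

    p = (MSS * C / (RTT * rate)) ** 2
    print(f"p ~= {p:.1e}")  # ~3.1e-04, i.e. about 0.03% loss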
>>>
>>> I mostly live in a world (wifi) where loss is uncommon, unless forced
>>> on it with an AQM.
>>>
>>> At the moment my biggest beef with BBR is that it ignores ECN entirely
>>> (and yet negotiates it). BBR is then so efficient at using up all the
>>> pipe that a single queued aqm "marks madly" and everything else
>>> eventually starves. Watch "ping" fade out here...
>>>
>>> http://blog.cerowrt.org/flent/bbr-comprehensive/bbr_ecn_eventually_starving_ping.png
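What "ignores ECN entirely" means for sharing a single marking queue can be
shown with a toy model; this is nothing like BBR's actual dynamics, it only
illustrates the starvation mechanism, and every constant is made up:

    # One ECN-blind flow vs. one responsive flow behind a single-queue
    # marking AQM. Illustrative only; not BBR.
    LINK = 100     # packets drained per RTT
    MARK_AT = 20   # queue depth where the AQM starts marking everyone

    blind, resp, queue = 10, 10, 0
    for _ in range(200):
        queue = max(0, queue + blind + resp - LINK)
        blind = min(blind + 1, LINK)  # grows to the pipe, ignores CE marks
        resp += 1
        if queue > MARK_AT:
            resp = max(1, resp // 2)  # halves on every marked RTT

    # blind ends near LINK, resp stays pinned near 1, and the queue keeps
    # growing because nothing ever backs off; a real AQM escalates to drops.
    print(blind, resp, queue)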
>>
>> Thanks Dave for raising this issue. We designed BBR with CoDel in mind b/c
>> Van co-designed both :)
>
> It works pretty darn good with codel without ecn. I'm pretty darn happy with it.
>
> fq_codel is even more lovely, especially when competing with cubic.
>
> There are issues with single queued aqms with BBR vs cubic
>
> http://blog.cerowrt.org/flent/bbr-ecncaps/bandwidth-share-creaming-cubic-flowblind-aqm.svg
>
>> We have tested BBR with CoDel before and it works.
>
> Well, against cubic on the same link in single queue mode, even
> without ecn, life looks like this:
>
> http://blog.cerowrt.org/flent/bbr-ecncaps/bandwidth-share-creaming-cubic-flowblind-aqm.svg
>
> but fq_codel is fine, so long as there is no ecn vs nonecn collision
>
> http://blog.cerowrt.org/flent/bbr-ecncaps/bandwidth-share-ecn-fq.png
>
>> Could you share your tcpdump traces with us (maybe you already did, but
>> not sure), or suggest how to reproduce this?
>>
>> Is this 2 BBR flows, or BBR + ecn-cubic? (I am guessing based on the
>> caption in your graph.)
>
> That's two BBRs with ecn enabled, going through cake in the single
> queue aqm mode "flowblind". I have similar plots with pie and codel
> with ecn enabled somewhere.
>
> The emulation is 48ms RTT, 20Mbit down, 5Mbit up.
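For reference, cake's flowblind mode makes that bottleneck easy to set up,
since cake does its own shaping. A sketch with hypothetical device names; real
testbeds usually split the delay and the shaping across separate interfaces or
namespaces:

    # Single-queue AQM bottleneck at 20 Mbit, with netem supplying the
    # 48 ms RTT (24 ms each way).
    import subprocess

    def tc(args):
        subprocess.run(["tc"] + args.split(), check=True)

    tc("qdisc replace dev eth1 root cake bandwidth 20mbit flowblind")
    tc("qdisc replace dev eth2 root netem delay 24ms")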
>
> Regrettably I'm on a couple of deadlines (a talk tomorrow, and another next
> Thursday) and can't look harder, but I do have caps comparing ecn vs noecn
> here:
>
> http://blog.cerowrt.org/flent/bbr-ecncaps/
Thanks for the data (and sorry for ignoring it before). Neal and I think the
behavior you are observing matches BBR's top issue, which we are actively
pursuing: with N>1 flows, BBR may build up to 1.5 BDP of queue. But let's
separate that from ECN negotiation. Beside the implementation complication
Eric pointed out, even if BBR refrained from ECN negotiation, I suspect the
test results wouldn't change much.

We'll get back soon.
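For scale, in the 48 ms / 20 Mbit emulation above, that 1.5 BDP of standing
queue is a substantial amount of latency:

    # 1.5 BDP of standing queue in a 48 ms RTT, 20 Mbit/s emulation.
    RATE, RTT = 20e6, 0.048         # bits/s, seconds
    bdp = RATE * RTT / 8            # 120,000 bytes (~120 KB)
    queue = 1.5 * bdp               # 180,000 bytes
    extra_delay = queue * 8 / RATE  # 0.072 s, i.e. ~72 ms of added delay
    print(bdp, queue, extra_delay)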

>>> Somewhat conversely, in fq_codel this means that BBR ignores codel's
>>> marking attempts entirely and retains its own dynamics (while the non-BBR
>>> flows are fine), which is kind of neat to watch.
>>>
>>>> /* Steinar */
>>>> --
>>>> Homepage: https://www.sesse.net/
>
> --
> Dave Täht
> Let's go make home routers and wifi faster! With better software!
> http://blog.cerowrt.org