Folks,

I'm lost in this conversation: I thought it started with the statement that 
the queue length must be at least a BDP so that full utilization is attained, 
because the queue never drains.
To this, I'd like to add that, in addition to the links from Roland, the point 
of ABE is to address exactly that: 
https://tools.ietf.org/html/draft-ietf-tcpm-alternativebackoff-ecn-12
(in the RFC Editor queue)
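
Since ABE's whole point is that a milder backoff needs less standing queue,
here is a minimal sketch of that relation (the usual multiplicative-decrease
fluid argument; the function name and numbers are mine, not from the draft):

    def min_buffer_for_full_utilization(C, T, backoff):
        # Smallest buffer B such that a flow that multiplies its cwnd by
        # `backoff` on congestion never lets the bottleneck drain:
        #   backoff * (C*T + B) >= C*T  =>  B >= C*T * (1 - backoff) / backoff
        return C * T * (1 - backoff) / backoff

    # backoff = 0.5 (Reno-style) needs a full BDP of buffer;
    # backoff = 0.8 (the kind of value ABE argues for with ECN) needs 0.25 BDP.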

But now I think you're discussing a BDP worth of data *in flight*, which is 
something else.

Cheers,
Michael


> On 27 Nov 2018, at 11:40, Luca Muscariello <luca.muscarie...@gmail.com> wrote:
> 
> OK. We agree.
> That's correct, you need *at least* the BDP in flight so that the bottleneck 
> queue never empties out.
> 
> This can easily be proven using fluid models for any congestion-controlled 
> source, no matter whether it is loss-based, delay-based, rate-based, 
> formula-based, etc.
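> 
> As a minimal sketch of that fluid argument for a single flow (a toy model 
> of mine, not taken from any particular paper):
> 
>     def fluid_model(W, C, T):
>         # W: data in flight (bits), C: bottleneck capacity (bit/s),
>         # T: base RTT (s); the BDP is C * T.
>         queue = max(0.0, W - C * T)   # standing queue at the bottleneck
>         throughput = min(C, W / T)    # link saturated iff W >= BDP
>         return queue, throughput
> 
> For W < BDP the queue is empty and throughput is W/T < C; from W >= BDP 
> onward the link stays saturated and the excess W - BDP sits in the queue.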
> 
> A highly paced source gives you the ability to get as close as theoretically 
> possible to BDP+epsilon.
> 
> A fully utilized link is defined by Q > 0, provided you include the packet 
> currently being transmitted in Q. I do, so the transmitter is never idle. 
> But that's a detail.
> 
> 
> 
> On Tue, Nov 27, 2018 at 11:35 AM Bless, Roland (TM) <roland.bl...@kit.edu> 
> wrote:
> Hi,
> 
> On 27.11.18 at 11:29, Luca Muscariello wrote:
> > I have never said that you need to fill the buffer to the max size to
> > get full capacity, which is an absurdity.
> 
> Yes, it's absurd, but that's what today's loss-based CC algorithms do.
> 
> > I said you need at least the BDP so that the queue never empties out.
> > The link is fully utilized IFF the queue is never emptied.
> 
> I was also a bit imprecise: you'll need a BDP in flight, but
> you don't need to fill the buffer at all. Your latter sentence
> is valid only in one direction: queue not empty -> link fully utilized.
> 
> Regards,
>  Roland
> 
> > 
> > 
> > 
> > On Tue 27 Nov 2018 at 11:26, Bless, Roland (TM) <roland.bl...@kit.edu>
> > wrote:
> > 
> >     Hi Luca,
> > 
> >     On 27.11.18 at 10:24, Luca Muscariello wrote:
> >     > A congestion-controlled protocol such as TCP, and others including
> >     > QUIC, LEDBAT and so on, needs at least the BDP in the transmission
> >     > queue to get full link efficiency, i.e. the queue never empties out.
> > 
> >     This is not true. There are congestion control algorithms
> >     (e.g., TCP LoLa [1] or BBRv2) that can fully utilize the bottleneck link
> >     capacity without filling the buffer to its maximum capacity. The BDP
> >     rule of thumb basically stems from the older loss-based congestion
> >     control variants that profit from the standing queue that they built
> >     over time when they detect a loss:
> >     while they back off and stop sending, the queue keeps the bottleneck
> >     output busy and you'll not see underutilization of the link. Moreover,
> >     once you get good loss de-synchronization, the buffer size requirement
> >     for multiple long-lived flows decreases.
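> > 
> >     As a small sketch of that last point, assuming what's alluded to is
> >     the well-known Appenzeller et al. result that N desynchronized
> >     long-lived flows need only about BDP/sqrt(N) of buffer:
> > 
> >         import math
> > 
> >         def buffer_for_n_flows(C, T, n):
> >             # Appenzeller et al., "Sizing Router Buffers" (SIGCOMM 2004):
> >             # with N desynchronized long-lived flows, a buffer of about
> >             # C*T/sqrt(N) suffices for full utilization.
> >             return C * T / math.sqrt(n)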
> > 
> >     > This gives rules of thumb to size buffers, which is also very
> >     > practical and, thanks to flow isolation, becomes very accurate.
> > 
> >     The positive effect of buffers lies merely in absorbing
> >     short-term bursts (i.e., mismatches between arrival and departure rates)
> >     instead of dropping packets. One does not need a big buffer to
> >     fully utilize a link (with perfect knowledge you can keep the link
> >     saturated even without a single packet waiting in the buffer).
> >     Furthermore, large buffers (e.g., using the BDP rule of thumb)
> >     are not useful/practical anymore at very high speed such as 100 Gbit/s:
> >     memory is also quite costly at such high speeds...
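> > 
> >     To put illustrative numbers on that (my own figures):
> > 
> >         C = 100e9              # 100 Gbit/s bottleneck
> >         T = 0.1                # 100 ms RTT
> >         bdp_bytes = C * T / 8  # = 1.25 GB of fast buffer memory
> > 
> >     Keeping 1.25 GB of memory running at line rate is exactly the
> >     cost problem.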
> > 
> >     Regards,
> >      Roland
> > 
> >     [1] M. Hock, F. Neumeister, M. Zitterbart, R. Bless.
> >     TCP LoLa: Congestion Control for Low Latencies and High Throughput.
> >     In: Proc. 42nd IEEE Conference on Local Computer Networks (LCN),
> >     pp. 215-218, Singapore, October 2017.
> >     http://doc.tm.kit.edu/2017-LCN-lola-paper-authors-copy.pdf
> > 
> >     > Which is: 
> >     >
> >     > 1) find a way to keep the number of backlogged flows at a
> >     > reasonable value. This largely depends on the minimum fair rate
> >     > an application may need in the long term. We discussed a few of
> >     > the mechanisms available in the literature to achieve that.
> >     >
> >     > 2) fix the largest RTT you want to serve at full utilization and
> >     > size the buffer using BDP * N_backlogged.
> >     > Or the other way round: check how much memory you can use
> >     > in the router/line card/device and, for a fixed N, compute the
> >     > largest RTT you can serve at full utilization (see the sketch
> >     > after this list).
> >     >
> >     > 3) there is still some memory to dimension for sparse flows in
> >     > addition to that, but this is not based on BDP. It is enough to
> >     > compute the total utilization of sparse flows and use the same
> >     > simple model Toke has used to compute the (de)prioritization
> >     > probability.
> >     >
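> >     > A minimal numeric sketch of step 2 (the function names are mine):
> >     >
> >     >     def buffer_size(C, rtt_max, n_backlogged):
> >     >         # step 2: BDP * N_backlogged, with BDP = C * rtt_max
> >     >         return C * rtt_max * n_backlogged
> >     >
> >     >     def largest_rtt(memory, C, n_backlogged):
> >     >         # the other way round: fixed memory and N, solve for
> >     >         # the largest RTT served at full utilization
> >     >         return memory / (C * n_backlogged)
> >     >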
> >     > This procedure would allow one to size FQ_codel but also SFQ.
> >     > It would be interesting to compare the two under this buffer sizing.
> >     > It would also be interesting to compare another mechanism that we
> >     > have mentioned during the defense, which is AFD + a sparse flow
> >     > queue. That one is, BTW, already available in Cisco Nexus switches
> >     > for data centres.
> >     >
> >     > I think that the codel part would still provide the ECN feature,
> >     > which all the others cannot have. However the others, especially
> >     > the last one, can be implemented in silicon at reasonable cost.
> > 
> 