On Tue, May 27, 2014 at 4:27 PM, David Lang <[email protected]> wrote:
> On Tue, 27 May 2014, Dave Taht wrote:
>
>> There is a phrase in this thread that is beginning to bother me.
>>
>> "Throughput". Everyone assumes that throughput is a big goal - and it
>> certainly is - and latency is also a big goal - and it certainly is -
>> but specifying what you want from "throughput" as a compromise with
>> latency is not the right framing...
>>
>> If what you want is actually "high speed in-order packet delivery" -
>> say, for example, a movie, youtube, or a video conference - excessive
>> latency with high throughput really, really makes in-order packet
>> delivery at high speed tough.
>
> the key word here is "excessive", that's why I said that for max
> throughput you want to buffer as much as your latency budget will
> allow you to.
Again I'm trying to make a distinction between "throughput" and "packets
delivered in order to the user" (for which we need a new word, I think).

The buffering should not be in the network; it can be in the application.

Take our hypothetical video stream for example. I am 20ms RTT from
netflix. If I artificially inflate that by adding 50ms of in-network
buffering, a loss can take 120ms to recover from. If instead I keep a
3*RTT buffer in my application and expect only 5ms worth of network
buffering, I recover from a loss in 40ms. (Please note, it's late, I
might not have got the math entirely right.)

As physical RTTs grow shorter, the advantages of smaller buffers grow
larger. You don't need 50ms of queueing delay on a 100us path.

Many applications buffer for seconds because they need to cover at
least 2*(actual buffering + RTT) on the path.

>
>> You eventually lose a packet, and you have to wait a really long time
>> until a replacement arrives. Stuart and I showed that at the last
>> IETF. And you get the classic "buffering" song playing....
>
> Yep, and if you buffer too much, your "lost packet" is actually still
> in flight and eating bandwidth.
>
> David Lang
>
>> low latency makes recovery from a loss in an in-order stream much,
>> much faster.
>>
>> Honestly, for most applications on the web, what you want is high
>> speed in-order packet delivery, not "bulk throughput". There is a
>> whole class of apps (bittorrent, file transfer) that don't need that,
>> and we have protocols for those....
>>
>> On Tue, May 27, 2014 at 2:19 PM, David Lang <[email protected]> wrote:
>>>
>>> the problem is that paths change, they mix traffic from streams, and
>>> in other ways the utilization of the links can change radically in a
>>> short amount of time.
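(As a quick check of the back-of-envelope numbers above: the 20ms base
RTT and the 50ms/5ms buffering figures are from the example; the
assumption that recovering a lost packet costs roughly two effective
RTTs - one to notice the gap, one to fetch the retransmission - is mine,
and the exact constant depends on the recovery mechanism, which is why
the results land near, not exactly on, the 120ms/40ms guesses.)

```python
# Back-of-envelope check of the loss-recovery arithmetic in the thread.
# Assumption (not from the thread): a loss costs roughly two effective
# RTTs to recover -- one RTT to detect the gap, one to get the
# retransmission. Real stacks (SACK, RACK, app-level repair) differ.

def recovery_ms(base_rtt_ms, network_buffer_ms, rtts_to_recover=2):
    """Rough time to recover from a single loss, in milliseconds."""
    effective_rtt = base_rtt_ms + network_buffer_ms
    return rtts_to_recover * effective_rtt

bloated = recovery_ms(20, 50)  # 50ms of in-network buffering
lean = recovery_ms(20, 5)      # 5ms of in-network buffering

print(f"bloated path: ~{bloated} ms to recover")  # ~140 ms
print(f"lean path:    ~{lean} ms to recover")     # ~50 ms
```

The qualitative point survives any choice of constant: the in-network
buffering multiplies the recovery time, so moving the buffer into the
application keeps recovery fast.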
>>>
>>> If you try to limit things to exactly the ballistic throughput, you
>>> are not going to be able to exactly maintain this state; you are
>>> either going to overshoot (too much traffic, requiring dropping
>>> packets to maintain your minimal buffer), or you are going to
>>> undershoot (too little traffic and your connection is idle).
>>>
>>> Since you can't predict all the competing traffic throughout the
>>> Internet, if you want to maximize throughput, you want to buffer as
>>> much as you can tolerate for latency reasons. For most apps, this is
>>> more than enough to cause problems for other connections.
>>>
>>> David Lang
>>>
>>> On Mon, 26 May 2014, David P. Reed wrote:
>>>
>>>> Codel and PIE are excellent first steps... but I don't think they
>>>> are the best eventual approach. I want to see them deployed ASAP in
>>>> CMTSs and server load balancing networks... it would be a disaster
>>>> to not deploy the far better option we have today immediately at
>>>> the point of most leverage. The best is the enemy of the good.
>>>>
>>>> But, the community needs to learn once and for all that throughput
>>>> and latency do not trade off. We can in principle get far better
>>>> latency while maintaining high throughput.... and we need to start
>>>> thinking about that. That means that the framing of the issue as
>>>> AQM is counterproductive.
>>>>
>>>> On May 26, 2014, Mikael Abrahamsson <[email protected]> wrote:
>>>>>
>>>>> On Mon, 26 May 2014, [email protected] wrote:
>>>>>
>>>>>> I would look to queue minimization rather than "queue management"
>>>>>> (which implied queues are often long) as a goal, and think harder
>>>>>> about the end-to-end problem of minimizing total end-to-end
>>>>>> queueing delay while maximizing throughput.
>>>>>
>>>>> As far as I can tell, this is exactly what CODEL and PIE tries to do.
>>>>> They try to find a decent tradeoff between having queues to make
>>>>> sure the pipe is filled, and not making these queues big enough to
>>>>> seriously affect interactive performance.
>>>>>
>>>>> The latter part looks like what LEDBAT does?
>>>>> <http://tools.ietf.org/html/rfc6817>
>>>>>
>>>>> Or are you thinking about something else?
>>>>
>>>> -- Sent from my Android device with K-@ Mail. Please excuse my brevity.

-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article

_______________________________________________
Cerowrt-devel mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cerowrt-devel
