David,

Perhaps you would care to provide some text to address the misconception that 
you pointed out? (Waiting for a 100% fix because a 90% fix appears much less 
appealing makes little sense while the current state of the art is at 0%.)

If you think that aqm-recommendations is not strongly enough worded, I think 
this particular discussion (to AQM or not) really belongs there. The other 
document (ecn benefits) has a different target in arguing for going those last 
10%...

Best regards,
   Richard Scheffenegger

> Am 27.03.2015 um 16:17 schrieb "David Lang" <[email protected]>:
> 
>> On Fri, 27 Mar 2015, KK wrote:
>> 
>> The discussion about adding buffers and the impact of buffers should be
>> considered relative to the time scales when congestion occurs and when it
>> is relieved by the dynamics of the end-system protocols. The reason we
>> have buffering is to handle transients at the points where there is a
> mismatch in available bandwidth. We don't look to just throw buffers in
> front of a bottleneck for 'long run' overload.
> 
> In theory you are correct. However in practice, you are wrong.
> 
> throughput benchmarks don't care how long the data sits in buffers, so larger 
> buffers improve the benchmark numbers (up until the point that they cause 
> timeouts)
> 
> But even if the product folks aren't just trying to maximize throughput, they 
> size the buffers based on the worst case bandwidth/latency. So you have 
> products with buffers that can handle 1Gb links with 200ms speed-of-light 
> induced latency being used for 1.5Mb/768K 20ms DSL lines without any changes.
> 
> I'm not saying that ECN doesn't provide value, but the statement that without 
> ECN you have the choice of low-latency OR good throughput is only true if you 
> ignore what's in place today.
> 
> It also does a disservice because it implies that if you use something other 
> than ECN, it's going to hurt your performance. This discourages people from 
> enabling pie or fq_codel because they have read about how bad they are and 
> how they will increase latency because they drop packets. This isn't just a 
> theoretical "someone may think this", I've seen this exact argument trotted 
> out a couple times recently.
> 
>> While active queue management undoubtedly seeks to keep the backlog
>> build-up at a manageable level so as to not allow latency to grow and
>> still keep the links busy to the extent possible, the complement that ECN
>> provides is to mitigate the impact of the drop that AQM uses to signal
>> end-points to react to the transient congestion. ECN has the benefit when
>> you have flows that have small windows, where the impact of loss is more
>> significant.
>> 
>> As you say, "when a packet is lost it causes a 'large' amount of latency
>> as the sender times out and retransmits, but if this is only happening
> every few thousand packets, it's a minor effect." But this is the case
>> for flows that are long-lived. If the flows are short-lived (and I believe
>> empirical evidence suggests that they are a significant portion of the
>> flows), then it is not a minor effect any more.
> 
> Even an occasional lost packet in a short flow is a minor effect compared to 
> the current status quo of high latency on all packets.
> 
> Yes, many web pages are made up of many different items, fetched from many 
> different locations, so avoiding packet losses on these flows is desirable.
> 
> But it's even more important to keep latency low while the link is under 
> load, otherwise your connections end up being serialized, which kills 
> performance even more.
> 
> As an example (just to be sure we are all talking about the same thing)
> 
> user clicks a link
> DNS lookup
> small page fetch
> N resources to fetch, add to queue
> for each resource in the queue (up to M in parallel)
>  DNS lookup (may be cached)
>  page fetch (some small, some large, some massive)
>  may trigger more resources to fetch that get added to queue
> 
> it's common for there to be a few massive resources to fetch in a page that 
> get queued early (UI javascript libraries or background images)
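
[Editor's note: the fetch-queue steps above can be sketched as runnable code;
the dependency graph and resource names below are hypothetical.]

```python
from collections import deque

def load_page(initial, children, max_parallel=6):
    """Return the order resources are fetched, up to max_parallel per round.

    Each fetched resource may enqueue further resources, mimicking an HTML
    page pulling in scripts that in turn pull in more assets.
    """
    queue = deque(initial)
    fetched = []
    while queue:
        # take up to max_parallel resources for this round of fetches
        batch = [queue.popleft() for _ in range(min(max_parallel, len(queue)))]
        for res in batch:
            fetched.append(res)                  # DNS lookup + fetch happen here
            queue.extend(children.get(res, []))  # may trigger more fetches
    return fetched

# hypothetical dependency graph: the page pulls in a large JS library
# and a background image early, and the library pulls in more code
deps = {"index.html": ["app.js", "bg.jpg"], "app.js": ["widget.js"]}
print(load_page(["index.html"], deps))
```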
> 
> If a packet gets lost from one of the large fetches, it doesn't have that big 
> of an effect. If it gets lost from one of the small fetches, it has more of 
> an effect.
> 
> But if the first resource to be fetched causes latency to go to 500ms (actually 
> a fairly 'clean' network by today's standards), then all of the DNS lookups, 
> TCP handshakes, etc that are needed for all the other resources end up taking 
> far longer than the time that would be lost due to a dropped packet.
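
[Editor's note: rough arithmetic for the point above. Once one large transfer
pushes queueing delay to ~500 ms, every subsequent DNS lookup and TCP
handshake pays that delay; the round-trip counts and resource count below are
illustrative assumptions, and parallelism is ignored for simplicity.]

```python
def setup_time(n_resources, rtt_s, rtts_per_resource=3):
    """Total connection-setup time: e.g. ~1 RTT for DNS plus ~2 RTTs
    for the TCP handshake and request, per resource."""
    return n_resources * rtts_per_resource * rtt_s

n = 20  # resources on a typical page (assumed)
print(f"bloated queue (500 ms RTT): {setup_time(n, 0.500):.1f} s in setup alone")
print(f"managed queue (20 ms RTT):  {setup_time(n, 0.020):.1f} s in setup")
```

Tens of seconds of setup overhead dwarfs the retransmit cost of an
occasional dropped packet.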
> 
> This is a better vs best argument. Nobody disputes that something like 
> fq_codel/pie/cake/whatever + ECN would be better than just 
> fq_codel/pie/cake/whatever, but the way this is being worded makes it sound 
> like static buffer sizes + tail-drop + ECN is better than 
> fq_codel/pie/cake/whatever because these other queueing algorithms will cause 
> packet loss.
> 
> David Lang
> _______________________________________________
> aqm mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/aqm
