First, these algorithms use drop and ECN in addition to delay to trigger 
congestion avoidance behaviors. Having said that... yes, using delay as a 
trigger for a host-based congestion control algorithm has merit when delay has 
an unambiguous correspondence to congestion. In a tail-drop world, that was 
certainly the case. In an AQM world, it is much less clear.
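
For context, the core mechanism in CDG (the algorithm Mikael asks about below) 
is a probabilistic backoff driven by the RTT gradient. A minimal sketch of that 
decision in C follows; the parameter values (G, BETA) are illustrative 
assumptions, and the real Linux tcp_cdg module uses scaled fixed-point 
arithmetic and a lookup table rather than libm:

/*
 * Minimal sketch of a CDG-style delay-gradient backoff decision.
 * Parameter values are illustrative only; the real Linux tcp_cdg
 * module uses scaled fixed-point math and a negative-exponential
 * lookup table instead of exp().
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define G    3.0   /* gradient scaling parameter (assumed value) */
#define BETA 0.7   /* multiplicative backoff factor (assumed value) */

/* Probability of backing off, given the smoothed RTT gradient in ms/RTT. */
static double backoff_probability(double gradient_ms)
{
    if (gradient_ms <= 0.0)
        return 0.0;                 /* RTT not rising: never back off */
    return 1.0 - exp(-gradient_ms / G);
}

int main(void)
{
    double cwnd = 100.0;            /* congestion window, in packets */
    double gradients[] = { -1.0, 0.5, 2.0, 5.0 };   /* example smoothed gradients */

    srand(42);
    for (int i = 0; i < 4; i++) {
        double p = backoff_probability(gradients[i]);
        int backoff = ((double)rand() / RAND_MAX) < p;

        if (backoff)
            cwnd *= BETA;           /* delay-triggered multiplicative decrease */
        printf("gradient=%5.1f ms  P(backoff)=%.2f  backoff=%d  cwnd=%.1f\n",
               gradients[i], p, backoff, cwnd);
    }
    return 0;
}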

There can be delay in the absence of congestion (noise-driven loss being 
corrected by FEC on wireless networks, radio-driven changes in packet delay, 
reconvergence/multipath, etc.). There can also be congestion in the absence of 
delay (AQM schemes that drop before building substantial delay). So delay-based 
algorithms will get ambiguous signals. I would add that packet loss is also an 
ambiguous signal, as noise-driven loss is difficult to distinguish from 
congestion-driven loss.

Hopefully, the new AQM algorithms will find wide adoption. This will lead to 
tens of milliseconds of variable buffer delay, rather than hundreds. If and 
when that happens, a host will be much less able to use delay as a congestion 
signal.

So, delay can be one of the signals that a host uses to trigger congestion 
avoidance, but this is becoming increasingly complex. The host also needs to 
factor in packet loss and, ideally, ECN markings. As we know, ECN has been very 
slow to roll out. I suspect that loss is going to be the dominant congestion 
signal in the future.
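
To make the multiple-signals point concrete, a toy decision function might look 
like the sketch below. The thresholds and reactions are assumptions for 
illustration, not what any deployed stack does; note the last test case, loss 
with no delay at all, which is exactly the policer scenario in Mikael's 
question further down.

/*
 * Toy sketch of a sender combining loss, ECN, and delay signals.
 * Thresholds and reactions are illustrative assumptions only.
 */
#include <stdbool.h>
#include <stdio.h>

struct signals {
    bool   loss;       /* loss detected in the last RTT */
    bool   ecn_ce;     /* ECN CE mark echoed by the receiver */
    double delay_ms;   /* queueing delay estimate above base RTT */
};

static double react(double cwnd, const struct signals *s)
{
    if (s->loss || s->ecn_ce)
        return cwnd * 0.5;          /* classic multiplicative decrease */
    if (s->delay_ms > 10.0)         /* assumed threshold; an AQM may never
                                       let delay grow this far */
        return cwnd * 0.8;          /* gentler delay-based reduction */
    return cwnd + 1.0;              /* additive increase otherwise */
}

int main(void)
{
    struct signals cases[] = {
        { false, false,  2.0 },     /* no congestion signal              */
        { false, false, 25.0 },     /* delay only                        */
        { false, true,   3.0 },     /* ECN mark with low delay           */
        { true,  false,  1.0 },     /* loss with no delay (policer-like) */
    };
    double cwnd = 50.0;

    for (int i = 0; i < 4; i++) {
        cwnd = react(cwnd, &cases[i]);
        printf("case %d -> cwnd %.1f\n", i, cwnd);
    }
    return 0;
}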

This makes distinguishing between noise-driven loss and congestion-driven loss 
more important. I wonder if there is a way to have a middlebox that experiences 
noise-driven loss do ECN-like signaling to tell the endpoints: "I just had some 
noise-driven loss, so you may want to interpret at least some of the recent 
packet loss as not related to congestion." This does require flow-aware 
middleboxes and introduces much of the ECN complexity. Perhaps there is a way 
to layer loss signaling onto the existing ECN infrastructure? I am torn on this 
notion, given the low ECN uptake.
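
Purely as a thought experiment, the endpoint side of such a signal might look 
something like the sketch below. The loss_report structure and its fields are 
invented for illustration; no such hint exists in any deployed protocol.

/*
 * Hypothetical endpoint-side handling of a "noise-driven loss" hint.
 * The hint does not exist anywhere today; this only illustrates how a
 * sender might discount losses a middlebox has flagged as noise.
 */
#include <stdio.h>

struct loss_report {
    int losses_total;
    int losses_flagged_noise;   /* hypothetical middlebox signal */
};

static double react_to_losses(double cwnd, const struct loss_report *r)
{
    int congestion_losses = r->losses_total - r->losses_flagged_noise;

    if (congestion_losses > 0)
        return cwnd * 0.5;      /* at least some loss looks like congestion */
    if (r->losses_total > 0)
        return cwnd;            /* all losses flagged as noise: hold cwnd and
                                   just repair (retransmit/FEC) */
    return cwnd + 1.0;          /* no loss at all: keep growing */
}

int main(void)
{
    struct loss_report all_noise = { .losses_total = 3, .losses_flagged_noise = 3 };
    struct loss_report mixed     = { .losses_total = 3, .losses_flagged_noise = 1 };

    printf("all flagged noise: cwnd 50 -> %.1f\n", react_to_losses(50.0, &all_noise));
    printf("mixed losses:      cwnd 50 -> %.1f\n", react_to_losses(50.0, &mixed));
    return 0;
}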


Bvs


-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf Of Mikael Abrahamsson
Sent: Tuesday, May 26, 2015 7:31 AM
To: [email protected]
Subject: [Bloat] CDG


Hi,

I just read https://lwn.net/Articles/645115/ about CDG congestion control.

After reading the article, I am left wondering how this kind of congestion 
control mechanism handles being exposed to a token bucket policer:

http://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-policing/19645-policevsshape.html

With this kind of rate limiting there is never any buffering or increase in 
latency; you only see packet drops, no other congestion signal.
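
For illustration, a token bucket policer boils down to something like the 
sketch below (the rate and burst values are made up). There is no queue 
anywhere, so drops are the only signal the sender ever sees:

/*
 * Minimal sketch of a token bucket policer. Rate and burst values are
 * made up. Conforming packets are forwarded immediately; excess packets
 * are dropped. No queue means latency never increases.
 */
#include <stdbool.h>
#include <stdio.h>

#define RATE_BPS (10.0 * 1000 * 1000 / 8)   /* 10 Mbit/s, in bytes per second */
#define BURST_B  15000.0                    /* bucket depth in bytes */

static double tokens = BURST_B;
static double last_t = 0.0;

static bool police(double now, int pkt_bytes)
{
    tokens += (now - last_t) * RATE_BPS;    /* refill since the last packet */
    if (tokens > BURST_B)
        tokens = BURST_B;
    last_t = now;

    if (tokens >= pkt_bytes) {
        tokens -= pkt_bytes;                /* conforming: forward right away */
        return true;
    }
    return false;                           /* exceeding: drop, never queue */
}

int main(void)
{
    /* Offer 1500-byte packets every 0.5 ms, i.e. about 24 Mbit/s
       against a 10 Mbit/s policer. */
    int passed = 0, dropped = 0;

    for (int i = 0; i < 1000; i++) {
        if (police(i * 0.0005, 1500))
            passed++;
        else
            dropped++;
    }
    printf("passed=%d dropped=%d (drops are the only congestion signal)\n",
           passed, dropped);
    return 0;
}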

Anyone have any insight?

-- 
Mikael Abrahamsson    email: [email protected]
_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat