Jonathan,
On 22/11/16 20:37, Jonathan Morton wrote:
> On 22 Nov, 2016, at 21:09, Bob Briscoe <[email protected]> wrote:
>> {Note 1} I have never got a good answer to my questions on aqm@ietf as to
>> why a sqrt that controls the shrinkage of the spacing between dropped
>> packets has something to do with the steady state law of Reno,
>> particularly because the law leads to linear growth in p over time.
> If you have intervals between events which follow a 1/sqrt(N) sequence,
> where N is the number of preceding events, you get an event frequency
> which increases linearly with time.
Yes, I pointed that out originally
<https://www.ietf.org/mail-archive/web/aqm/current/msg00376.html>. And
Toke confirmed in response that it did indeed happen in practice.
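To see this numerically, here is a quick sketch (the constant 100 ms interval, the fresh dropping state, and the 20,000-drop horizon are all illustrative assumptions, not a model of a full CoDel implementation):

```python
import math

INTERVAL = 0.1  # CoDel's interval (default 100 ms); assumed constant here

# In CoDel's dropping state the gap before the k-th drop is INTERVAL/sqrt(k).
# Accumulate the drop times and estimate the instantaneous drop frequency
# from the gap between consecutive drops.
t = 0.0
times = []
for k in range(1, 20001):
    t += INTERVAL / math.sqrt(k)
    times.append(t)

for n in (100, 1000, 10000):
    gap = times[n] - times[n - 1]
    freq = 1.0 / gap  # drops per second around drop n
    print(f"t = {times[n]:7.2f} s   freq = {freq:8.1f}/s   "
          f"freq/t = {freq / times[n]:.1f}")
```

The freq/t column settles toward 1/(2*INTERVAL^2) = 50, i.e. the drop frequency - and hence p, for a fixed packet rate - grows linearly with time, as stated above.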
> This applies directly to Codel’s signalling strategy, which is to start at
> one mark per (assumed) RTT, and to increase the marking frequency if that
> was insufficient to control the queue.
> When you have multiple Reno flows sharing a single queue, there is a
> sqrt(N) factor in several of the characteristics, where N is the number of
> flows. When such a shared link becomes saturated, all of the flows must be
> signalled to slow down, but for stochastic reasons it’ll probably take
> more than N signalling events to do so. Increasing the signalling
> frequency while the queue remains insufficiently controlled has a good
> chance of quickly finding all the flows, while dropping relatively few
> packets (of non-ECN flows).
Just throwing a square root in "somewhere" doesn't mean it is the
correct "somewhere".
A stated goal of the sqrt in the CoDel control law is to match the
1/sqrt(p) in TCP Reno's window formula. Quite aside from whether that is
a correct goal, it isn't even doing that:
* ACK-clocked load (the number of flows, N) is proportional to 1/cwnd, i.e.
proportional to sqrt(p) [see the 2nd para of section 4 of the PI2 paper
<http://www.bobbriscoe.net/pubs.html#PI2>]
* because CoDel applies a sqrt to the interval between drops, the result
is a linear increase in p with time (the sqrt effectively gets squared -
see the simple maths in my original posting about this
<https://www.ietf.org/mail-archive/web/aqm/current/msg00376.html>).
So, the question is: "Why is a linear increase in p (starting from 0)
good for controlling load from N flows, where N is proportional to sqrt(p)?"
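To put rough numbers on that question: under Reno's steady-state law, cwnd is proportional to 1/sqrt(p), so the equilibrium p needed to hold N flows in a fixed pipe grows as N^2, and a linear ramp in p takes N^2 times longer to get there. A sketch (the pipe size B and ramp rate r are purely illustrative assumptions):

```python
# Under Reno's steady-state law, cwnd ~ sqrt(3/2) / sqrt(p) packets per RTT.
# N flows filling a fixed pipe of B packets each get cwnd = B/N, so the
# equilibrium drop probability p* grows as N^2.  If p ramps up linearly at
# some fixed rate r (as the CoDel control law does, per the sqrt-squaring
# argument above), the time to reach p* also grows as N^2.
B = 1000.0   # pipe size in packets (bandwidth-delay product) - illustrative
r = 0.001    # linear ramp rate of p per second - illustrative

for n_flows in (1, 4, 16):
    cwnd = B / n_flows              # per-flow window at saturation
    p_star = 1.5 / cwnd ** 2        # invert cwnd = sqrt(1.5 / p)
    t_reach = p_star / r            # time for the linear ramp to reach p*
    print(f"N = {n_flows:2d}   p* = {p_star:.2e}   ramp time = {t_reach:8.3f} s")
```

Each 4x increase in N makes the ramp take 16x longer to reach the drop level the traffic actually needs.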
> I’m reasonably convinced that Codel is a near-optimal solution to
> congestion signalling on TCP-friendly flows.
The word "optimal" has a precise meaning. I think you mean simply "I am
predisposed to CoDel."
Whether the control law increases p linearly with time, with the sqrt of
time, or with any other function of time is not the point anyway. Time is not the
correct unit for this control law. The CoDel control law has no
variables in it (other than the point at which it starts) that depend on
any feature of the traffic. Once the CoDel control law starts, it just
blindly increases until it reaches a high enough drop level to control
the traffic. So the higher p needs to be, the longer it takes. And the
lower p needs to be, the more it will overshoot within a round trip.
A constant increase in p with time, with no dependency on the traffic, is
just plain wrong.
No way will that result in anything that anyone could prove was
"optimal" even if you put caveats around it like "near-optimal" or
"reasonably convincingly near-optimal".
Perhaps this helps you to see why claims of near-optimality say more
about the political or religious zeal of the person making the claim,
than they do about CoDel itself.
> Regrettably, the latter are not the only type of traffic actually found on
> the Internet.
> Really, the central assumption of Codel is that each flow requires only
> one congestion signal event per RTT to cause it to back off. Codel stops
> working well on traffic which doesn’t obey that assumption (a linear
> increase in drop frequency is inadequate to mitigate a flood - you need to
> work with drop probabilities for that), but it *does* work acceptably well
> with multiple flows sharing a queue, due to this operating-point search.
Similarly, isn't the phrase "acceptably well" another way of saying "we
don't need to consider any other AQM that might handle a wider range of
load scenarios better"? Even though there is already an alternative
available that increases the drop level depending on how fast the queue
is growing.
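For concreteness, here is the shape of that alternative - a PI-style update of the kind used by PIE and PI2, sketched from the general principle rather than any particular implementation; the gains and target below are illustrative values, not the constants from either spec:

```python
ALPHA = 0.125   # gain on queue error - illustrative
BETA = 1.25     # gain on queue growth - illustrative
TARGET = 0.015  # target queuing delay in seconds - illustrative

def update_p(p, qdelay, qdelay_prev):
    """One control-interval update: p moves in proportion both to how far
    the queuing delay is from target and to how fast it is growing."""
    p += ALPHA * (qdelay - TARGET) + BETA * (qdelay - qdelay_prev)
    return min(max(p, 0.0), 1.0)

# A growing queue drives p up quickly; a steady one makes it level off.
p = 0.0
for qd, qd_prev in [(0.02, 0.01), (0.05, 0.02), (0.05, 0.05)]:
    p = update_p(p, qd, qd_prev)
    print(f"qdelay = {qd:.3f} s   p = {p:.4f}")
```

Unlike a blind ramp, the rate of increase here depends directly on the traffic: heavier load grows the queue faster and so raises p faster.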
This religious belief in a particular technology is not healthy. Please
can we have some objectivity here.
Regards
Bob
--
________________________________________________________________
Bob Briscoe http://bobbriscoe.net/
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm