Simon,
Yes, if you're going to start auto-adjusting a hard-coded parameter, you
have to first question whether it was right to choose that parameter to
hard-code in the first place.
Bob
On 03/07/15 18:34, Simon Barber wrote:
Hi Bob,
Very interesting to see this. I had just recently privately proposed
an extension to Codel - to auto tune the target parameter. The
proposal is to observe the characteristics that are exhibited when
target is too large or too small, and make adjustments appropriately.
I.e., if you make a single drop during an interval, and the flow's
response is to go idle (even momentarily), then perhaps the target is
too small; using some rule, you could then increase the target.
Conversely you can heuristically identify when target is likely too
large, and reduce it.
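[A minimal sketch of what such an auto-tuner might look like. All names,
thresholds and the "too large" condition below are my own illustrative
assumptions, not from any CoDel implementation or from Simon's proposal:]

```python
class TargetTuner:
    """Hypothetical auto-tuner for CoDel's 'target' parameter."""

    def __init__(self, target_us=5000, lo_us=1000, hi_us=20000, step=1.1):
        self.target_us = target_us   # current target (microseconds)
        self.lo_us = lo_us           # clamp bounds (assumed values)
        self.hi_us = hi_us
        self.step = step             # multiplicative adjustment factor

    def on_interval_end(self, drops_in_interval, link_went_idle):
        # Heuristic from the proposal: a single drop in an interval that
        # sends the flow idle (even momentarily) suggests target is too
        # small, so increase it.
        if drops_in_interval == 1 and link_went_idle:
            self.target_us = min(self.target_us * self.step, self.hi_us)
        # Placeholder converse heuristic (my assumption): a standing queue
        # with no drops and no idle periods suggests target may be too
        # large, so reduce it.
        elif drops_in_interval == 0 and not link_went_idle:
            self.target_us = max(self.target_us / self.step, self.lo_us)
        return self.target_us
```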
Simon
On 7/3/2015 5:20 AM, Bob Briscoe wrote:
AQM chairs and list,
1) Delay-loss tradeoff
We (Koen de Schepper and I) have designed an AQM aimed at removing
the need for low delay QoS classes, initially as a cost/complexity
reduction exercise for broadband remote access servers (BRASs). One
of the requirements given to us was:
* As background load increases, delay-sensitive apps previously given
priority QoS treatment (e.g. voice, conversational video) should
continue to get the same QoS as they got with Diffserv.
We found that AQMs with a hard delay threshold (PIE, CoDel) have to
drive up loss really high in order to maintain the hard cap on delay.
The levels of loss start to cause QoS problems for voice, even though
delay is fine. Indeed, we found that the high levels of loss become
the dominant cause of delay for Web traffic, due to tail losses and
timeouts.
Everyone has been focusing on delay, but no-one has noticed the
consequent, really bad loss levels at high load.
Once you know where to look, the problem is easy to grasp: As load
increases, the bottleneck link has to get each TCP flow to go slower
to use a smaller share of the link. The network can increase either
drop or RTT. If it holds queuing delay (and therefore RTT) constant
(as PIE and CoDel do), it has to increase drop more.
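[The quadratic penalty behind this argument can be illustrated with the
simplified Mathis TCP model, rate ≈ MSS·√(3/2) / (RTT·√p). The capacity,
RTT and flow counts below are my own illustrative assumptions, not
numbers from the tech report:]

```python
import math

MSS_BITS = 1500 * 8       # packet size in bits (assumed)
C = 40e6                  # bottleneck capacity: 40 Mb/s (assumed)
BASE_RTT = 0.020          # 20 ms base RTT (assumed)
Q_DELAY = 0.005           # AQM holds ~5 ms queuing delay (assumed)

def required_drop(n_flows):
    """Drop probability so each of n flows fits in C/n, RTT held fixed.

    Inverts the simplified Mathis model:
        rate = MSS * sqrt(3/2) / (RTT * sqrt(p))
    =>  p    = (MSS * sqrt(3/2) / (rate * RTT)) ** 2
    """
    rate = C / n_flows
    rtt = BASE_RTT + Q_DELAY
    return (MSS_BITS * math.sqrt(1.5) / (rate * rtt)) ** 2

# With RTT pinned by the AQM, required drop grows as n_flows squared:
# quadrupling the flow count multiplies the required drop by 16.
for n in (1, 4, 16):
    print(n, required_drop(n))
```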
We found that by softening the delay threshold a little, at high load
we don't need crazy loss levels to keep delay within bounds.
BTW, the implementation needs fewer operations per packet than RED,
PIE or CoDel.
Conversely, at low load, a hard queuing delay threshold also means
that delay will be /higher/ than it needs to be.
I've written up a brief (4pp) tech report quantifying the problem
analytically.
<http://www.bobbriscoe.net/projects/latency/credi_tr.pdf>
Koen and colleagues have since done thousands of experiments on their
broadband testbed with real equipment. It's looking good, even before
we've explored varying what we call the 'curviness' parameter (which
varies how hard the target is). We have a paper under submission with
all the results, which we'll post as soon as it's not sub judice.
2) Does Flow Aggregation Increase or Decrease the Queue?
Something else had been bugging me about how queue lengths vary with
load: The above argument explains how more TCP flows /increase/ the
queue. But queues are meant to get /smaller/ at higher levels of
aggregation.
The second half of the above tech report explains why there's no
paradox. And it goes on to explain when you have to configure an AQM
with different parameters for higher link capacity, and when you
don't. It gives the formula for how to set the config too.
Writing this all down cleared up a lot of nagging doubts I had in my
mind. I hope it helps others too.
Bob
PS. Note my new interim email @.
--
________________________________________________________________
Bob Briscoe http://bobbriscoe.net/
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm