Michael,
Yes, I should have said that the most useful aspect of your paper is the
model that ties all the different algorithms together, allowing comparison.
Bob
On 25/05/17 08:55, Michael Menth wrote:
Hi Bob, hi Rong,
Am 25.05.2017 um 00:34 schrieb Bob Briscoe:
Michael,
I just started reading your paper. Two comments so far:
1) PIE rate averaging is an unfortunate example

In related work, you mention PIE's measurement of the link rate as an
example of use of moving averages. This is an unfortunate example,
because the PIE code screws up the calculation. It averages a fraction
by averaging the denominator, which I pointed out is wrong in Section
5.3.2 of my PIE review <http://bobbriscoe.net/pubs.html#piervw_tr>.
Quoting...
"The [PIE] pseudocode continually measures the sequence of dequeue
times, t1, t2, t3,... that it takes to transmit constant amounts of
bytes (DQ_THRESHOLD). Then the exponentially weighted moving average
(EWMA) of the rate should be:
ewma(depart_rate) = DQ_THRESHOLD ∗ ewma(1/t1,1/t2,1/t3,...)
!= DQ_THRESHOLD / ewma(t1,t2,t3,...)
"
PIE uses the second (incorrect) formula. In the review, I discuss how
wrong this could be, with an example.
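To make the difference concrete, here is a small sketch (my own illustration, not the actual PIE code; the DQ_THRESHOLD value, weight and dequeue times are made up) comparing the two formulas when dequeue times vary:

```python
# Sketch comparing the two rate-averaging formulas above.
# Illustrative values only; not taken from the PIE implementation.
DQ_THRESHOLD = 16384  # bytes measured per dequeue interval (assumed value)
a = 0.75              # EWMA history weight (assumed value)

def ewma(samples, a):
    """Plain iterative EWMA, seeded with the first sample."""
    avg = samples[0]
    for x in samples[1:]:
        avg = (1 - a) * x + a * avg
    return avg

# Dequeue times alternating between a fast and a slow interval (seconds).
times = [0.001, 0.004] * 50

# Correct: average the rates (reciprocals of the times), then scale.
rate_correct = DQ_THRESHOLD * ewma([1.0 / t for t in times], a)
# Incorrect (as in the PIE pseudocode): average the times, then invert.
rate_wrong = DQ_THRESHOLD / ewma(times, a)
```

Because averaging commutes with multiplication but not with division, the second formula systematically under-estimates the rate whenever the dequeue times vary (Jensen's inequality); the two only agree when every dequeue takes exactly the same time.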
Thanks, Bob, for pointing this out to me.
Rong, is PIE doing this by intent (if so, what's the reason?) or is this
a flaw?
2) Exponential Moving Average is an Approximation

In your moving averages paper, you compare your unbiased exponential
moving average (UEMA) algorithm with this commonly used exponential
moving average (EMA) algorithm:
ewma(X) <- (1-a)*X + a*ewma(X)
A few years ago I wondered about this comparison, and wrote a 2-page
memo for myself to record the proof. I've temporarily uploaded it here:
Moving Averages <http://bobbriscoe.net/tmp/ewma.pdf>.
I used a Taylor series expansion to show that the iterative formula
tends to the exact formula (what you call the unbiased EMA or UEMA) as
the number of readings (n) becomes large.
When n is not large, you could use the formula at the top of my p2 (just
before the Taylor series approximation) to calculate the ratio between
the actual (unbiased) UEMA and my regular iterative EMA (REMA). This
ratio can be pre-calculated, because it is independent of the readings:
REMA/UEMA = (1-a)*(Sum_{j=0}^{n-1} a^j)
where
"theta" in my formula is "a" in yours
"Sum_{j=0}^{n-1}" means "the sum from j=0 to j=n-1"
Your iterative formula (EMA) treats the first reading as an exception,
whereas my iterative formula starts exactly as it carries on. To
distinguish between them, I have called mine the "regular EMA" or REMA.
For instance, if a = 0.75 as in your Fig 1(c), then the correction
factors for various small numbers of readings, n, are:
n UEMA/REMA
_____________________
1 2.29
2 1.73
3 1.46
4 1.31
5 1.22
6 1.15
7 1.11
8 1.08
9 1.06
10 1.04
11 1.03
12 1.02
13 1.02
14 1.01
15 1.01
16 1.01
17 1.01
18 1.00
19 1.00
20 1.00
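The table can be reproduced in a few lines (my sketch; note that the tabulated values appear to match the closed form 1/(1 - a^(n+1)), i.e. the geometric sum taken over n+1 terms, so the table's n seems to differ by one from the n in the summation formula above):

```python
# Sketch reproducing the UEMA/REMA correction factors for a = 0.75.
# The tabulated values match 1 / (1 - a**(n+1)), i.e. the geometric sum
# taken over n+1 terms; the indexing of n relative to the summation
# formula above may therefore differ by one.
a = 0.75
factors = {n: round(1.0 / (1.0 - a ** (n + 1)), 2) for n in range(1, 21)}
```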
Yes, the bias of E(W)MA is caused by the first sample and vanishes over
time. If EMA runs continuously, it has no impact. If EMA is continuously
restarted, it may have an impact. However, this is only a minor aspect
of the paper.
https://atlas.informatik.uni-tuebingen.de/~menth/papers/Menth17c.pdf
The major contribution of the paper is a framework that brings together
various types of moving averages, moving histograms and rate measurement
methods. In particular, it relates E(W)MA's "magic weight parameter" to
a memory which is also implemented by other methods (with other
parameters). The concept of a memory simplifies many engineering problems.
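As a rough illustration of relating the weight parameter to a memory (a rule-of-thumb sketch of my own; the paper's formal "memory" definition may differ):

```python
# Rough rule-of-thumb sketch (not the paper's formal definition):
# in steady state, the k most recent samples of an EWMA with history
# weight a carry combined weight 1 - a**k, so roughly 1/(1-a) samples
# account for most of the average's "memory".
def recent_weight(a, k):
    """Combined steady-state weight of the k most recent samples."""
    return 1.0 - a ** k

def effective_memory(a):
    """Approximate number of samples the EWMA effectively remembers."""
    return 1.0 / (1.0 - a)
```

For a = 0.75, for example, the last four samples already carry about 68% of the total weight, and the effective memory is four samples.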
Regards,
Michael
Cheers
Bob
On 30/03/17 10:57, Michael Menth wrote:
Hi all,
Am 30.03.2017 um 11:08 schrieb Jonathan Morton:
On 30 Mar, 2017, at 10:46, Bob Briscoe <[email protected]> wrote:
For PI2 we removed all but 2 and it worked the same or better than PIE in all
our tests. I have been assessing each of the other 7 one by one for
reinstatement. So far I've rejected 6. I think I can reject this last one by
making the sampling time of the base PI algo dependent on the max link rate.
Then when the queue goes idle, the base PI algo will decay the drop
probability down to zero no slower than the queue drains, without needing
this extra heuristic.
That’s fair enough.
It sounds like the fairly coarsely discrete time intervals in PIE are the main
justification for this particular heuristic, so it might be sufficient to
document that WRT PIE itself. Using finer time intervals is clearly a better
choice for the future.
PIE uses time intervals for measurement purposes and several parameters
contribute. We've recently done some basic work on measurement
methodology that facilitates a comparison of different measurement
approaches and better-informed parametrization by introduction of the
"memory" concept.
https://atlas.informatik.uni-tuebingen.de/~menth/papers/Menth17c.pdf
PIE essentially implements TDRM-DTWMA-UEMA illustrated in Fig. 6d.
The concept of "memory" can also be applied to moving averages which are
also used in PIE for several purposes. Configuration via a "memory" can
make some heuristics more intuitive.
Best wishes,
Michael
- Jonathan Morton
--
________________________________________________________________
Bob Briscoe http://bobbriscoe.net/
_______________________________________________
aqm mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/aqm