In message <[EMAIL PROTECTED]>, Bruce Evans writes:
>On Tue, 17 Jul 2001, Ian Dowse wrote:
>> effect in the load calculation, but even for the shorter 5-minute
>> timescale, this will average out to typically no more than a few
>> percent (i.e. the "5 minutes" will instead normally be approx 4.8
>> to 5.2 minutes). Apart from a marginally more wobbly xload display,
>> this should not make any real difference.
>
>It should average out to precisely the same as before if you haven't
>changed the mean (mean = average :-).  The real difference may be
>small, but I think it is an unnecessary regression.

I meant the "5-minute average" that is computed; it will certainly
not be precisely the same as before, though it will be similar.

>from 0 very fast.  Even with a large variation, the drift might not be
>fast enough.

Actually, it's not too bad with a +-1 second variation, which is
why I chose a value that large. If you plot 60 samples (60 is the
number of 5-second intervals in the 5-minute load average timescale)
you get a relatively good dispersion of points throughout the
5-second interval. Try pasting the following into gnuplot a few
times:

plot [] [-2.5:2.5] \
"<perl -e 'for (1..60){$a+=4+rand()*2; $o=$a-5*int(($a+2.5)/5); \
print \"$o\n\"}'" t "1 second", \
"<perl -e 'for (1..60){$a+=4.99+rand()*.02; $o=$a-5*int(($a+2.5)/5); \
print \"$o\n\"}'" t "0.01 second"

It shows that while a +-1 second variation results in samples that
are usually scattered well across the 5-second interval, a +-1 tick
variation never changes the sampling point much during that time.
If you have a worst-case type load pattern such as that caused by

perl -e 'for(;;){while((time-1)%5>1){}select(undef,undef,undef,2.5)}'

(5-second period, 50% duty cycle) then the interference pattern
resulting from a +-1 tick variation has a period that is typically
days long! Of course the interference pattern caused by the above
script has an infinitely long period with the old load average
code: the sampling point never moves relative to the load pattern,
so the load average never reflects that the %CPU usage is approx 50%.

>> The alternative that I considered was to sample the processes once
>> during every 5-second interval, but to place the sampling point
>> randomly over the interval. That probably results in a better
>
>I rather like this.  With immediate update, it's almost equivalent to
>your current method with a random variation of between -5 and 5 seconds.
>With delayed update, it doesn't really reduce the jitter -- it just
>concentrates it into a smaller interval.

When I tried this approach (with immediate update), I didn't like
the result; compared to the smooth decay that I'm used to, the way
it sometimes changed twice in short succession and sometimes did
not change for nearly 10 seconds was quite noticeable. I'd be quite
happy to go with the delayed version
of this, though it does mean having two timer routines, and storing
the `nrun' somewhere between samples and updates.
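The two-routine arrangement might look something like the following sketch. All names here are illustrative, not from the FreeBSD source, and the fixed-point arithmetic the kernel actually uses is replaced by plain doubles:

```c
/*
 * Hypothetical sketch of the delayed variant: one timer samples nrun
 * at a random point inside each 5-second interval; a second timer
 * fires exactly at the interval boundary and applies the stored value
 * to the load-average calculation.
 */

static int sampled_nrun = 0;	/* stored between the two timer routines */

/* Fires at a random offset within the current 5-second interval. */
static void sample_timer(int nrun_now)
{
	sampled_nrun = nrun_now;
}

/*
 * Fires at each 5-second boundary; feeds the stored sample into the
 * usual exponential-decay update.  cexp is the precomputed decay
 * factor for the chosen timescale.
 */
static double update_timer(double avenrun, double cexp)
{
	return avenrun * cexp + (double)sampled_nrun * (1.0 - cexp);
}
```

The point is only that `nrun' must live somewhere between the randomly-timed sample and the fixed-time update; the update itself is unchanged.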

>hopefully rare.  Use a (small) random variation to reduce phase effects
>for such processes.  I think there are none in the kernel.  I would try
>using the following magic numbers:
>
>    sample interval = 5.02 seconds (approx) (not 5.01, so that the random
>                                             variation never gives a multiple
>                                             of 1.00)
>    random variation = 0+-0.01 seconds (approx)