Just to illustrate: suppose you have a filter with a 10ms time constant, and a CPU burst that lasts 100µs.
Original spike
5 |
4 |
3 |
2 |
1 |
0 |
0ms_______________________10ms

Filtered spike
5
4
3
2
1  .....................
0..                     ..
0ms_______________________10ms
Not only is the filtered spike much lower, it also lasts far beyond the
100µs burst (roughly the full 10ms). Why would that be used in something
that should represent CPU usage?
Peace Be With You.
On Sun, 30 Sep 2012 13:44:14 +0200, Uwaysi Bin Kareem
<[email protected]> wrote:
Hiya. I just had an initial look at fair.c
There seems to be a 10ms averager in there?
Are you aware that this means the scheduler works on delayed values?
Isn't that counterintuitive to the principle of sharing?
It means short bursts of CPU use will be filtered out and given less
CPU time.
Starting applications won't have their CPU usage registered before ~5ms,
which is quite a while on modern machines. That is, if you use a
linear-phase filter; I don't know what kind of averager you use. The best
would of course be a minimum-phase Gaussian averager, which might be
overkill. At least use a one-pole IIR:

    buf = buf + (-buf + in) * cut;

One-pole IIRs also have a better frequency response.
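
As a minimal sketch of that one-pole averager (plain userspace C; the
names and the double-precision state are my own illustration, not what
fair.c actually uses):

    #include <stdio.h>

    /* One-pole IIR low-pass (exponential moving average):
     * state += (in - state) * cut, with 0 < cut <= 1.
     * Larger cut = faster response, less smoothing. */
    static double iir_update(double *state, double in, double cut)
    {
            *state += (in - *state) * cut;
            return *state;
    }

    int main(void)
    {
            double buf = 0.0;
            /* Feed a one-sample spike of height 5, then silence,
             * and watch the output decay. */
            printf("%f\n", iir_update(&buf, 5.0, 0.22));
            for (int i = 0; i < 5; i++)
                    printf("%f\n", iir_update(&buf, 0.0, 0.22));
            return 0;
    }

Note how the 5.0 spike only ever reaches 1.1 at the output and then
decays over many samples, which is exactly the low, stretched-out shape
in the figure above.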
When you are working with low latencies, wouldn't it be better if such
things were tuned to the target latency? I think few care about latency
beyond 0.2ms, so say the filter should be set to 0.4ms at most.
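
For concreteness, the coefficient for a given time constant follows from
the filter's step response; a small helper (the 100µs sample period below
is just my assumption to match the example above, not a value from the
scheduler):

    #include <math.h>

    /* cut = 1 - exp(-dt / tau): the filter state then decays toward
     * the input with time constant tau when updated every dt. */
    static double iir_cut(double dt, double tau)
    {
            return 1.0 - exp(-dt / tau);
    }

    /* e.g. iir_cut(0.0001, 0.0004) ~= 0.22 for a 0.4ms time
     * constant sampled every 100µs. */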
Why would you want to filter CPU usage at all, really?
Peace Be With You.
(please CC me.)