On 2014-11-14, Cathey, Jim <[email protected]> wrote:
> Again, I'm not speaking as to what Linux does, exactly. But in
> general, things that are triggered on clocks and that _end_ before
> the next clock interrupt tend to be unnoticed when statistical
> sampling is used. If, for example, my clock-synchronized task
> consumes 90% of the CPU time before the next clock tick, but then
> finishes and goes idle, the sampler will never see it, and can report
> that the machine is 100% idle, when in fact it is only 10% idle.
That appears to be what is happening. If you are using
applications where a significant portion of the work is done in very
short, clock-triggered bursts, the values top shows for idle
time and per-process CPU usage are completely useless.
I finally just wrote an "idle" program in C that just does busy-work
incrementing some volatile unsigned values. It does a nice(19) and
runs a loop that does a fixed amount of "work" and measures how much
time elapsed via gettimeofday(). I calibrated it so that it does 100
outer loops per second on a truly idle system (it's stable to within 1
or 2 percent). When my clock driven applications are running, the
"idle count" program drops to a consistent 28 loops per second.
However, the CPU usage percentages top shows for the
clock-driven programs and for the idle program vary considerably (the
time shown for the idle program can be anywhere from 0% to 65%). The
values shown by top are fairly stable over time, but if I restart
the clock-driven programs they can jump to drastically
different values, even though the amount of CPU time the idle program
uses never changes (and I have no reason to believe the amount of
work done by the clock-driven programs changes either).
--
Grant Edwards grant.b.edwards Yow! But they went to MARS
at around 1953!!
gmail.com
_______________________________________________
busybox mailing list
[email protected]
http://lists.busybox.net/mailman/listinfo/busybox