Steven Wagner wrote:
matt massie wrote:
guys-
i just checked out our latest source on freebsd and solaris and it
wasn't happy. i was a little too naive about the way i stitched in the
libdnet source. i've added the necessary files to make ganglia happy
again on solaris, freebsd and likely other oses as well (since i added
the complete intf and eth support files, autoconf tests, etc).
i added the mtu_func to solaris.c (it was a simple cut and paste from
linux.c).
since i'm running the monitoring core on source forge's compile farm,
i'm not able to test them as i'd like. let me know what you guys
find on FreeBSD, Solaris, et al.
Basically what it comes down to is that top (and, as far as I know, the
Linux kernel) does a weighted average between the last calculated value
and the current value, adjusting the weight according to how much time
has elapsed since the last update. My code isn't doing this, so it looks
very boring.
[but don't hold the release up on my account]
Actually, top is sneakier than I thought. The code that I thought was
doing CPU utilization was the per-process cycle counter, not the per-state
cycle counter. The percentage calculation for CPU states is in another
(non-Solaris-specific) function, and uses Crazy Number Magic[tm] to get the
percentages it does.
Sheesh, I thought you'd just take the difference between your last measured
value and the current measured value for each state counter, divide
that by the sum of the differences across all the state counters, multiply by
a hundred, maybe shave it to a couple of significant figures, and *bam* — you've
got the percentages for each state...
Anyway, diff enclosed. It has some code that, as they say, "has no direct
application" at this time, but it will be used for some lightweight process
work that I'm going to have to do in order to get the proc_run metric.
And in the spirit of 'fattest disk partition,' how about 'busiest process'
stats? CPU/mem/owner + name + args? Just a thought...