We built our own utility: it simply stores whatever data is needed (nightly disk space, sar CPU information, etc.), plots the results using GNU plotutils, and serves the graphs through an Apache web server for viewing.  In the future we hope to use a database for storage instead of flat files and system data files.  Our clients have found it very helpful for exactly the reason you mentioned: justifying new server purchases.

 

=8-))
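For what it's worth, the flat-file collection step can be sketched in a few lines. This is a minimal sketch in Python 3, not our actual utility: the log file name and the record layout (epoch seconds, bytes used, bytes free) are illustrative choices of mine, and a nightly cron entry would just run the script.

```python
#!/usr/bin/env python3
"""Sketch of flat-file sampling: one timestamped record appended per run,
for a plotting tool (e.g. GNU plotutils' graph) to read later.
The file name and field layout here are illustrative, not our real format."""
import shutil
import time

def sample_line(path="/"):
    """Return one space-separated record: epoch seconds, bytes used, bytes free."""
    usage = shutil.disk_usage(path)  # total/used/free for the filesystem holding `path`
    return f"{int(time.time())} {usage.used} {usage.free}"

def append_sample(logfile, path="/"):
    """Append one sample to the flat file -- the body of a nightly cron job."""
    with open(logfile, "a") as fh:
        fh.write(sample_line(path) + "\n")

if __name__ == "__main__":
    append_sample("diskspace.log")  # hypothetical log file name
```

Run from cron once a night and the flat file grows into exactly the kind of time series you can feed to a plotting tool.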

 

-----Original Message-----
From: Kevin Anderson [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 25, 2002 2:10 PM
To: [EMAIL PROTECTED]
Subject: (clug-talk) Speaking of monitoring software...

 

Are there any good monitoring utilities that can provide a statistical graph over time?

 

Something that would allow me to track things like CPU time per day over the course of a year.  Ditto with Disk Space (though df and something like Excel could give me that).  Etc.

 

Like top, maybe.  But designed for a MUCH longer timeframe.

 

I want to be able to justify a larger server in a couple of years, and I'd like to say something beyond "it's time to upgrade".  I'd rather say: we've become more dependent on our technological systems.  When we started out 3 years ago, the servers' average utilization was 14% and free memory was about 75%.  Over that time we've become much busier, and now our average utilization is 85% and free memory is 0%.

 

I could use a cron job to dump the output from top periodically through the day, but I'd like to have gkrellm's charts spanning a period of several months at a time.  But...  what can build the data that would be the basis of that chart?  There must be files in /proc that show processor utilization?  # of users?  # of processes?  Disk free vs. disk used per partition?  Network utilization?  If I wrote them all to a database every minute or so, we'd have a very accurate chart after a while.  We'd know which days were busy every month, etc.
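The /proc files being guessed at above do exist on Linux. A hedged sketch of sampling two of them (aggregate CPU counters from /proc/stat and load plus process counts from /proc/loadavg; field positions are as documented in proc(5), but the function names are my own):

```python
#!/usr/bin/env python3
"""Sketch: read raw counters from /proc, suitable for logging every minute
from cron.  Linux-only; field layouts per proc(5)."""

def cpu_jiffies():
    """Parse the aggregate 'cpu' line (first line) of /proc/stat.
    Returns (busy, total) jiffy counts.  These are counters since boot, so
    utilization over an interval is delta(busy) / delta(total) between
    two samples, not a single reading."""
    with open("/proc/stat") as fh:
        fields = fh.readline().split()[1:]   # drop the leading 'cpu' label
    values = [int(v) for v in fields]
    idle = values[3]                         # 4th numeric field is idle time
    total = sum(values)
    return total - idle, total               # everything non-idle counted as busy

def load_and_procs():
    """Parse /proc/loadavg: 1-minute load average, plus the
    runnable/total process counts from the 4th field ('R/T')."""
    with open("/proc/loadavg") as fh:
        fields = fh.read().split()
    one_min = float(fields[0])
    running, total = fields[3].split("/")
    return one_min, int(running), int(total)
```

Writing these values to a database (or flat file) once a minute gives exactly the long-horizon data set gkrellm's short-lived charts are missing; disk free vs. used per partition comes from statfs/df rather than /proc.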

 

Any ideas?

Kev.
