Pandu,
Any modern monitoring framework/server with a web interface will have
tools to select which metrics to retrieve, store them in a database, and
display/graph/alert on them as needed, using whatever reasonable
collection interval you define.
If your metrics are relatively simple, you should be able to get a
solution implemented rather quickly without having to write any code of
your own, and the overhead/resources needed on your server would be
roughly proportional to the number of metrics collected and their
collection frequency.
My current monitoring tool of choice is Zabbix, but there are many options.
Matt
On Tue, Oct 11, 2011 at 3:21 AM, Mick <michaelkintz...@gmail.com> wrote:
On Tuesday 11 Oct 2011 10:48:31 Pandu Poluan wrote:
The head honcho of my company just asked me to plan for migrating
X into the cloud (where X is the online trading server that our
investors use).
Now, I need to monitor how much RAM is used throughout the day by X,
also how much bandwidth gets eaten by X throughout the day.
What tools do you recommend?
Remember: The data will be used for 'post-mortem' analysis, so I don't
need any fancy schmancy presentation. Just raw data, taken every N
seconds.
I have used MRTG and Nagios to capture and monitor both, but you'll have to
install and configure them.
If you're good with Perl or Python, then a simple script should be able to
capture such values and record them in a flat file, or even a database.
--
Regards,
Mick
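Mick's suggestion of a short script can be sketched in Python. This is a
minimal example, not anything from the thread: it assumes a Linux host,
reads used RAM from /proc/meminfo and cumulative RX/TX byte counters from
/proc/net/dev, and appends one CSV row every N seconds. The interface name
("eth0"), output file ("metrics.csv"), and function names are all
hypothetical placeholders.

```python
import re
import time

def parse_meminfo(text):
    """Return (MemTotal, MemFree) in kB from /proc/meminfo content."""
    fields = dict(re.findall(r"^(\w+):\s+(\d+) kB", text, re.M))
    return int(fields["MemTotal"]), int(fields["MemFree"])

def parse_netdev(text, iface):
    """Return cumulative (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
    for line in text.splitlines():
        if line.strip().startswith(iface + ":"):
            cols = line.split(":", 1)[1].split()
            # column 0 is RX bytes, column 8 is TX bytes
            return int(cols[0]), int(cols[8])
    raise ValueError("interface not found: " + iface)

def sample(iface):
    """Take one sample: (timestamp, used RAM in kB, rx bytes, tx bytes)."""
    with open("/proc/meminfo") as f:
        total, free = parse_meminfo(f.read())
    with open("/proc/net/dev") as f:
        rx, tx = parse_netdev(f.read(), iface)
    return int(time.time()), total - free, rx, tx

def main(interval=10, iface="eth0", path="metrics.csv"):
    # Append raw samples forever; post-mortem analysis can diff the
    # cumulative byte counters to get bandwidth per interval.
    with open(path, "a") as out:
        while True:
            ts, used_kb, rx, tx = sample(iface)
            out.write("%d,%d,%d,%d\n" % (ts, used_kb, rx, tx))
            out.flush()
            time.sleep(interval)

# To start collecting, call main() with your interval and interface.
```

Note the script records the kernel's cumulative RX/TX counters rather than
a rate; subtracting consecutive rows during analysis gives bytes per
interval, which avoids losing data to rounding at collection time.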
--
Matthew Marlowe
m...@professionalsysadmin.com
Senior Internet Infrastructure Consultant DevOps/VMware/SysAdmin
https://www.twitter.com/deploylinux Gentoo Linux Dev
Courage is not simply one of the virtues, but the form
of every virtue at the testing point. -- C.S. Lewis