Hi folks,

 I am currently setting up monitoring for a cluster, where the requirement is to 
have additional monitoring intervals. We want to see things like "20-minutes", 
"8-hours", "2-weeks", "3-months" and "6-months". Doing so seems easy, but I have 
a question about the RRA definitions.

 The default setup seems to be (assuming a 15-second polling interval):

hour  -> "RRA:AVERAGE:0.5:1:244"
day   -> "RRA:AVERAGE:0.5:24:244"
week  -> "RRA:AVERAGE:0.5:168:244"
month -> "RRA:AVERAGE:0.5:672:244"  (more like 4 weeks :-)
year  -> "RRA:AVERAGE:0.5:5760:374" (367.86 days)
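 For reference, the wall-clock span each RRA covers is just base step x steps
per row x rows. A quick sketch (assuming the 15-second polling interval above;
the figures are only the defaults quoted here, not anything verified against
the Ganglia sources):

```python
# Rough check of how much wall-clock time each default RRA covers,
# assuming a 15-second base polling interval.
BASE_STEP = 15  # seconds

# (name, base steps per consolidated row, number of rows)
rras = [
    ("hour",  1,    244),
    ("day",   24,   244),
    ("week",  168,  244),
    ("month", 672,  244),
    ("year",  5760, 374),
]

for name, steps, rows in rras:
    span = BASE_STEP * steps * rows  # total seconds covered
    print(f"{name:5s}: {span / 86400:8.2f} days ({span} s)")
```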


 So from hour to month we have 244 datapoints each, with nicely increasing 
steps (1, 24, 7*24, 4*7*24). So why do we do it differently for the year? I 
would have expected the "year" RRA to be "RRA:AVERAGE:0.5:8784:244" (366 days). 
Is there a particular reason for this?
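 The ratio argument sketched as arithmetic (again just the numbers quoted
above, nothing checked against the actual Ganglia defaults):

```python
# Per-row step counts of the default RRAs, relative to the hour RRA.
steps = {"hour": 1, "day": 24, "week": 168, "month": 672, "year": 5760}

# hour -> day -> week -> month grow by clean integer factors ...
assert steps["day"] == 24 * steps["hour"]    # x24
assert steps["week"] == 7 * steps["day"]     # x7
assert steps["month"] == 4 * steps["week"]   # x4

# ... but month -> year does not (5760 / 672 is not an integer).
print(steps["year"] / steps["month"])

# Following the pattern, a 366-day year would use 366 * 24 = 8784
# steps per row, i.e. "RRA:AVERAGE:0.5:8784:244".
assert 366 * 24 == 8784
```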

Cheers
Martin
------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www:   http://www.knobisoft.de


_______________________________________________
Ganglia-developers mailing list
Ganglia-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-developers
