> 
> Have you considered saving up updates and doing them (say) 10 at a time?
>
> Sure, this is going to be a problem for that one user who happens
> to want to see the data of the past few minutes.  But it would save
> you a lot of disk access, wouldn't it?
> 

I've done that with a daemon that keeps all the update data in memory
and writes it to the rrd at a regular interval (every 6 hours) with a low
priority, feeding << a lot >> of 5-minute updates at a time.

In parallel, this daemon listens for 'client' requests that want to
generate graphs, and it flushes the pending updates to the rrd before the
graph is generated.
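
The core of the idea, in a stripped-down Python sketch (the names and the
interval constant here are placeholders, my real daemon is more involved):

from collections import defaultdict
import subprocess
import threading
import time

FLUSH_INTERVAL = 6 * 3600    # placeholder: flush everything every 6 hours
pending = defaultdict(list)  # rrd file -> buffered "timestamp:v1:...:v10" strings
lock = threading.Lock()

def queue_update(rrd_file, timestamp, values):
    # keep the update in memory instead of touching the disk right away
    with lock:
        pending[rrd_file].append(
            "%d:%s" % (timestamp, ":".join(str(v) for v in values)))

def flush(rrd_file=None):
    # one rrdtool invocation per file, passing many updates at once
    with lock:
        targets = [rrd_file] if rrd_file else list(pending.keys())
        batches = [(f, pending.pop(f)) for f in targets if pending.get(f)]
    for f, updates in batches:
        subprocess.call(["rrdtool", "update", f] + updates)

def background_flusher():
    # the low-priority periodic flush
    while True:
        time.sleep(FLUSH_INTERVAL)
        flush()

threading.Thread(target=background_flusher, daemon=True).start()

# when a client asks for a graph of one file, flush that file first:
#   flush("/data/rrd/some-router.rrd")
# and then run rrdtool graph as usual.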

It allows me to use 'standard' servers (dual P3 1.2GHz, 2GB RAM, fast SCSI
disks) to update ~50'000 rrd files every 5 minutes, each rrd
having 10 DS and 4 RRA:
      RRA:AVERAGE:0.5:1:600
      RRA:AVERAGE:0.5:6:600
      RRA:AVERAGE:0.5:24:600
      RRA:AVERAGE:0.5:288:800
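
For reference, each such rrd can be created more or less like this (the DS
names, type and heartbeat below are only an illustration, not my real layout):

import subprocess

# illustration only: 10 data sources, real names/types/heartbeats differ
ds = ["DS:ds%d:GAUGE:600:U:U" % i for i in range(10)]
rra = [
    "RRA:AVERAGE:0.5:1:600",
    "RRA:AVERAGE:0.5:6:600",
    "RRA:AVERAGE:0.5:24:600",
    "RRA:AVERAGE:0.5:288:800",
]
subprocess.call(["rrdtool", "create", "example.rrd", "--step", "300"] + ds + rra)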

Philippe
