I'm scoping out an application which will likely use RRD as its data storage backend. The structure will be such that one process collects samples and writes them out to the RRD files (perhaps as frequently as one batch of updates every 1-5 seconds). In parallel with this, one or more readers may be extracting data from the same RRD files.
My question is: are updates applied atomically as far as the readers are concerned? I notice that the code in the update method takes a write fcntl() lock on the database and then performs a whole sequence of fwrite() calls. The fetch methods do not take a corresponding read lock with fcntl(). Thus, if a reader were to extract data while the writer is in the middle of its sequence of fwrite() calls, could the reader get corrupt or incomplete data for the most recent sample?

I'm hoping the data file format is structured such that a reader will never see a partially updated sample, but I can't tell for sure by looking at the code. Any confirmation of the behaviour when there are concurrent updates and readers would be very helpful.

Regards,
Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=|
