Matthew Chambers wrote:
> I don't know if I would call it a difficult change, but it's a
> different method of generating the graph. Currently rrdtool writes to
> standard output and that gets sent straight to the client (after the
> HTTP header). The --lazy option obviously only works for writing to a
> file, which means the frontend will need a place to store those
> graphs. That's do-able, but it means additional I/O for the server
> which the current solution avoids - not to mention the additional time
> to generate the graphs due to waiting for I/O. I'm a bit perplexed
> about where to

I'd forgotten that the graphs were generated on the fly. That does change things a bit, and makes it a bit more complicated to change.
But consider that currently each graph is generated on the fly, and no caching is done at all. The webserver not only has to generate the image, including the I/O to read the RRD file, but also serve the bits over the network. As a test, I just requested a specific image from one of my Ganglia installations twice in under a second. According to the Apache logs for this host, it sent the full 10 KB both times, and presumably had to generate the image from scratch each time.
The --lazy option would, I think, stat() the current on-disk graph file, stat() the corresponding RRD file(s), and generate a new graph only if needed. Of course, you have to generate a unique filename for each graph, but I don't see that as too hard.
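That stat()-and-compare logic, plus the unique-filename problem, can be sketched in a few lines of Python. This is just an illustration of the idea, not frontend code; the helper names (`graph_cache_path`, `graph_is_stale`) and the hash-based naming scheme are my own invention:

```python
import hashlib
import os

def graph_cache_path(cache_dir, host, metric, size):
    """Derive a unique, stable filename for a graph from its parameters.

    Hashing the parameter tuple keeps the filename filesystem-safe even
    if hosts or metrics contain odd characters. (Hypothetical scheme.)
    """
    key = "%s|%s|%s" % (host, metric, size)
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return os.path.join(cache_dir, digest + ".png")

def graph_is_stale(graph_path, rrd_paths):
    """Return True if the on-disk graph is missing or older than any RRD.

    This mirrors what --lazy does internally: compare the graph's mtime
    against the mtimes of the RRD files it was built from.
    """
    if not os.path.exists(graph_path):
        return True
    graph_mtime = os.path.getmtime(graph_path)
    return any(os.path.getmtime(rrd) > graph_mtime for rrd in rrd_paths)
```

The same parameters always hash to the same path, so repeated requests for one graph hit one cache file, and the frontend would only shell out to rrdtool when `graph_is_stale()` returns True.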
It looks like the current method (dynamic graph generation) incurs read I/O with every request. If things were changed to use --lazy, I *think* there would be read and write I/O to generate the graph, but subsequent requests would only generate a new graph if there is new data, and would let the webserver make use of various caching mechanisms. Currently, the images are explicitly marked as not cacheable at all, so that would have to change as well.
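Changing that would mostly mean replacing the no-cache headers with a Last-Modified/Cache-Control pair derived from the graph file's mtime. A rough sketch in Python (the `max_age` policy here is an assumption - something on the order of one polling interval - not anything the frontend currently does):

```python
import email.utils
import os

def caching_headers(graph_path, max_age=15):
    """Build HTTP headers that let clients cache a graph briefly.

    Last-Modified comes from the graph file's mtime; max_age (seconds)
    bounds how stale a cached copy may get. (Hypothetical policy.)
    """
    mtime = os.path.getmtime(graph_path)
    return {
        "Last-Modified": email.utils.formatdate(mtime, usegmt=True),
        "Cache-Control": "max-age=%d" % max_age,
    }
```

With headers like these, a browser re-requesting the same graph within `max_age` seconds would not hit the server at all, and a conditional request after that could be answered with 304 Not Modified instead of the full image.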
Firefox says that the small graphs are almost all under 7 KB in size, while the medium graphs are less than 16 KB. I've got about 30 medium-size images per host, plus a few small images. So in my case, it's about half a MB per host. Dunno how applicable that would be to other sites.
I suppose the real question is "what's the bottleneck?" Is it the graph generation? The network? I/O reading files off the disk?
--
Jesse Becker
NHGRI Linux support (Digicon Contractor)
_______________________________________________
Ganglia-developers mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/ganglia-developers
