[Ganglia-developers] ideas for PHP frontend improvements

2007-11-06 Thread alex
I've been working on a list of ways to improve the PHP web frontend.
Just curious what everyone else thinks of these.

- add support for a caching layer for generated graphs
Add code that allows caching of generated graphs, either on the  
filesystem or in a memcache cache.  When generating a graph, serve the  
cached version if the refresh time for that metric has not passed.

When lots of people are viewing the same pages in the web frontend, this
would sometimes eliminate the need to call rrdtool, or even to parse the
XML, to serve graphs.
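As a rough sketch of what this caching layer might look like (all function and key names here are invented for illustration, not actual Ganglia code), the graph PNG could be stored in memcached under a key derived from the request parameters, with a TTL matching the metric's refresh interval:

```php
<?php
// Hypothetical sketch: cache generated graph PNGs in memcached, keyed by
// the graph parameters.  Names are illustrative, not actual Ganglia code.

function graph_cache_key($host, $metric, $range) {
    // Stable key: identical requests map to the same cache entry.
    return 'graph:' . md5("$host|$metric|$range");
}

function fetch_graph($host, $metric, $range, $ttl = 15) {
    $mc = new Memcached();               // requires the memcached extension
    $mc->addServer('127.0.0.1', 11211);

    $key = graph_cache_key($host, $metric, $range);
    $png = $mc->get($key);
    if ($png === false) {
        // Cache miss (or the refresh interval has passed): fall back to
        // the existing rrdtool pipeline, then cache the result.
        $png = shell_exec('rrdtool graph - ...');  // placeholder command
        $mc->set($key, $png, $ttl);
    }
    return $png;
}
```

Expiry is handled entirely by the memcached TTL here, which is one simple way to honor "serve the cached version if the refresh time has not passed."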

- Move the graph.php functionality into an OO class.
Create a base class that contains the basic functionality of drawing a  
graph (like the rrdtool calls).  Also provide an interface which  
custom graphs would implement.  People who are providing custom  
metrics could also provide PHP modules to graph their metrics when  
custom graphs are appropriate.  If all graphs implement a common
interface, it's easier to plug third-party graphs into your existing
web app.

Implementing interfaces would require PHP5, since the PHP4 object  
model doesn't support them.  PHP4 will be end-of-life on 12/31/2007  
(search for 'PHP 4 end of life announcement' on  
http://www.php.net/archive/2007.php). Quite a few large PHP projects  
have committed to being fully PHP5 by February of next year  
(http://www.gophp5.org/projects).  I think moving the ganglia web  
application in this direction is a good idea.
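To make the idea concrete, here is one possible shape for the class hierarchy; all of the names (GangliaGraph, BaseGraph, LoadGraph) are invented for this sketch and assume PHP5:

```php
<?php
// Illustrative PHP5 sketch of the proposed graph interface and base class.

interface GangliaGraph {
    // Return the rrdtool argument string for this graph.
    public function rrdArgs();
}

abstract class BaseGraph implements GangliaGraph {
    protected $title;

    public function __construct($title) {
        $this->title = $title;
    }

    // Shared rendering logic: every graph is drawn the same way,
    // only the rrdtool arguments differ.
    public function render() {
        return shell_exec('rrdtool graph - ' . $this->rrdArgs());
    }
}

// A third-party metric plugs in by supplying its own arguments.
class LoadGraph extends BaseGraph {
    public function rrdArgs() {
        return '--title ' . escapeshellarg($this->title)
             . ' DEF:v=load_one.rrd:sum:AVERAGE LINE1:v#0000FF:load';
    }
}
```

A custom-metric author would ship a single class like LoadGraph, and the frontend could treat it interchangeably with the built-in graphs.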

- Provide graphs as a web service
This would accept requests for metric data (XML, JSON, whichever),  
generate a graph, and respond with a URL to the graph.  This would  
make it possible to embed metrics in other web applications (PHP or  
non-PHP) without much fuss.

I'm imagining this would be built around the graphs-as-a-class idea  
from the previous suggestion.
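A minimal sketch of such an endpoint might look like the following (the request fields, URL scheme, and function name are all assumptions for illustration):

```php
<?php
// Hypothetical graph web service: a JSON request describing the metric
// comes in, and a URL to the rendered graph goes back.

// Deterministic URL so repeated requests map to the same cached image.
function graph_url(array $req) {
    $name = md5($req['host'] . '|' . $req['metric'] . '|' . $req['range']);
    return '/graphs/' . $name . '.png';
}

$body = file_get_contents('php://input');
$req  = json_decode($body, true);
if (is_array($req)) {
    // Actual graph generation (reusing the graph classes) is omitted here.
    header('Content-Type: application/json');
    echo json_encode(array('url' => graph_url($req)));
}
```

Because the URL is a pure function of the request, the same graph can be shared between the web service and the regular frontend pages.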

- Allow clients to reload graphs asynchronously.
I'm picturing opening a monitoring page and having the page make AJAX
requests back to the server for each of the graphs to be displayed.  
JavaScript on the client side would refresh each of the graphs  
according to the refresh schedule of the metric being graphed. The  
page itself would never refresh, but the graphs would periodically be  
replaced with fresh data.  The 'web service for graphs' idea above  
would probably be a requirement to make this one fly.

Providing some additional 'tooltip' data when rolling over the graph  
(like when it was last refreshed, or when the next refresh is  
expected) would probably be nice as well.

- Allow rollover effects on graphs which display discrete data.
Someone sent a link a while ago to a Perl/rrdtool application that had
this feature.  If we can't incorporate that project directly, we could
create something similar.

-
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now  http://get.splunk.com/
___
Ganglia-developers mailing list
Ganglia-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-developers


Re: [Ganglia-developers] ideas for PHP frontend improvements

2007-11-06 Thread Jesse Becker

[EMAIL PROTECTED] wrote:
> I've been working on a list of ways to improve the PHP web frontend.
> Just curious what everyone else thinks of these.
>
>
> - add support for a caching layer for generated graphs
> Add code that allows caching of generated graphs, either on the
> filesystem or in a memcache cache.  When generating a graph, serve the
> cached version if the refresh time for that metric has not passed.


The rrdtool program already has a --lazy option that will (quoting from the
rrdgraph man page) "[o]nly generate the graph if the current graph is out of
date or not existent."


While not a complete in-ganglia solution, it should be an easy change.
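From the frontend's side, using --lazy might look something like this sketch (the paths, datasource name, and helper functions are examples, not Ganglia's actual code):

```php
<?php
// Build an rrdtool command for an on-disk graph file with --lazy, so
// rrdtool itself decides whether regeneration is needed.

function lazy_graph_cmd($rrd, $png) {
    return 'rrdtool graph ' . escapeshellarg($png)
         . ' --lazy'
         . ' DEF:v=' . escapeshellarg($rrd) . ':sum:AVERAGE'
         . ' LINE1:v#0000FF:metric';
}

function serve_graph($rrd, $png) {
    // With --lazy, rrdtool stats $png against the RRD; if the image is
    // current this is close to a no-op, and we just send the file.
    shell_exec(lazy_graph_cmd($rrd, $png));
    header('Content-Type: image/png');
    readfile($png);
}
```

The main change from the current code is writing to a named file instead of standard output, which is what makes --lazy applicable.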

--
Jesse Becker
NHGRI Linux support (Digicon Contractor)




Re: [Ganglia-developers] ideas for PHP frontend improvements

2007-11-06 Thread Jesse Becker

Matthew Chambers wrote:
> I don't know if I would call it a difficult change, but it's a different
> method of generating the graph.  Currently rrdtool writes to standard
> output and that gets sent straight to the client (after the HTTP


I'd forgotten that the graphs were generated on the fly.  That does change
things, and makes the switch a bit more involved.


> header).  The --lazy option obviously only works for writing to a file,
> which means the frontend will need a place to store those graphs.
> That's do-able, but it means additional I/O for the server which the
> current solution avoids - not to mention the additional time to generate
> the graphs due to waiting for I/O.  I'm a bit perplexed about where to


But consider that currently each file is generated on the fly, and there is no 
caching done at all.  The webserver not only has to generate the image, 
including the IO to read the RRD file, but also serve the bits over the 
network.  As a test, I just requested a specific image from one of my Ganglia 
installations twice in under a second.  According to the apache logs for this 
host, it sent the full 10 KB both times, and presumably had to generate the
image from scratch each time.


The --lazy option would, I think, stat() the current on-disk graph file, 
stat() the corresponding RRD file(s), and generate a new graph only if needed. 
  Of course, you have to generate a unique filename for each graph, but I 
don't see that as too hard.
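One easy scheme for that unique filename would be hashing the request parameters (the parameter set below is hypothetical):

```php
<?php
// Derive a stable, unique on-disk filename per graph by hashing the
// request parameters.  The exact parameter list is an assumption.

function graph_filename($host, $metric, $range, $size) {
    return md5("$host/$metric/$range/$size") . '.png';
}
```

Since the name is deterministic, both rrdtool's --lazy check and any HTTP-level caching would keep hitting the same file for the same request.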


It looks like the current method (dynamic graph generation) has read IO with 
every request.  If things were changed to use --lazy, I *think* that there 
would be read and write IO to generate the graph, but subsequent requests 
would only create a new chart if there is new data, and allow the webserver to 
make use of various caching mechanisms.  Currently, the images are explicitly 
not cached at all, so that would have to change as well.
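The header change could be as small as this sketch: advertise a short max-age (matching the metric's refresh interval) and a Last-Modified time taken from the on-disk graph file, instead of forbidding caching outright. The function name is invented:

```php
<?php
// Produce HTTP caching headers for an on-disk graph image.

function graph_cache_headers($mtime, $ttl) {
    return array(
        'Cache-Control: max-age=' . (int) $ttl,
        'Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT',
    );
}

// Usage (assuming $png is the on-disk graph path):
// foreach (graph_cache_headers(filemtime($png), 15) as $h) { header($h); }
```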


Firefox says that the small graphs are almost all under 7 KB in size, while the
medium graphs are less than 16 KB.  I've got about 30 medium size images per
host, plus a few small images.  So in my case, it's about half a MB per host. 
 Dunno how applicable that would be to other sites.


I suppose the real question is what's the bottleneck?  Is it the graph 
generation?  The network?  IO reading files off the disk?


--
Jesse Becker
NHGRI Linux support (Digicon Contractor)

