Hi Mikkel,

I was not referring to RRD overhead. How have you determined that zenperfsnmp is spending its time reading RRD files?

When I have done performance tuning of our larger installations, zenperfsnmp does get I/O bound. Writing small values to lots of files in a round-robin fashion does very bad things to filesystem cache efficiency. We usually solve the problem with more memory or by distributing the collection to more servers.

For every number tracked by zenperfsnmp, the file is locked, the header is read, and another, essentially random, block of the file is written. If the file is in the system cache, this is very fast. Once the file falls out of the system cache, this is slow. Since zenperfsnmp processes the values in roughly the same order on every cycle, and the cache uses a least-recently-used strategy for deciding which files to keep, the system is always chasing a file out of the cache just before it will need it again.
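To make that concrete, here is a small Python sketch (not Zenoss code, purely an illustration) of an LRU cache walked in a fixed cyclic order, the way zenperfsnmp touches its RRD files each poll cycle. The numbers are made up; the point is the cliff: once the working set is even one file larger than the cache, the hit rate drops to zero.

```python
from collections import OrderedDict

def simulate_lru(num_files, cache_size, cycles):
    """Simulate an LRU file cache accessed in the same cyclic order
    every cycle. Returns (hits, misses)."""
    cache = OrderedDict()
    hits = misses = 0
    for _ in range(cycles):
        for f in range(num_files):
            if f in cache:
                hits += 1
                cache.move_to_end(f)          # mark most recently used
            else:
                misses += 1
                if len(cache) >= cache_size:
                    cache.popitem(last=False)  # evict least recently used
                cache[f] = True
    return hits, misses

# Working set fits in the cache: only cold misses on the first cycle.
print(simulate_lru(num_files=100, cache_size=150, cycles=10))  # (900, 100)
# Working set just one file too large: every single access misses,
# because each file is evicted right before it is needed again.
print(simulate_lru(num_files=100, cache_size=99, cycles=10))   # (0, 1000)
```

This is why adding memory (or splitting the files across collectors) helps so dramatically: it moves you from the second case back into the first.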

Zenoss is probably not using rrdtool in the most optimal way: we put every data point into its own file, for example. I do not know whether this slows file access down, but it would mean that RRD could post much better numbers in its own benchmarks than you would see in Zenoss.

How many files are in $ZENHOME/perf?

   $ du -a $ZENHOME/perf | wc -l

-Eric


Mikkel Mondrup Kristensen wrote:
On Wed, 2007-02-07 at 10:41 +0100, Mikkel Mondrup Kristensen wrote:
On Tue, 2007-02-06 at 11:45 -0500, Eric Newton wrote:
Hi Mikkel,

The cyclic delay you see is zenperfsnmp loading its configuration from the Zeo database. With a lot of interfaces, this can take a substantial period of time. You can speed this up by increasing the size of the in-memory and persistent object caches. Once the configuration is loaded, it should sync quickly on successive cycles.

Go to About -> Configuration -> edit configuration

for zenperfsnmp.  Add these three lines:

    cachesize   10000
    pcachename zenperfsnmp
    pcachesize 100

Then restart zenperfsnmp. If this doesn't reduce your delay during the config cycle, please let us know!
It did not seem to help. I am up to 322 switches now, and the server
spends most of its time reading RRD files, with poll times around
400-700 seconds.
The RRD files are on their own partition, so I know that the reads are
on RRD files. It is my understanding that RRD files should not require
all those reads; please correct me if I am wrong :).
I am willing to provide any amount of debug info to solve this.

I tried the perftest script I found on the rrdtool mailing list
(http://www.mail-archive.com/[email protected]/msg11861.html)
and I do not get the read behavior I see when I use Zenoss; I get
writes around 60-100 MB/sec. I do not know if this says anything about
the problem I am having with Zenoss, but it seems my problem is with
Zenoss and not RRD itself.
I pasted the output of perftest below:

Create     10 rrds      1 c/s (0.00092 sdv)   Update     10 rrds   14323 u/s (0.00000 sdv)
Create     10 rrds      1 c/s (0.00099 sdv)   Update     20 rrds   14286 u/s (0.00001 sdv)
Create     20 rrds      1 c/s (0.00195 sdv)   Update     40 rrds   14067 u/s (0.00001 sdv)
Create     40 rrds      1 c/s (0.00401 sdv)   Update     80 rrds   13920 u/s (0.00002 sdv)
Create     80 rrds      1 c/s (0.00782 sdv)   Update    160 rrds   13332 u/s (0.00003 sdv)
Create    160 rrds      1 c/s (0.01563 sdv)   Update    320 rrds   13140 u/s (0.00004 sdv)
Create    320 rrds      1 c/s (0.03135 sdv)   Update    640 rrds   12639 u/s (0.00006 sdv)
Create    640 rrds      6 c/s (0.10247 sdv)   Update   1280 rrds   12773 u/s (0.00009 sdv)
Create   1280 rrds      1 c/s (0.27096 sdv)   Update   2560 rrds   11454 u/s (0.00014 sdv)
Create   2560 rrds      2 c/s (0.22133 sdv)   Update   5120 rrds    7058 u/s (0.00097 sdv)
Create   5120 rrds      1 c/s (0.28336 sdv)   Update  10240 rrds    1970 u/s (0.00046 sdv)
Create   3072 rrds      2 c/s (0.30806 sdv)   Update  13312 rrds    5323 u/s (0.00881 sdv)


_______________________________________________
zenoss-users mailing list
[email protected]
http://lists.zenoss.org/mailman/listinfo/zenoss-users
