coyote wrote:
> I have 6 SATA drives in a RAID 5 configuration, not the best performance but
> it is what the admin wants.
>
> I can back off the monitoring and performance collection if I have to. sda8
> is where Zenoss is installed.
>
> # iostat
> Linux 2.6.18-128.1.16.el5 (zmaster) 07/28/2009
>
> avg-cpu: %user %nice %system %iowait %steal %idle
> 0.75 0.00 0.08 0.01 0.00 99.15
>
> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
> sda 25.28 3.24 632.88 5326012 1040210946
> sda1 0.00 0.02 0.00 25980 602
> sda2 0.00 0.00 0.00 1877 0
> sda3 0.31 0.40 5.03 652578 8261584
> sda4 0.00 0.00 0.00 12 0
> sda5 6.52 0.21 185.22 336968 304432832
> sda6 0.15 1.50 2.63 2465424 4322896
> sda7 1.08 0.00 261.88 6096 430430360
> sda8 17.21 1.11 177.97 1828235 292514760
> sda9 0.00 0.00 0.15 3619 246896
> sda10 0.00 0.00 0.00 4607 1016
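For context, iostat's Blk_* columns are counted in 512-byte sectors on Linux, so you can translate them into real throughput. A quick sketch (assuming the default 512-byte block unit; the helper name is mine, not an iostat API):

```python
# Rough throughput from iostat's Blk_wrtn/s column.
# Assumption: iostat "blocks" are 512-byte sectors (the Linux default).
SECTOR_BYTES = 512

def blocks_per_sec_to_kib(blk_per_sec):
    """Convert iostat blocks/s into KiB/s."""
    return blk_per_sec * SECTOR_BYTES / 1024.0

# sda8 above writes ~177.97 blocks/s:
kib = blocks_per_sec_to_kib(177.97)
print("sda8 write rate: %.1f KiB/s" % kib)  # roughly 89.0 KiB/s
```

So even the busiest partition here is only writing on the order of 90 KiB/s, which supports the "not a disk problem" reading below.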
Run "vmstat 1" and check the "wa" column (it should be the third column in from
the right). That reports your %iowait every second. iostat shows it too, but
only for a single point in time. If your I/O wait stays relatively low (low
single digits), then it's likely not a disk I/O problem on that server. The
0.01% shown above is nothing...but the server could simply have been idle
between polls at that moment.
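If you'd rather not count columns by eye, you can locate "wa" by its header name instead. A minimal sketch against a sample vmstat line (the numbers are made up for illustration, not from this server):

```python
# Toy parser: pull the "wa" (I/O wait) value from vmstat output by looking
# up its header name rather than counting columns from the right.
sample = """procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 123456  78901 234567    0    0     1    63   12   34  1  0 99  0  0"""

lines = sample.splitlines()
headers = lines[1].split()            # column names: r, b, swpd, ..., wa, st
values = lines[2].split()
wa = int(values[headers.index("wa")])
print("current %iowait:", wa)
```

Looking the column up by name also keeps the script working if your vmstat adds or drops a column (e.g. the "st" steal column on newer kernels).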
Before doing too much tweaking of Zope, I'd try cleaning it up and re-indexing.
This is what I do as the zenoss user (note: the indentation is important):
# zendmd
Code:
# Fix deviceSearch
brains = dmd.Devices.deviceSearch()
for d in brains:
    try:
        bah = d.getObject()
    except Exception:
        print "Removing non-existent device from deviceSearch: " + d.getPath()
        dmd.Devices.deviceSearch.uncatalog_object(d.getPath())
commit()

# Fix componentSearch
brains = dmd.Devices.componentSearch()
for d in brains:
    try:
        bah = d.getObject()
    except Exception:
        print "Removing non-existent device from componentSearch: " + d.getPath()
        dmd.Devices.componentSearch.uncatalog_object(d.getPath())
commit()

dmd.Devices.reIndex()
commit()
reindex()
commit()
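The cleanup above is a general "walk the catalog, drop stale entries" pattern. Here it is sketched outside zendmd with a toy in-memory catalog, so you can see the shape of it without a Zenoss install (all names below are hypothetical stand-ins, not Zenoss APIs):

```python
# Generic stale-entry cleanup: resolve each catalog entry, and uncatalog
# the ones whose target object no longer exists.
class Brain:
    """Stand-in for a catalog brain (hypothetical, not the Zenoss class)."""
    def __init__(self, path, obj):
        self._path, self._obj = path, obj
    def getPath(self):
        return self._path
    def getObject(self):
        if self._obj is None:
            raise KeyError(self._path)   # simulates a stale catalog entry
        return self._obj

catalog = {
    "/zport/dmd/Devices/serverA": Brain("/zport/dmd/Devices/serverA", object()),
    "/zport/dmd/Devices/gone":    Brain("/zport/dmd/Devices/gone", None),
}

for path, brain in list(catalog.items()):
    try:
        brain.getObject()
    except Exception:
        print("Removing non-existent entry:", path)
        del catalog[path]                # the "uncatalog_object" step

print(sorted(catalog))                   # only the live entry remains
```

The key detail carried over from the zendmd script: the removal happens inside the except block, so only entries that fail to resolve get dropped.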
Read this topic online here:
http://forums.zenoss.com/viewtopic.php?p=38897#38897
_______________________________________________
zenoss-users mailing list
[email protected]
http://lists.zenoss.org/mailman/listinfo/zenoss-users