kingpin wrote:
> Hi Chet,
> 
> Thanks for the reply.
> 
> I've applied the suggested fix. Let's hope it works


Judging from the way you had to "kill -9" your zenmodeler, I'd say this is 
the same issue I ran into.  A normal "kill [zenmodeler_pid]" wouldn't 
work; I had to do a "kill -9 [zenmodeler_pid]".  I could tell the process was 
completely kaput because an strace on the pid showed it hung waiting in:


> recvmsg(65,
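
For anyone hitting the same thing, this was roughly the sequence I used to 
confirm the hang (the pid here is just an example, substitute your own):

$ ps aux | grep zenmodeler     # find the stuck zenmodeler pid
$ strace -p 12345              # attach; shows it blocked in recvmsg()
$ kill 12345                   # plain SIGTERM -- no effect
$ kill -9 12345                # SIGKILL finally removes it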


Chet's solution worked for me (Zenoss Enterprise 2.3.3 on Centos 5.3 x86_64).  

Also, I wasn't aware of the ability to split zenhub into workers until this 
thread.  It's working great.  I'm running Zenoss for 800+ devices on a blade 
with 8 cores, 16 GB of RAM, and HBA-attached storage, and everything was 
bottlenecking under the single zenhub process.  I added "workers 2" to 
$ZENHOME/etc/zenhub.conf and restarted zenhub (exact commands below).  
Afterwards, ps showed:


> $ ps aux |grep zenhub
> zenoss    4747  7.7  1.0 445336 173788 ?       S    Apr28  48:23 
> /opt/zenoss/bin/python /opt/zenoss/Products/ZenHub/zenhub.py --configfile 
> /opt/zenoss/etc/zenhub.conf --cycle --daemon
> zenoss    4900  8.9  2.4 694176 410780 ?       S    Apr28  56:14 
> /opt/zenoss/bin/python /opt/zenoss/Products/ZenHub/zenhubworker.py 
> --configfile /opt/zenoss/etc/zenhubworker.conf -C /tmp/tmp_-toYz
> zenoss    4901 14.6  2.4 686596 403180 ?       S    Apr28  91:48 
> /opt/zenoss/bin/python /opt/zenoss/Products/ZenHub/zenhubworker.py 
> --configfile /opt/zenoss/etc/zenhubworker.conf -C /tmp/tmpdZbHqU
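
In case it helps anyone replicate this, the whole change was just one config 
line plus a restart, something like the following (run as the zenoss user, 
assuming a default $ZENHOME of /opt/zenoss):

$ echo "workers 2" >> $ZENHOME/etc/zenhub.conf   # add worker count option
$ zenhub restart                                 # restart to spawn workers

The worker count is worth tuning to your own box; two was enough here, but a 
busier hub with more cores might benefit from more.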


Now everything runs much more smoothly.



