Issue #11381 has been updated by Jo Rhett.
Well, normally 0-1 are active, but due to lack of splay we are getting hammered right now:

<pre>
[02:48 root@ats003 ~]$ passenger-status
----------- General information -----------
max      = 20
count    = 20
active   = 20
inactive = 0
Waiting on global queue: 22

----------- Application groups -----------
/nas/puppet/rack:
  App root: /nas/puppet/rack
  * PID: 5254    Sessions: 1    Processed: 1009    Uptime: 1h 8m 59s
  * PID: 4595    Sessions: 1    Processed: 1967    Uptime: 1h 9m 6s
  * PID: 4612    Sessions: 1    Processed: 1307    Uptime: 1h 9m 6s
  * PID: 4604    Sessions: 1    Processed: 1339    Uptime: 1h 9m 6s
  * PID: 4598    Sessions: 1    Processed: 1793    Uptime: 1h 9m 6s
  * PID: 4601    Sessions: 1    Processed: 1727    Uptime: 1h 9m 6s
  * PID: 4690    Sessions: 1    Processed: 1588    Uptime: 1h 9m 4s
  * PID: 4591    Sessions: 1    Processed: 1334    Uptime: 1h 9m 9s
  * PID: 5265    Sessions: 1    Processed: 1703    Uptime: 1h 8m 59s
  * PID: 4608    Sessions: 1    Processed: 1525    Uptime: 1h 9m 6s
  * PID: 12809   Sessions: 1    Processed: 2       Uptime: 4s
  * PID: 5105    Sessions: 1    Processed: 1297    Uptime: 1h 9m 2s
  * PID: 5251    Sessions: 1    Processed: 850     Uptime: 1h 8m 59s
  * PID: 5002    Sessions: 1    Processed: 1429    Uptime: 1h 9m 2s
  * PID: 4900    Sessions: 1    Processed: 1948    Uptime: 1h 9m 3s
  * PID: 4927    Sessions: 1    Processed: 1318    Uptime: 1h 9m 2s
  * PID: 4624    Sessions: 1    Processed: 1850    Uptime: 1h 9m 5s
  * PID: 5269    Sessions: 1    Processed: 1516    Uptime: 1h 8m 59s
  * PID: 5286    Sessions: 1    Processed: 878     Uptime: 1h 8m 58s
  * PID: 12815   Sessions: 1    Processed: 1       Uptime: 4s
</pre>

As it happens, we graph these values in Cacti, and I can show you that this happens every 30 minutes; the other 27 minutes it's bored silly: 0-1 active, 15-19 inactive. I'll send you a graph.

<pre>
$ passenger-memory-stats
--------- Apache processes ----------
PID    PPID   VMSize    Private  Name
-------------------------------------
4380   1      187.1 MB  0.3 MB   /usr/sbin/httpd
8060   4380   195.3 MB  1.4 MB   /usr/sbin/httpd
8241   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8273   4380   195.2 MB  1.4 MB   /usr/sbin/httpd
8481   4380   195.0 MB  1.2 MB   /usr/sbin/httpd
8484   4380   194.9 MB  1.1 MB   /usr/sbin/httpd
8495   4380   195.1 MB  1.3 MB   /usr/sbin/httpd
8516   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8520   4380   195.3 MB  1.4 MB   /usr/sbin/httpd
8536   4380   195.3 MB  1.4 MB   /usr/sbin/httpd
8553   4380   195.1 MB  1.3 MB   /usr/sbin/httpd
8562   4380   195.3 MB  1.4 MB   /usr/sbin/httpd
8568   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8596   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8598   4380   195.1 MB  1.3 MB   /usr/sbin/httpd
8600   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8605   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8629   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8630   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8654   4380   195.0 MB  1.1 MB   /usr/sbin/httpd
8773   4380   195.2 MB  1.3 MB   /usr/sbin/httpd
12713  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12723  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12724  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12732  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12734  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12735  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12742  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12743  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12744  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12745  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12746  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12747  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12748  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12751  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12761  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12763  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12764  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12765  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12766  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12767  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12770  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12771  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12772  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12773  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12775  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12776  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12779  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12780  4380   195.1 MB  1.3 MB   /usr/sbin/httpd
12818  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
12819  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12836  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12837  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12839  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12848  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12851  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12854  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12855  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12856  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12858  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
12878  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
12879  4380   195.3 MB  1.4 MB   /usr/sbin/httpd
12881  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
12882  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12883  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12887  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12888  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12889  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12890  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12895  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12919  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
12920  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12921  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12924  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
12938  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12939  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12940  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12941  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12942  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12943  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12944  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12952  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12954  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12956  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
12957  4380   195.0 MB  1.1 MB   /usr/sbin/httpd
12958  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
12959  4380   194.6 MB  0.7 MB   /usr/sbin/httpd
12960  4380   194.9 MB  1.1 MB   /usr/sbin/httpd
13061  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
13065  4380   194.6 MB  0.7 MB   /usr/sbin/httpd
13066  4380   194.6 MB  0.7 MB   /usr/sbin/httpd
13075  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
13076  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
13077  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
13078  4380   194.6 MB  0.7 MB   /usr/sbin/httpd
13087  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
13161  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
13215  4380   194.6 MB  0.7 MB   /usr/sbin/httpd
13238  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
13252  4380   194.6 MB  0.8 MB   /usr/sbin/httpd
### Processes: 100
### Total private dirty RSS: 104.99 MB

-------- Nginx processes --------
### Processes: 0
### Total private dirty RSS: 0.00 MB

----- Passenger processes ------
PID    VMSize    Private   Name
--------------------------------
4381   86.9 MB   0.2 MB    PassengerWatchdog
4387   232.8 MB  5.5 MB    PassengerHelperAgent
4391   49.9 MB   8.3 MB    Passenger spawn server
4395   57.7 MB   0.7 MB    PassengerLoggingAgent
4591   212.3 MB  98.6 MB   Rack: /nas/puppet/rack
4601   212.3 MB  98.5 MB   Rack: /nas/puppet/rack
4604   207.8 MB  22.0 MB   Rack: /nas/puppet/rack
4608   216.2 MB  100.6 MB  Rack: /nas/puppet/rack
4612   214.7 MB  101.3 MB  Rack: /nas/puppet/rack
4624   219.1 MB  34.6 MB   Rack: /nas/puppet/rack
4690   210.5 MB  104.9 MB  Rack: /nas/puppet/rack
4927   210.4 MB  98.7 MB   Rack: /nas/puppet/rack
5002   205.4 MB  91.0 MB   Rack: /nas/puppet/rack
5105   213.5 MB  47.6 MB   Rack: /nas/puppet/rack
5251   216.4 MB  102.7 MB  Rack: /nas/puppet/rack
5254   210.9 MB  94.2 MB   Rack: /nas/puppet/rack
5265   213.3 MB  101.7 MB  Rack: /nas/puppet/rack
5269   214.2 MB  97.6 MB   Rack: /nas/puppet/rack
5286   207.9 MB  92.7 MB   Rack: /nas/puppet/rack
12809  199.4 MB  85.7 MB   Rack: /nas/puppet/rack
12815  198.0 MB  84.5 MB   Rack: /nas/puppet/rack
13051  187.2 MB  77.4 MB   Rack: /nas/puppet/rack
13083  147.0 MB  24.3 MB   Rack: /nas/puppet/rack
13151  167.2 MB  62.5 MB   Rack: /nas/puppet/rack
13259  205.0 MB  ?         Rack: /nas/puppet/rack
13281  166.2 MB  ?         Rack: /nas/puppet/rack
13282  197.3 MB  ?         Rack: /nas/puppet/rack
13283  209.8 MB  ?         Rack: /nas/puppet/rack
### Processes: 28
### Total private dirty RSS: 1635.79 MB (?)
</pre>
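The every-30-minutes stampede visible in the graphs is exactly what agent-side splay is meant to spread out. A minimal sketch of the relevant settings, assuming the stock puppet.conf layout (values are illustrative, not a tested recommendation; splaylimit defaults to runinterval when unset):

<pre>
# /etc/puppet/puppet.conf on each agent -- illustrative sketch
[agent]
    runinterval = 1800   # the 30-minute cycle seen in the Cacti graphs
    splay       = true   # sleep a random offset before each run
    splaylimit  = 1800   # upper bound on that offset
</pre>

With splay enabled, the 500 agents should arrive spread across the half hour instead of all at once.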
----------------------------------------
Bug #11381: puppetmaster death spiral under passenger -- document the needs!
https://projects.puppetlabs.com/issues/11381

Author: Jo Rhett
Status: Needs More Information
Priority: Normal
Assignee: Jo Rhett
Category: passenger
Target version:
Affected Puppet version: 2.6.12
Keywords:
Branch:

Having run a cfengine master server that handled 25k clients, I guess I should feel spoiled, but the apparent system requirements for puppetmaster are phenomenal. With a mere 500 nodes we have a dedicated machine with 4 cores, 8 GB of memory, and 6 GB of swap, and yet puppetmaster goes into a death spiral daily. There is nothing on this host other than Apache, Passenger, and puppetmaster (plus an NRPE/Nagios check to ensure the puppet client is running).

This is what top looks like when it happens:

<pre>
top - 01:18:06 up 1 day, 1:53, 2 users, load average: 185.70, 148.74, 77.73
Tasks: 379 total, 181 running, 198 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 99.8%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.1%hi, 0.1%si, 0.0%st
Mem:  8174508k total, 8132764k used,   41744k free,     524k buffers
Swap: 6094840k total, 6094840k used,       0k free,   19784k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 7938 puppet    18   0  216m 100m  648 R 43.0  1.3   0:02.65 ruby
31786 puppet    19   0  215m 107m 1724 R 34.1  1.3   2:46.71 ruby
  364 root      15   0     0    0    0 S 13.2  0.0   1:21.89 pdflush
 7868 puppet    19   0  217m 102m  648 R 11.4  1.3   0:05.21 ruby
 8028 root      15   0     0    0    0 S 11.4  0.0   0:21.73 pdflush
 7804 puppet    19   0  212m  96m  648 R 11.1  1.2   0:02.38 ruby
 7802 puppet    18   0  243m 131m  840 R  7.4  1.6   0:06.40 ruby
 7692 puppet    19   0  212m  16m  648 R  7.1  0.2   0:06.10 ruby
 7573 puppet    18   0  210m  12m  648 R  6.1  0.2   0:13.12 ruby
 7900 puppet    18   0  225m 111m  648 R  6.1  1.4   0:05.88 ruby
 7926 puppet    19   0  215m 105m  648 R  6.1  1.3   0:03.42 ruby
 7941 puppet    18   0  181m  79m  648 R  6.1  1.0   0:02.68 ruby
 7561 puppet    18   0  200m  21m  648 R  5.8  0.3   0:13.21 ruby
 7792 puppet    18   0  222m 113m  940 R  4.9  1.4   0:11.08 ruby
 8113 root      19   0  102m  896  608 R  4.9  0.0   0:01.40 crond
 7902 puppet    18   0  209m 100m  852 R  4.3  1.3   0:04.42 ruby
 7429 puppet    18   0  207m  25m  648 R  4.0  0.3   0:10.24 ruby
31816 puppet    19   0  225m 117m 1652 R  4.0  1.5   2:28.63 ruby
 7685 puppet    18   0  210m  19m  648 R  3.7  0.2   0:10.95 ruby
 7918 puppet    18   0  215m 101m  648 R  3.7  1.3   0:03.52 ruby
 8121 root      18   0 60476 1144  800 R  3.4  0.0   0:00.73 sshd
31825 puppet    18   0  220m 110m 1652 R  3.4  1.4   2:54.23 ruby
 7417 puppet    19   0  198m  30m  648 R  3.1  0.4   0:10.72 ruby
 7459 puppet    19   0  206m  17m  648 R  3.1  0.2   0:08.91 ruby
 7479 puppet    19   0  199m  17m  648 R  3.1  0.2   0:09.01 ruby
 7570 puppet    18   0  205m  19m  648 R  3.1  0.2   0:14.22 ruby
 7576 puppet    19   0  212m  12m  648 R  3.1  0.2   0:08.61 ruby
 7585 puppet    19   0  207m  18m  648 R  3.1  0.2   0:07.44 ruby
 7589 puppet    19   0  204m  14m  648 R  3.1  0.2   0:07.00 ruby
 7593 puppet    19   0  181m  81m 1548 R  3.1  1.0   0:37.07 ruby
 7620 puppet    19   0  210m  17m  648 R  3.1  0.2   0:07.81 ruby
 7625 puppet    19   0  209m  21m  648 R  3.1  0.3   0:08.22 ruby
 7652 puppet    18   0  164m  10m  648 R  3.1  0.1   0:03.61 ruby
 7656 puppet    19   0  213m  35m  648 R  3.1  0.5   0:18.16 ruby
 7669 puppet    19   0  204m  23m  648 R  3.1  0.3   0:10.32 ruby
 7672 puppet    19   0  207m  14m  648 R  3.1  0.2   0:06.61 ruby
 7676 puppet    20   0  205m  17m  648 R  3.1  0.2   0:07.71 ruby
 7708 puppet    18   0  208m  16m  648 R  3.1  0.2   0:04.46 ruby
 7739 puppet    19   0  221m  14m  648 R  3.1  0.2   0:04.93 ruby
 7743 puppet    19   0  212m  34m  648 R  3.1  0.4   0:04.51 ruby
 7747 puppet    19   0  207m  25m  648 R  3.1  0.3   0:08.15 ruby
 7794 puppet    19   0  213m  41m  648 R  3.1  0.5   0:07.06 ruby
 7842 puppet    18   0  211m 100m  648 R  3.1  1.3   0:06.48 ruby
 7850 puppet    19   0  212m  96m  852 R  3.1  1.2   0:05.51 ruby
 7852 puppet    19   0  212m  95m  648 R  3.1  1.2   0:01.68 ruby
 7855 puppet    19   0  209m  97m  924 R  3.1  1.2   0:10.06 ruby
 7872 puppet    19   0  214m  97m  852 R  3.1  1.2   0:08.38 ruby
</pre>
1. The Passenger pool is limited to 20 processes. Where did all these other ruby instances come from? (There is no other ruby code on the system.)
2. Why is it willing to spawn until system death? How can I limit this?

CentOS 5.7 with ruby 1.8.5 and all puppet packages from yum.puppetlabs.com. Passenger 3.0.11 at the moment, but we first saw this with Passenger 2.2 and upgraded without any change in behavior.
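On the "how can I limit this" question: Passenger's process limits live in the Apache configuration. A hedged sketch using Passenger 3.x directive names (values are illustrative, not a sizing recommendation for this host):

<pre>
# Apache configuration -- Passenger 3.x directives, illustrative values
PassengerMaxPoolSize 12          # hard cap on application processes
PassengerMaxInstancesPerApp 12   # cap per application group
PassengerPoolIdleTime 300        # shut down workers idle for 5 minutes
PassengerMaxRequests 1000        # recycle a worker after N requests to bound memory growth
</pre>

Note that these directives only bound the processes Passenger itself manages; Apache's own worker count is governed separately by the MPM settings (e.g. MaxClients in prefork), which is likely where the 100 httpd processes above come from.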
