> I use Apache::Resource to set a CPU limit that only a
> runaway process would hit, so random killer processes
> don't accumulate and take down my system.  I have
> MaxRequestsPerChild set to a few hundred and have found
> empirically that the children don't tend to take more
> than 10 seconds of CPU time in normal use, so I give a
> CPU limit of 20-30 seconds for all my httpds.

So you use the formula:

  total_proc_cpu_time_limit =
    MaxRequestsPerChild * single_request_cpu_time_limit

Hmm, you describe a workable solution...  But it can be very problematic
to determine the limit numbers for the above formula if the environment
tends to change, e.g. when you add or remove scripts, add features, and
so on.
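
(For reference, the setup you describe would look something like this in
httpd.conf -- the 30-second limit and the MaxRequestsPerChild value below
are just the numbers from your mail, not recommendations:)

  MaxRequestsPerChild 300
  PerlModule Apache::Resource
  # cap each child at 30 CPU seconds; only a runaway should ever hit this
  PerlSetEnv PERL_RLIMIT_CPU 30
  PerlChildInitHandler Apache::Resource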

$detection_solutions++ :) 
Anyone else?

> I also run a monitor program that watchdogs the
> server every 20-30 seconds and restarts it if 
> response time is ever too slow, just in case other 
> odd things go wrong.  It just does a graceful 
> restart; I haven't needed to fix a problem with a 
> full stop / start yet.

Yup, I do the same. My watchdog also emails me a report when this
happens, so I can monitor the whole thing and spot problems (see the
guide for the watchdog). Unfortunately it cannot spot that only a few
processes are hung. It would only work when hanging_procs = MaxClients,
so that the parent process couldn't spawn any more procs and the watchdog
would detect the stall and restart the server, killing all the hanging
procs...
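
In case it helps anyone following along, here is a rough sketch of such a
watchdog (this is *not* the script from the guide; the probe URL, the
timeout, the apachectl path and the sendmail path are all assumptions you
would adjust for your own setup):

  #!/usr/bin/perl -w
  # crude watchdog sketch: probe the server, gracefully restart it and
  # mail a report if the probe fails or times out
  use strict;
  use LWP::UserAgent;
  use HTTP::Request;

  my $url     = 'http://localhost/';           # page to probe (assumption)
  my $timeout = 30;                            # seconds before we give up
  my $restart = '/usr/local/apache/bin/apachectl graceful';
  my $admin   = 'webmaster@localhost';         # report recipient

  my $ua = LWP::UserAgent->new;
  $ua->timeout($timeout);
  my $res = $ua->request(HTTP::Request->new(GET => $url));

  unless ($res->is_success) {
      system $restart;                         # graceful restart only
      if (open my $mail, '|-', '/usr/sbin/sendmail -t') {
          print $mail "To: $admin\n",
                      "Subject: httpd watchdog restarted the server\n\n",
                      "Probe of $url failed: ", $res->status_line, "\n";
          close $mail;
      }
  }

Run it from cron every minute or two, or wrap it in a sleep loop.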

Thank you, Joshua

_______________________________________________________________________
Stas Bekman  mailto:[EMAIL PROTECTED]    www.singlesheaven.com/stas  
Perl,CGI,Apache,Linux,Web,Java,PC at  www.singlesheaven.com/stas/TULARC
www.apache.org  & www.perl.com  == www.modperl.com  ||  perl.apache.org
single o-> + single o-+ = singlesheaven    http://www.singlesheaven.com
