>
> AFAIK, Paul hit this problem because of idle_server_maintenance and/or
> MaxRequestsPerChild - true?  If that's really the extent of it, I
> believe there's a pretty easy solution.  If idle_server_maintenance
> triggers this, it is thrashing (i.e., first trying to cut back on the
> number of processes, then trying to increase them, then trying to cut
> back again while the first process with the mixed weight workloads is
> still hanging around).  So duh!, let's make it stop thrashing.  All we
> have to do is limit the number of processes that can be terminated by
> idle_server_maintenance to one at a time.  Piece of cake.
>
> If you buy that, then why doesn't that solution work for
> MaxRequestsPerChild as well?  Think about it.  That number is basically
> for memory leak tolerance on a buggy server.  How important is it that
> we always stop a process precisely when we hit that number?  not very,
> IMO.

True (IMHO).

> For the low end folks it's probably the default anyway, which we
> developers just pulled out of the air.  So if we have our mixed workload
> process basket case scenario going on, just hold off on killing any more
> processes until the first one terminates completely.

Show us some code :-)

>
> Beside idle_server_maintenance and MaxRequestsPerChild, are there any
> *realistic* scenarios that trigger this problem?
>

Do large site admins run with non-zero MaxRequestsPerChild?  If so, then I am pretty
sure this would be a problem for sites that serve up content ranging from small files
to huge files (mpegs or avis, for example).  All it would take to hang up a process
from exiting is for someone on a slow link to be downloading a large file when
MaxRequestsPerChild is hit.

I agree that it is probably not that important for the child to exit exactly when
MaxRequestsPerChild is hit, so perhaps it is fine to allow only one proc to exit at a
time.  Haven't given much thought on how to implement it though.  Hmmm... Keep in mind
that it is the child that makes the decision to die based on MaxRequestsPerChild, which
implies to me that there needs to be IPC between the child processes on who will be
allowed to die or not (what if multiple child procs hit MaxRequestsPerChild at the same
time?).  That seems to imply the need for serialization to keep multiple children from
dying.  Or the child requests the parent to kill it off and the parent takes care of
doing the right thing.  Still a fair amount of code change regardless.

My solution has a nice benefit: if you do not enable status, you do not need to
allocate a big chunk of memory.  Today, we allocate all the memory we MIGHT need,
even if status is not enabled.

Bill


