https://bz.apache.org/bugzilla/show_bug.cgi?id=53555
--- Comment #14 from ScottE <[email protected]> ---
In case others find it useful, the approach we used to mitigate this was several things (a sketch of the resulting configuration follows the list):

1. Increased MinSpareThreads and MaxSpareThreads, as well as the range between them. By making Apache less aggressive about scaling the number of servers down, it is less likely to run into this issue. Our new values are:

   MinSpareThreads = MaxRequestWorkers / 4
   MaxSpareThreads = MinSpareThreads * 3

2. Lowered MaxKeepAliveRequests. Looking at a histogram of request counts per connection on an equivalent Apache running the worker MPM (the first value in the Acc column), I found a very long tail of a few connections stretching out to our old value, but a clear cluster at the lower end. Our new MaxKeepAliveRequests is a bit beyond that critical-mass cluster, yet significantly lower than the old value. This lets servers recycle more quickly when they scale down, without any significant impact on client connections, since the relative number of connections we close early is small.

3. Increased AsyncWorkerFactor. When Apache processes are scaling down (in the Gracefully Finishing state), this allows the other processes to pick up the slack by handling a larger total number of client connections (for connections in HTTP keep-alive this does not increase the number of workers), whereas before those processes had reached their connection limit and were rejecting new ones. The event MPM does a reasonably good job of spreading load between processes, and with our larger spare-threads range we now tend to have more live processes as well.

We also considered lowering KeepAliveTimeout, using a histogram similar to the one I used for MaxKeepAliveRequests, again taken from a worker MPM configuration (with the SS column as a reasonable analog). That histogram showed a nice distribution for us, so lowering the timeout would have affected clients without helping this workload.

These are the values that worked for us, with our workload, to mitigate this issue. Your workload and values will of course be different, but this may be a reasonable strategy to try as well.
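To make the shape of the change concrete, here is a minimal httpd configuration sketch for the event MPM. The specific numbers (MaxRequestWorkers = 400, AsyncWorkerFactor = 4, MaxKeepAliveRequests = 40) are illustrative assumptions, not our actual values; only the MinSpareThreads/MaxSpareThreads relationships come from the comment above.

    <IfModule mpm_event_module>
        # Illustrative sizing: 16 processes x 25 threads = 400 workers
        ServerLimit           16
        ThreadsPerChild       25
        MaxRequestWorkers     400

        # MinSpareThreads = MaxRequestWorkers / 4
        # MaxSpareThreads = MinSpareThreads * 3
        MinSpareThreads       100
        MaxSpareThreads       300

        # Raised from the default of 2 so the remaining processes can absorb
        # keep-alive connections while others are Gracefully Finishing
        AsyncWorkerFactor     4
    </IfModule>

    # Lowered from a much larger previous value; pick a point just past the
    # cluster in your per-connection request-count histogram (Acc column)
    MaxKeepAliveRequests 40

    # Left at the default here; our SS-column histogram showed no benefit
    # from lowering it for this workload
    KeepAliveTimeout 5

The relationships that matter are the spare-thread formulas and the raised AsyncWorkerFactor; the absolute numbers should come from your own histograms.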
