On 6/15/2010 4:42 PM, Rainer Jung wrote:
> On 15.06.2010 23:09, William A. Rowe Jr. wrote:
>> As a broad general question - why not equivalent number of MaxClients
>> across all MPMs?
> 
> I was uncertain about that. Users often tend to try to fix performance
> problems by adding concurrency to the web server. If your web server is
> configured very tightly, then this will help. But if you actually have a
> throughput problem (slow backends, network etc.), then allowing even
> more concurrency will make things worse.

We don't disagree.

> So I wanted to steer away from sizing to what's possible and stay closer
> to what's reasonable. Concerning Unix, my gut feeling was that 256
> processes (prefork) and 16x25=400 threads don't put much pressure on
> system resources of modern entry-level servers. I raised MaxClients for
> the threaded MPMs, because I expect them to be a bit cheaper. But no
> strong opinion here about the best numbers. Keeping MaxClients in sync
> would make the MPMs more comparable out of the box, but on the other
> hand precisely this type of comparison might not make much sense.

Here again, keeping it tight is probably good.  Threads do cost (stacks,
fd's, lots of other resources) although they don't generally eat everything
that a process would.

Nonetheless, if the 'out of the box' server should support 250 clients,
that's what we ought to do.  I'd hate for people to jump from prefork to
worker, for example, only to find their resource profile and concurrency
have changed entirely under their noses.

In fact, the 'quick test' servers I hang onto all the time only have 32
workers each, for this very reason.  I don't care to have them gobble
resources, and I rarely need more concurrency myself.
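
Just to make the comparison concrete, here's a rough sketch of what
keeping them in sync could look like (the numbers are illustrative only,
not the proposed defaults from this thread):

  <IfModule mpm_prefork_module>
      # 250 single-threaded processes
      StartServers          5
      MinSpareServers       5
      MaxSpareServers      10
      MaxClients          250
      MaxRequestsPerChild   0
  </IfModule>

  <IfModule mpm_worker_module>
      # 10 processes x 25 threads = 250 clients
      StartServers          2
      MinSpareThreads      25
      MaxSpareThreads      75
      ThreadsPerChild      25
      MaxClients          250
      MaxRequestsPerChild   0
  </IfModule>

Either way the box tops out at 250 concurrent clients; the only thing
that changes is how they're packed into processes and threads.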

> For event it's quite possible, that the idleness of worker threads
> fluctuates. So bigger thread pools (per process) might reduce risk of
> starvation.

But that's true of prefork or worker as well, if they are blocked on
something like CGI.  Here again, a consistent MaxClients makes sense,
with the predictable caveat that users may wish to raise it when they
serve slow/blocking response content.
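
By way of illustration only (the numbers are made up), someone stuck
behind slow backends could simply raise the cap on whichever MPM they
run; for event, something like:

  <IfModule mpm_event_module>
      # 32 processes x 25 threads; raised to ride out slow/blocking backends
      ServerLimit          32
      ThreadsPerChild      25
      MaxClients          800
      MinSpareThreads      25
      MaxSpareThreads     250
  </IfModule>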

> Only the Netware MPM uses a thread stack size different from the OS
> default at the moment (OK, Windows uses hard-coded 65536 for the service
> thread and a stderr_thread but not for the worker threads). So do we
> want to now switch over to an httpd specific value or should we stick
> with system default?

It does?  I guess I never backported this from trunk, but on trunk it
most certainly supports ThreadStackSize, and that number happens to
default to 256k from the build schema (and yes, some trivial required
threads get bumped down to a sensible 64k).
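
For reference, on trunk the directive itself is just this (value in
bytes; 262144 is that 256k default, shown here purely as an
illustration):

  # applies to the threaded MPMs
  ThreadStackSize 262144

Whether the stock conf should carry an httpd-specific value rather than
the OS default is the open question.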

>>> Netware
>>> =======            conf  default   proposed
>>> StartThreads        250       50         50
>>> MinSpareThreads      25       10         25
>>> MaxSpareThreads     250      100        100
>>> MaxThreads         1000     2048       1000
>>> MaxRequestsPerChild   0        0          0
>>> ThreadStackSize   65536    65536      65536
>>> MaxMemFree          100        0          - (remove)
>>
>> ThreadStackSize seems a bit dicey, would rather see 128k default.
> 
> Because it is the only MPM that doesn't use the system default, I
> wondered whether there's a bit of history behind that value.

I'm certain :)  Guenter?  Brad?
