On 15 February 2011 20:41, Wol Degodver <[email protected]> wrote:
> I have Apache (2.2, worker mpm) dedicated to serving only mod_wsgi
> (django) stuff (a nginx frontend does the rest). mod_wsgi is running
> in daemon mode with the default of 1 process and 15 threads
> (susceptible to change if I get more visitors). Since those values are
> set as hard limits, I figure I set Apache to limit the process and
> thread count to the same values like so:
>
> ServerLimit 1
> StartServers 1
> MaxClients 15
> MinSpareThreads 1
> MaxSpareThreads 15
> ThreadsPerChild 15
>
> Is this indeed a good idea? If not, why does my dedicated apache
> perhaps need to get some extra leeway?

Because nginx only uses HTTP/1.0 when it proxies and doesn't support
keep alive connections, technically a 1 to 1 relationship between the
number of threads in the Apache processes and the number of threads
across the daemon mode processes should be fine.
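For what it's worth, the matching daemon side of the configuration you
describe would look something like the following. The process group
name and script path are made up here just for illustration:

WSGIDaemonProcess mysite processes=1 threads=15
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/mysite/django.wsgi

With that, the 15 daemon threads line up one to one with the 15 worker
MPM threads from ThreadsPerChild/MaxClients.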

The only implication of this is that if more concurrent requests
arrive than there are threads, they will queue up on the main listener
socket of the Apache processes themselves. If the Apache processes had
more threads than the daemon processes have, then instead of queueing
up as socket connects on the Apache listener socket, the requests
would be accepted by the Apache processes and would queue up on the
listener socket used internally by the daemon processes.
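If you did run with more Apache threads than daemon threads, the depth
of that internal daemon queue can itself be tuned via the
listen-backlog option of WSGIDaemonProcess. A sketch, where the value
shown is just mod_wsgi's default of 100:

WSGIDaemonProcess mysite processes=1 threads=15 listen-backlog=100

With your 1 to 1 setup, that backlog should never really fill up,
since requests queue on the Apache listener socket first.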

Right now I am not sure there is any significance to the distinction.
It may only be relevant if nginx were load balancing between multiple
Apache/mod_wsgi backends, and then only for requests with more than
about 1MB of request body. This is because if the connection to Apache
fails for a >1MB request before nginx has started streaming the
request content, nginx can still possibly fail over to another
backend. If Apache has instead accepted the connection, nginx would
have already started streaming the data and can't fail over if the
socket connection is lost before a response is received.

Overall, you would need an nginx proxying/load balancing expert to
comment on all that though, as I am not sure exactly how nginx load
balancing and failover work.
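For reference, the sort of nginx setup being talked about would look
something like the following. The backend addresses are made up, and
the proxy_next_upstream line simply makes explicit nginx's behaviour
of retrying the next backend on connection errors and timeouts:

upstream apache_backends {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 80;
    location / {
        proxy_pass http://apache_backends;
        proxy_next_upstream error timeout;
    }
}

How nginx handles a partially streamed request body in that failover
path is the part I'd want an nginx expert to confirm.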

Not sure if that is helpful or just confusing.

Graham

-- 
You received this message because you are subscribed to the Google Groups 
"modwsgi" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/modwsgi?hl=en.
