Of course it is not all perfectly clear to me, but your explanations 
certainly helped a ton, thank you!
I think I've encountered both issues 1 and 2, so I set --processes 2 
--threads 10 --max-clients 25, and I'll try to monitor what happens.

Can I ask for one last piece of advice? Because I have a lot of things in 
cache (from the startup step) that fill the memory, I cannot increase the 
number of processes much, although I have 2 CPUs at my disposal, and usually 
one sets the number of processes to at least twice the number of CPUs, right?

I watched the talk and will give New Relic a try. Our lab is using 
something called Ganglia for now.

On Wednesday, 26 August 2015 02:50:25 UTC+2, Graham Dumpleton wrote:
>
>
> On 25 Aug 2015, at 10:14 pm, Julien Delafontaine <[email protected]> wrote:
>
> It seems to work fine now!
> I still need to "apachectl restart" after "mod_wsgi-express", right?
>
>
> Yes.
>
> Also I get that warning :
> WARNING: MaxClients of 25 exceeds ServerLimit value of 20 servers,
>  lowering MaxClients to 20.  To increase, please see the ServerLimit
>  directive.
>
>
> [Tue Aug 25 13:45:39 2015] [warn] WARNING: Attempt to change ServerLimit 
> ignored during restart
>
>
> This is further evidence of the back logging which will lead to the queue 
> timeouts. In this case the warning is that the Apache child worker 
> processes which proxy requests through to the mod_wsgi daemon processes 
> have reached capacity.
>
> The number of Apache child worker processes created, and the number of 
> threads (if a threaded MPM), is controlled by mod_wsgi-express based on how 
> many processes/threads are used in daemon mode.
>
> The number of child worker threads across all processes defaults to 1.5 
> times the number of threads across all mod_wsgi daemon processes.
>
> Thus if you had processes=2, threads=6, that sets 12 threads across the 
> daemon processes, and there would then be 18 threads in total across the 
> Apache child worker processes.
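As a rough illustration of the arithmetic above (my own sketch, not 
mod_wsgi-express's actual sizing code):

```python
import math

def child_worker_capacity(processes, threads, ratio=1.5):
    # Approximate the default number of Apache child worker threads
    # mod_wsgi-express would size for a given daemon configuration:
    # the 1.5 ratio applied to the total daemon thread count.
    daemon_threads = processes * threads
    return math.ceil(ratio * daemon_threads)

# processes=2, threads=6 -> 12 daemon threads -> 18 child worker threads
print(child_worker_capacity(2, 6))
```

Overriding with --max-clients=24 amounts to raising the effective ratio to 
24 / 12 = 2.0.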
>
> So the three scenarios are:
>
> 1. The mod_wsgi daemon processes were restarted and 18 requests had queued 
> up in child worker processes waiting for daemon processes to be ready.
>
> 2. The mod_wsgi daemon processes had reached capacity at 12 concurrent 
> requests in total, but there were 6 additional requests queued up in child 
> worker processes still waiting for active requests in WSGI application to 
> complete.
>
> 3. You were also handling static files and no matter how many requests got 
> through to the WSGI application, there were sufficient requests for static 
> files that capacity in Apache child worker processes had been reached.
>
> If the issue is (2), then you need to increase processes/threads for the 
> daemon processes using the command line options.
>
> If the issue is (1), then ideally look at why the WSGI application takes so 
> long to load and whether you could load stuff on demand instead.
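For what it's worth, one way of loading stuff on demand instead of at 
startup might look like this (the names here are hypothetical, not from the 
actual application):

```python
import functools

@functools.lru_cache(maxsize=None)
def get_reference_data():
    # Hypothetical stand-in for the expensive cache fill that would
    # otherwise run at WSGI import time. Deferred to first use and
    # memoised, so only the first request pays the cost.
    return {"loaded": True}

def application(environ, start_response):
    data = get_reference_data()  # cheap on every call after the first
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']
```

The trade-off is that the first request after a restart is slow, but the 
process becomes available to accept requests much sooner.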
>
> For (1) and (3), if you need to increase the capacity of the Apache child 
> worker processes to allow more requests to be queued up to the daemon 
> processes, or to handle static file requests, then you can use the 
> --max-clients option to override the 1.5 ratio and set the number of 
> workers across the Apache child processes explicitly. 
>
> Thus you could say:
>
>     --processes=2 --threads=6 --max-clients=24
>
> This equates to a ratio of 2.0 instead of 1.5.
>
> You have to be a little bit careful with making the ratio too high, as 
> that technically increases the scope for backlogging within the Apache 
> child worker processes. Although, it isn’t quite that simple.
>
> This is because queuing can also occur in the accept socket listener queue, 
> which for Apache defaults to a stupidly high value of 500. You don't have 
> much visibility of requests queued there, and you can't tell how long they 
> have been waiting.
>
> So there is actually some benefit to increasing the maximum clients, so 
> that Apache will be able to accept requests and the time they have been 
> queued up can be tracked.
>
> This is where the queue timeout can kick in if the request ends up at the 
> WSGI application. 
>
> That is, if the request took too long to be accepted by the daemon process 
> for handling, longer than the default 45 seconds, then it would hit the 
> queue timeout and cause a gateway timeout error response.
>
> So the queue timeout is a fail-safe for throwing out requests when 
> backlogging has occurred.
>
> Hope that helps and doesn’t just cause more confusion.
>
> FWIW, I talk about the backlogging problem in one of:
>
> http://lanyrd.com/2012/pycon/spcdg/
> http://lanyrd.com/2013/pycon/scdyzk/
>
> I think it may be the first talk.
>
> Graham
>
> On Tuesday, 25 August 2015 09:58:31 UTC+2, Graham Dumpleton wrote:
>
> I mean you shouldn’t be modifying any of the generated configuration files 
> in the directory /home/***/***/mod_wsgi-server. So you should not manually 
> edit the ‘httpd.conf’ as your prior email suggests you did.
>
> The whole point is that you always go back and run ‘mod_wsgi-express 
> setup-server’ to regenerate the generated configuration files based on the 
> command line arguments passed to ‘mod_wsgi-express setup-server’.
>
> On the point of the number of processes you can run with, that depends on 
> whether your WSGI application is highly CPU bound or not.
>
> Have a watch of:
>
>     https://www.youtube.com/watch?v=SGleKfigMsk
>
> That explains why CPU bound activity is an issue and why one needs to go 
> to more processes. If your application is mostly I/O bound, then you can 
> afford to up the number of threads to get concurrency.
>
> Graham
>
> On 25 Aug 2015, at 5:23 pm, Julien Delafontaine <[email protected]> wrote:
>
> Ok, I will do that, thanks.
> Indeed the app can be slow to start because it is caching a lot of stuff 
> on first load; that is probably the reason. Now I do some "wget" with a 
> large timeout just after deployment to make sure that everything is loaded.
> I am not sure what you mean by configuration. It is a CentOS 6.5 virtual 
> machine with 2 CPUs and 2 GB memory. Everything concerning Apache, mod_wsgi 
> or Django I tried to leave at the default settings. I'd definitely like to 
> have better logging than a "top" in the shell.
>
> On Tuesday, 25 August 2015 08:55:31 UTC+2, Graham Dumpleton wrote:
>
> You should never edit the Apache configuration by hand when using 
> mod_wsgi-express.
>
> Just re run the ‘setup-server’ command with any changed options. Thus:
>
> mod_wsgi-express setup-server app/wsgi.py --port=8887 \
>                   --setenv DJANGO_SETTINGS_MODULE app.settings.prod \
>                   --user *** --processes 2 --threads 6 \
>                   --server-root=/home/***/***/mod_wsgi-server
>
> If you cannot remember what command you originally used to generate it, 
> look in the generated apachectl file and it should show it there.
>
> On 25 Aug 2015, at 4:46 pm, Julien Delafontaine <[email protected]> wrote:
>
> To be honest I am extremely confused myself. I have instructions that say:
>  
>
> """
> An Apache server instance was created with
>
>  mod_wsgi-express setup-server app/wsgi.py --port=8887 \
>                   --setenv DJANGO_SETTINGS_MODULE app.settings.prod \
>                   --user *** \
>                   --server-root=/home/***/***/mod_wsgi-server
>
> Then to restart it, 
>
>  /home/***/***/mod_wsgi-server/apachectl restart
> """
>
>
> That is why I edited the httpd.conf directly, and I remember setting an 
> environment variable WSGI_MULTIPROCESS=1 for it to use the processes and 
> threads numbers present in the httpd.conf.
>
> Now maybe I had the timeouts on my dev machine where I run only
> "python3 manage.py runmodwsgi --reload-on-changes --log-to-terminal", 
> since I am switching between the two frequently and may not have noticed.
>
>
> This will be a single process with five threads. You can run it as:
>
> python3 manage.py runmodwsgi --reload-on-changes --log-to-terminal 
> --processes 2 --threads 6
>
> To change the number of processes and threads.
>
> Anyway, it is best to confirm under what configuration you were seeing it. 
> When you have a better idea, there is some logging you could add to output 
> how much capacity is being used.
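As a rough sketch of that kind of logging (not necessarily what Graham has 
in mind, just a hypothetical WSGI middleware counting in-flight requests):

```python
import threading

class CapacityLoggingMiddleware:
    # Hypothetical middleware: counts requests currently in flight as a
    # crude gauge of how much of the configured thread capacity is in use.

    def __init__(self, application, limit):
        self.application = application
        self.limit = limit          # e.g. processes * threads
        self.active = 0
        self.lock = threading.Lock()

    def __call__(self, environ, start_response):
        with self.lock:
            self.active += 1
            print('capacity: %d/%d threads in use' % (self.active, self.limit))
        try:
            # Simplification: this decrements when the WSGI callable
            # returns, not when the response iterable is fully consumed.
            return self.application(environ, start_response)
        finally:
            with self.lock:
                self.active -= 1
```

You would wrap the existing WSGI application object with it, e.g. 
`application = CapacityLoggingMiddleware(application, limit=12)`.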
>
> Another cause may be that the startup time for the web application is quite 
> large. When processes restart due to --reload-on-changes, requests will 
> have to wait until a process is available once again.
>
> Do you do anything on web application start, such as preload data, which 
> might take a long time?
>
> Graham
>
>
> On Tuesday, 25 August 2015 03:06:50 UTC+2, Graham Dumpleton wrote:
>
> I am a bit confused at this point. If you are manually configuring Apache 
> for mod_wsgi, then you would not be using mod_wsgi-express. They are 
> distinct ways of doing things.
>
> The defaults for doing it manually with Apache don’t give the queue timeout 
> a default value, and so you shouldn’t see the timeout.
>
> Can you describe better the architecture of your system and more about the 
> whole configuration?
>
> Graham
>
> On 25 Aug 2015, at 1:36 am, Julien Delafontaine <[email protected]> wrote:
>
> Thanks for the very clear explanation !
>
> I have 2 CPUs at my disposal and I use an Apache config where I have 
> WSGIDaemonProcess ... processes=2 threads=6
> although my wsgi.py is untouched, and the `mod_wsgi-express` command to 
> launch it does not have parameters.
> I believe that I am seeing 2 processors used at the same time with this 
> config.
>
> I think I cannot set more processes than I have CPUs, am I right? Which 
> means the only ways to solve my problem are to speed up the computations, 
> or buy more CPUs?
>
> On Monday, 24 August 2015 06:17:25 UTC+2, Graham Dumpleton wrote:
>
>
> > On 23 Aug 2015, at 5:46 am, Julien Delafontaine <[email protected]> 
> wrote: 
> > 
> > Hello, 
> > 
> > I am really having a hard time finding out what happens here: 
> > I send requests to my python server that take maximum 1-3 secs each to 
> respond (so way below the usual 60 sec timeout), but sometimes, I randomly 
> get this response instead : 
> > 
> >     mod_wsgi (pid=8447): Queue timeout expired for WSGI daemon process 
> 'localhost:8000'. 
> > 
> > I can't reproduce it at will by a sequence of actions. It seems that once 
> the server sends back an actual answer, it does not happen anymore. 
> > What sort of parameter do I have to change for that to never happen? 
>
>
> What you are encountering is the backlog protection which for 
> mod_wsgi-express is enabled by default. 
>
> What happens is that if all the available threads handling requests in the 
> WSGI application processes are busy, which would probably be easy if your 
> requests run 1-3 seconds with the default of only 5 threads, then requests 
> will start to queue up, causing a backlog. If the WSGI application process 
> is so backlogged that requests get queued up and not handled within 45 
> seconds, then they will hit the queue timeout and, rather than be allowed 
> to continue on to be handled by the WSGI application, will see a gateway 
> timeout HTTP error response sent back to the client instead. 
>
> The point of this mechanism is that when the WSGI application becomes 
> overloaded and requests backlog, the backlogged requests will be failed at 
> some point rather than left in the queue. This has the effect of throwing 
> out requests where the client had already been waiting a long time and had 
> likely given up. For real user requests, this avoids handling a request 
> where, if you did still handle it, the response would fail anyway as the 
> client connection had long gone. 
>
> This queue timeout is 45 seconds though, so things would have to be quite 
> overloaded or requests stuck in the WSGI application for a long time. 
>
> Now if you are running with very long requests which are principally I/O 
> bound, what you should definitely be doing is increasing the default of 5 
> threads per process, which, since there is only 1 process by default, means 
> 5 threads in total to handle concurrent requests. 
>
> So have a look at doing something like using: 
>
>     --processes=3 --threads=10 
>
> which would be a total of 30 threads for handling concurrent requests, 
> spread across 3 processes. 
>
> Exactly what you should use really depends on the overall profile of your 
> application as to throughput and response times. But in short, you probably 
> just need to increase the capacity. 
>
> The question is though, are you using the defaults, or are you already 
> overriding the processes and threads options? 
>
> Graham
>
>
>
>
>
>
>
>
>
> ...

-- 
You received this message because you are subscribed to the Google Groups 
"modwsgi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/modwsgi.
For more options, visit https://groups.google.com/d/optout.
