Thanks for the very clear explanation!

I have 2 CPUs at my disposal and I use an Apache config where I have 
WSGIDaemonProcess ... processes=2 threads=6
while my wsgi.py is untouched and the `mod_wsgi-express` command used to 
launch it takes no parameters.
I believe that I am seeing both processors used at the same time with this 
config.
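
For context, a minimal sketch of what I understand the relevant directives to 
be (the daemon process group name here is just a placeholder, not my actual 
config):

    WSGIDaemonProcess myapp processes=2 threads=6
    WSGIProcessGroup myapp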

I think I cannot set more processes than I have CPUs, am I right? Which 
would mean the only ways to solve my problem are to speed up the 
computations, or buy more CPUs?

On Monday, 24 August 2015 06:17:25 UTC+2, Graham Dumpleton wrote:
>
>
> > On 23 Aug 2015, at 5:46 am, Julien Delafontaine <[email protected]> wrote: 
> > 
> > Hello, 
> > 
> > I am really having a hard time finding out what happens here: 
> > I send requests to my Python server that take at most 1-3 secs each to 
> respond (so way below the usual 60 sec timeout), but sometimes I randomly 
> get this response instead: 
> > 
> >     mod_wsgi (pid=8447): Queue timeout expired for WSGI daemon process 
> 'localhost:8000'. 
> > 
> > I can't reproduce it at will by a sequence of actions. It seems that 
> once the server sends back an actual answer, it does not happen anymore. 
> > What sort of parameter do I have to change so that this never happens? 
>
>
> What you are encountering is the backlog protection, which is enabled by 
> default for mod_wsgi-express. 
>
> What happens is that if all the available threads handling requests in the 
> WSGI application processes are busy, which is easy to hit if your requests 
> run 1-3 seconds against a default of only 5 threads, then requests will 
> start to queue up, causing a backlog. If the WSGI application process is so 
> backlogged that a queued request is not handled within 45 seconds, it hits 
> the queue timeout: rather than being allowed to continue on to the WSGI 
> application, it results in a gateway timeout HTTP error response being sent 
> back to the client instead. 
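>
> (As an aside, a sketch under the assumption that you want a longer grace 
> period before that kicks in: mod_wsgi-express accepts a --queue-timeout 
> option in seconds, so the 45 second default could be raised when starting 
> the server; verify the option against the mod_wsgi documentation for your 
> version.) 
>
>     mod_wsgi-express start-server wsgi.py --queue-timeout 90 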
>
> The point of this mechanism is that when the WSGI application becomes 
> overloaded and requests backlog, the backlogged requests are failed at some 
> point rather than left in the queue. This has the effect of throwing out 
> requests where the client had already been waiting a long time and had 
> likely given up. For real user requests, this avoids doing the work of 
> handling a request whose response would fail anyway because the client 
> connection had long since gone. 
>
> This queue timeout is 45 seconds though, so for it to trigger things would 
> have to be quite overloaded, or requests stuck in the WSGI application for 
> a long time. 
>
> Now if you are running with very long requests which are principally I/O 
> bound, what you should definitely be doing is increasing the default of 5 
> threads per process, which, since there is only 1 process by default, means 
> only 5 threads in total to handle concurrent requests. 
>
> So have a look at doing something like using: 
>
>     --processes=3 --threads=10 
>
> which would be a total of 30 threads for handling concurrent requests, 
> spread across 3 processes. 
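>
> For illustration, assuming mod_wsgi-express is launched directly against 
> your wsgi.py (adjust the script name to your setup), the full invocation 
> would look something like: 
>
>     mod_wsgi-express start-server wsgi.py --processes=3 --threads=10 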
>
> Exactly what you should use really depends on the overall profile of your 
> application as to throughput and response times. But in short, you probably 
> just need to increase the capacity. 
>
> The question is though, are you using the defaults, or are you already 
> overriding the processes and threads options? 
>
> Graham
