I mean you shouldn’t be modifying any of the generated configuration files in 
the directory /home/***/***/mod_wsgi-server. So you shouldn’t have manually 
edited the ‘httpd.conf’ as your prior email suggests you did.

The whole point is that you always go back and run ‘mod_wsgi-express 
setup-server’ to regenerate the configuration files from the command line 
arguments you pass to it.

As to the number of processes you can run with, that depends on whether your 
WSGI application is highly CPU-bound or not.

Have a watch of:

    https://www.youtube.com/watch?v=SGleKfigMsk

That explains why CPU-bound activity is an issue and why one needs to go to 
more processes. If your application is mostly I/O-bound, then you can afford 
to increase the number of threads to get concurrency.
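To make that concrete, here is a small standalone Python sketch (nothing 
mod_wsgi-specific; the 0.2 second sleep stands in for a database or network 
wait) showing why threads give concurrency for I/O-bound work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_request():
    # Stand-in for a request that mostly waits on a database or the network.
    time.sleep(0.2)
    return "done"

start = time.monotonic()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(lambda _: fake_io_request(), range(5)))
elapsed = time.monotonic() - start

# The five 0.2s waits overlap across the 5 threads, so total wall time is
# roughly 0.2s instead of the 1.0s that serial handling would take.
print(f"handled {len(results)} requests in {elapsed:.2f}s")
```

For CPU-bound work the GIL stops Python threads overlapping in this way, 
which is exactly why the video recommends moving to more processes instead.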

Graham

> On 25 Aug 2015, at 5:23 pm, Julien Delafontaine <[email protected]> wrote:
> 
> Ok, I will do that, thanks.
> Indeed the app can be slow to start because it is caching a lot of stuff on 
> first load; that is probably the reason. Now I do some "wget" with a large 
> timeout just after deployment to be more sure that everything is loaded.
> I am not sure what you mean by configuration. It is a CentOS 6.5 virtual 
> machine with 2 CPUs and 2 GB of memory. Everything concerning Apache, 
> mod_wsgi or Django I tried to leave at default settings. I'd definitely like 
> to have better logging than a "top" in the shell.
> 
> On Tuesday, 25 August 2015 08:55:31 UTC+2, Graham Dumpleton wrote:
> You should never edit the Apache configuration by hand when using 
> mod_wsgi-express.
> 
> Just re-run the ‘setup-server’ command with any changed options. Thus:
> 
> mod_wsgi-express setup-server app/wsgi.py --port=8887 \
>                   --setenv DJANGO_SETTINGS_MODULE app.settings.prod \
>                   --user *** --processes 2 --threads 6 \
>                   --server-root=/home/***/***/mod_wsgi-server
> 
> If you cannot remember what command you originally used to generate it, look 
> in the generated apachectl file; it should show the command there.
> 
>> On 25 Aug 2015, at 4:46 pm, Julien Delafontaine <[email protected]> wrote:
>> 
>> To be honest I am extremely confused myself. I have instructions that say: 
>> 
>> """
>> An Apache server instance was created with
>>  mod_wsgi-express setup-server app/wsgi.py --port=8887 \
>>                   --setenv DJANGO_SETTINGS_MODULE app.settings.prod \
>>                   --user *** \
>>                   --server-root=/home/***/***/mod_wsgi-server
>> Then to restart it, 
>>  /home/***/***/mod_wsgi-server/apachectl restart
>> """
>> 
>> That is why I edited the httpd.conf directly, and I remember setting an 
>> environment variable WSGI_MULTIPROCESS=1 for it to use the processes and 
>> threads numbers present in the httpd.conf.
>> 
>> Now maybe I had the timeouts on my dev machine where I run only
>> "python3 manage.py runmodwsgi --reload-on-changes --log-to-terminal", 
>> since I am switching between the two frequently and may not have noticed.
> 
> This will run as a single process with five threads. To change the number 
> of processes and threads, you can run it as:
> 
> python3 manage.py runmodwsgi --reload-on-changes --log-to-terminal \
>     --processes 2 --threads 6
> 
> Anyway, it is best to confirm under what configuration you were seeing it. 
> Once we have a better idea, there is some logging you could add to output 
> how much capacity is being used.
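As a rough illustration of that kind of logging, here is a sketch of WSGI 
middleware that counts in-flight requests. This is illustrative only, not a 
built-in mod_wsgi feature; the class name, threshold, and warning format are 
made up:

```python
import threading

class CapacityLogger:
    """Wraps a WSGI application and warns when handler capacity is used up."""

    def __init__(self, application, capacity):
        self.application = application
        self.capacity = capacity   # threads per process (counter is per process)
        self.active = 0
        self.lock = threading.Lock()

    def __call__(self, environ, start_response):
        with self.lock:
            self.active += 1
            if self.active >= self.capacity:
                print("WARNING: all %d handler threads busy" % self.capacity)
        try:
            return self.application(environ, start_response)
        finally:
            with self.lock:
                self.active -= 1
```

You would wrap the application object in your wsgi.py, for example 
`application = CapacityLogger(application, capacity=6)` if running with 
--threads 6.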
> 
> Another cause may be that the startup time for the web application is quite 
> large. When processes restart due to --reload-on-changes, requests will have 
> to wait until the process is available once again.
> 
> Do you do anything on web application start, such as preload data, which 
> might take a long time?
> 
> Graham
> 
>> 
>> On Tuesday, 25 August 2015 03:06:50 UTC+2, Graham Dumpleton wrote:
>> I am a bit confused at this point. If you are manually configuring Apache 
>> for mod_wsgi, then you would not be using mod_wsgi-express. They are 
>> distinct ways of doing things.
>> 
>> The defaults for doing it manually with Apache don’t have the queue timeout 
>> having a default and so you shouldn’t see the timeout.
>> 
>> Can you describe better the architecture of your system and more about the 
>> whole configuration?
>> 
>> Graham
>> 
>>> On 25 Aug 2015, at 1:36 am, Julien Delafontaine <mura...@gmail.com> wrote:
>>> 
>>> Thanks for the very clear explanation!
>>> 
>>> I have 2 CPUs at my disposal and I use an Apache config where I have 
>>> WSGIDaemonProcess ... processes=2 threads=6
>>> although my wsgi.py is untouched, and the `mod_wsgi-express` command to 
>>> launch it does not have parameters.
>>> I believe that I am seeing 2 processors used at the same time with this 
>>> config.
>>> 
>>> I think I cannot set more processes than I have CPUs, am I right? Which 
>>> means the only ways to solve my problem are to speed up the computations, 
>>> or buy more CPUs?
>>> 
>>> On Monday, 24 August 2015 06:17:25 UTC+2, Graham Dumpleton wrote:
>>> 
>>> > On 23 Aug 2015, at 5:46 am, Julien Delafontaine <mura...@gmail.com> 
>>> > wrote: 
>>> > 
>>> > Hello, 
>>> > 
>>> > I am really having a hard time finding out what happens here: 
>>> > I send requests to my python server that take maximum 1-3 secs each to 
>>> > respond (so way below the usual 60 sec timeout), but sometimes, I 
>>> > randomly get this response instead : 
>>> > 
>>> >     mod_wsgi (pid=8447): Queue timeout expired for WSGI daemon process 
>>> > 'localhost:8000'. 
>>> > 
>>> > I can't reproduce it at will by a sequence of actions. It seems that 
>>> > once the server sends back an actual answer, it does not happen anymore. 
>>> > What sort of parameter do I have to change so that this never happens? 
>>> 
>>> 
>>> What you are encountering is the backlog protection which for 
>>> mod_wsgi-express is enabled by default. 
>>> 
>>> What happens is that if all the threads available for handling requests in 
>>> the WSGI application processes are busy, which is quite likely if your 
>>> requests take 1-3 seconds and you have the default of only 5 threads, then 
>>> requests will start to queue up, causing a backlog. If the WSGI application 
>>> process is so backlogged that queued requests are not handled within 45 
>>> seconds, then they hit the queue timeout and, rather than being allowed to 
>>> continue on to the WSGI application, a gateway timeout HTTP error response 
>>> is sent back to the client instead. 
>>> 
>>> The point of this mechanism is that when the WSGI application becomes 
>>> overloaded and requests backlog, the backlogged requests are failed at 
>>> some point rather than left in the queue. This has the effect of throwing 
>>> out requests where the client had already been waiting a long time and had 
>>> likely given up. For real user requests, this avoids handling a request 
>>> whose response would fail anyway because the client connection is long 
>>> gone. 
>>> 
>>> This queue timeout is 45 seconds though, so things would have to be quite 
>>> overloaded or requests stuck in the WSGI application for a long time. 
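A toy model of that queue-timeout behaviour (purely illustrative, not how 
mod_wsgi is implemented; the timeout is shortened from 45s to 0.5s and there 
is a single worker, so the later requests overrun the queue timeout):

```python
import time
from queue import Queue

QUEUE_TIMEOUT = 0.5    # mod_wsgi-express defaults to 45 seconds
HANDLE_TIME = 0.3      # each request takes 0.3s to handle

def drain(requests):
    handled, rejected = [], []
    while not requests.empty():
        enqueued_at, name = requests.get()
        if time.monotonic() - enqueued_at > QUEUE_TIMEOUT:
            rejected.append(name)    # client would get a gateway timeout
        else:
            time.sleep(HANDLE_TIME)  # simulate handling the request
            handled.append(name)
    return handled, rejected

# Four requests arrive at once, but there is only one worker, so a backlog
# builds and the later requests exceed the queue timeout while waiting.
requests = Queue()
for i in range(4):
    requests.put((time.monotonic(), f"req-{i}"))

handled, rejected = drain(requests)
print("handled:", handled, "rejected:", rejected)
```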
>>> 
>>> Now if you are running with very long requests which are principally I/O 
>>> bound, what you should definitely be doing is increasing the default of 5 
>>> threads per process, which, since there is only 1 process by default, 
>>> means 5 threads in total to handle concurrent requests. 
>>> 
>>> So have a look at doing something like using: 
>>> 
>>>     --processes=3 --threads=10 
>>> 
>>> which would be a total of 30 threads for handling concurrent requests, 
>>> spread across 3 processes. 
>>> 
>>> Exactly what you should use really depends on the overall profile of your 
>>> application as to throughput and response times. But in short, you probably 
>>> just need to increase the capacity. 
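One rough way to reason about that profile is Little's law: the number of 
concurrently busy handlers is about request throughput multiplied by average 
response time. The numbers here are invented for illustration, not taken 
from your setup:

```python
# Little's law: busy handlers ~= arrival rate * average time in service.
requests_per_second = 8.0        # hypothetical sustained throughput
avg_response_seconds = 2.0       # e.g. requests taking 1-3 seconds

needed = requests_per_second * avg_response_seconds
configured = 3 * 10              # capacity with --processes=3 --threads=10

print(f"need roughly {needed:.0f} busy handlers; configured for {configured}")
```

If the needed figure approaches the configured capacity, requests start to 
queue and the backlog protection described above kicks in.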
>>> 
>>> The question is though, are you using the defaults, or are you already 
>>> overriding the processes and threads options? 
>>> 
>>> Graham
>>> 
>>> -- 
>>> You received this message because you are subscribed to the Google Groups 
>>> "modwsgi" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an 
>>> email to modwsgi+u...@googlegroups.com.
>>> To post to this group, send email to mod...@googlegroups.com.
>>> Visit this group at http://groups.google.com/group/modwsgi.
>>> For more options, visit https://groups.google.com/d/optout.
>> 
>> 
> 
> 

