On 24 September 2012 21:12, Roshan Mathews <[email protected]> wrote:
> [2] https://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
>
> There have been mentions in this group and other places that we can
> use WSGIApplicationGroup to map different virtual hosts to the same
> sub-interpreter if they are from the same django app, but not if they
> are different.  I don't understand that part too well, and given that
> there were some issues with site contents getting mixed up, etc. I
> would like to know what the issues are with running lots of django
> wsgi applications on one server, and why WSGIApplicationGroup can't be
> used to overcome those problems.
>
> My ideal setup would have interpreters spinning up inside threads, as
> required, and not consuming resources when they are not being used.

You are hitting a combination of two problems.

The first is the restrictions imposed by Django's use of global
configuration and caching mechanisms, which cannot be dynamically
adjusted between requests, at least not through any officially
supported mechanism.
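
For context, the restriction stems from the way a stock Django WSGI
script fixes the settings module once for the whole process (or
sub-interpreter) via an environment variable; the project name below
is just a placeholder.

    import os
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')

    from django.core.wsgi import get_wsgi_application
    application = get_wsgi_application()

Once that module has been imported and the settings cached, there is
no supported way of swapping in a different settings module for the
next request handled by the same interpreter.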

The second is that mod_wsgi requires static configuration of daemon
processes and does not support transient processes.
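
To make that concrete, the usual static setup looks something like
the following, with one WSGIDaemonProcess declared up front for every
site; the host names and paths are placeholders. Two virtual hosts
will only share a sub-interpreter if they are delegated to the same
daemon process group and application group.

    <VirtualHost *:80>
    ServerName site1.example.com
    WSGIDaemonProcess site1 processes=2 threads=15
    WSGIProcessGroup site1
    # %{GLOBAL} forces use of the main interpreter of that daemon process.
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /srv/www/site1/site1/wsgi.py
    </VirtualHost>

    <VirtualHost *:80>
    ServerName site2.example.com
    WSGIDaemonProcess site2 processes=2 threads=15
    WSGIProcessGroup site2
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /srv/www/site2/site2/wsgi.py
    </VirtualHost>

None of this can be created or torn down at runtime; adding or
removing a site means editing the configuration and restarting or
reloading Apache.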

I have seen people get around the first by using a single threaded
mod_wsgi configuration and forcibly going in and changing all the
Django settings between requests, including forcing any necessary
reconfiguration as a result of that, but it is a big hack and I
wouldn't recommend it.

For the latter, one could in simple cases use a preconfigured static
pool of daemon processes and some special mod_wsgi or mod_rewrite
magic to dynamically map a site to a spare daemon process group, but
at the scale you are describing I am not sure I want to suggest that
either, as it isn't a seamless solution.
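
Purely as a sketch of what I mean, with made up pool names, map file
and paths, it might look something like this, relying on mod_rewrite
exporting a per request environment variable and the %{ENV:variable}
form of the mod_wsgi process/application group directives.

    # Preconfigured static pool of daemon process groups.
    WSGIDaemonProcess pool-1 processes=1 threads=5
    WSGIDaemonProcess pool-2 processes=1 threads=5
    WSGIDaemonProcess pool-3 processes=1 threads=5

    # Look up which pool slot a host has been assigned to from a map
    # file maintained by some external mechanism, and export it as a
    # per request environment variable.
    RewriteEngine On
    RewriteMap sitepool txt:/etc/apache2/site-pools.txt
    RewriteRule .* - [E=SITE_POOL:${sitepool:%{HTTP_HOST}|pool-1}]

    # Select the daemon process group and application group from it.
    WSGIScriptAlias / /srv/www/sites/wsgi.py
    WSGIProcessGroup %{ENV:SITE_POOL}
    WSGIApplicationGroup %{ENV:SITE_POOL}

The pool itself is still static though, so you gain flexibility in
the mapping but not true on demand creation of processes.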

Other options you could look at to solve the latter are a traditional
FASTCGI/flup based WSGI solution, or uWSGI and its emperor mode. Both
of these are better at dynamically creating transient processes and
killing them off when required.
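
With the emperor, for example, you point it at a directory of per
site configs and it will start, reload and destroy instances as the
config files appear, change or are removed. A minimal, hypothetical
vassal config might look like the following; the paths, names and
exact option choices are just illustrative.

    ; started with: uwsgi --emperor /etc/uwsgi/vassals
    ; /etc/uwsgi/vassals/site1.ini
    [uwsgi]
    chdir = /srv/www/site1
    module = site1.wsgi:application
    socket = /var/run/uwsgi/site1.sock
    master = true
    processes = 1
    ; put the instance into cheap mode after 5 minutes without requests
    idle = 300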

FWIW, one of the reasons I have held back from implementing transient
processes is that the experience can suck for the user. Every time
the application for a site has to be reloaded, the first requests to
it will be quite slow, as fat Python web applications tend not to
start up very fast. If a site is only going to be around for a few
requests and then get shut down because it is idle, the same thing
happens for the next lot of requests. All in all, the performance can
be as bad as CGI for infrequently used sites.

Overall I would therefore encourage you to find a better way of doing
what you need to do, potentially not using Django and instead using a
more lightweight framework which supports multi-tenancy directly, or
implementing it as an application level mechanism. That said, if a
primary concern is process separation for each site, then no matter
what framework you use, you are going to have each site run in a
separate process and so will have the same issue of process
management.
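
If you do go the application level route, the crudest form is just
dispatching on the Host header inside a single WSGI application. A
bare bones sketch is below; the host names and the make_site_app()
factory are made up, and a real version would build a properly
configured per tenant application rather than a trivial one.

    def make_site_app(name):
        # Stand-in for whatever builds a fully configured per-tenant
        # application; here it just returns a trivial WSGI callable.
        message = ('Hello from %s\n' % name).encode('utf-8')
        def app(environ, start_response):
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [message]
        return app

    # Per-tenant applications built up front, keyed by host name.
    SITES = {
        'site1.example.com': make_site_app('site1'),
        'site2.example.com': make_site_app('site2'),
    }

    def application(environ, start_response):
        # Route each request to the tenant matching the Host header.
        host = environ.get('HTTP_HOST', '').split(':')[0].lower()
        site = SITES.get(host)
        if site is None:
            start_response('404 Not Found', [('Content-Type', 'text/plain')])
            return [b'Unknown site\n']
        return site(environ, start_response)

You still only get one process for everyone with this though, which
is why it doesn't help if per site process separation is the main
concern.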

Graham
