I'm using mod_wsgi with toppcloud
(http://bitbucket.org/ianb/toppcloud)... it's a kind of server
configuration and a tool to manage sites on that server.  The primary
goal is easy deployment and management of Python applications, with
minimal fuss.  It uses mod_wsgi.

So... right now I've configured it to have 5 processes and no threads
for each application, with no process sharing between applications.
Right now each application has to live on its own domain, but that's
just temporary -- in the future a site should be possibly formed out
of several applications (/ -a CMS, /blog -a blog, /gallery -some
gallery app, etc).  But that means 5 processes for each of these
areas, and the memory use gets a bit high: memory-per-app(~20Mb) *
number-of-apps * 5
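To make that arithmetic concrete, here's a quick sketch; the 20 MB
per process is the rough figure above, and the app count is just an
illustrative assumption:

```python
# Rough memory estimate for the process-per-app model described above.
# 20 MB per process is the approximate figure from the post; the
# number of apps passed in is hypothetical.
MB_PER_PROCESS = 20
PROCESSES_PER_APP = 5

def total_memory_mb(num_apps, mb_per_process=MB_PER_PROCESS,
                    processes_per_app=PROCESSES_PER_APP):
    """Total resident memory across all app daemon processes, in MB."""
    return num_apps * processes_per_app * mb_per_process

# e.g. 10 apps: 10 * 5 * 20 = 1000 MB
print(total_memory_mb(10))
```

So even a modest number of apps adds up to about a gigabyte, which is
what prompts the question below.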

So I'm hoping for advice, or maybe this will turn into a feature
request.  The current configuration:

WSGIDaemonProcess general user=www-data processes=5 threads=1 \
    maximum-requests=200 inactivity-timeout=3600 display-name=wsgi \
    home=/var/www

One possibility of course is to use processes=1 and threads=5, or
something like that (the 15 thread default seems really high).  But
I'd like to avoid threading; not all frameworks work well with it, and
I like the isolation and simplicity of a single process per request.
Ideally I'd like maybe 1 process and 1 thread per app, but for new
processes to be created as needed.  It seems like inactivity-timeout
could accomplish this, but I'm unclear on its purpose or mechanism.
Right now 5 processes are started right away.  Will there be fewer than
5 processes after an hour of inactivity?  It doesn't seem like it...
does it just kill and respawn processes after one hour?  If so I'm not
sure of the point.  So, if inactivity-timeout worked like my intuition
would imply it should work (kill idle processes, restart on demand)
that'd be great.
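For comparison, the threaded variant I mentioned above would
presumably look something like this -- untested, just the same
directive with the process and thread counts swapped:

    WSGIDaemonProcess general user=www-data processes=1 threads=5 \
        maximum-requests=200 inactivity-timeout=3600 display-name=wsgi \
        home=/var/www

That cuts the per-app memory to roughly a fifth, at the cost of the
threading concerns I mentioned.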

What would *really* be ideal is if there were, say, 10 processes
total, and those 10 processes were allocated among all possible
consumers according to load.  Depending on the size of the server,
there's usually a top limit above which you get declining performance;
so even in high-load situations it'd be better to have 10 quick
processes handling requests than 30 slow processes.

If there are other suggestions about how to manage memory, I'd love to
hear them... I just don't want to trade reliability for resources.

--
Ian Bicking  |  http://blog.ianbicking.org  |  http://topplabs.org/civichacker