On Tue, Dec 13, 2011 at 03:40:28PM +1100, Graham Dumpleton wrote:
> On 13 December 2011 15:27, Rodrigo Campos <rodr...@sdfg.com.ar> wrote:
> > On Tue, Dec 13, 2011 at 01:15:00PM +1100, Graham Dumpleton wrote:
> >> On 13 December 2011 08:09, Rodrigo Campos <rodr...@sdfg.com.ar> wrote:
> >> > Thanks a lot for your quick answer =)
> >> >
> >> > No, static files should be served by nginx. We are using it as a
> >> > proxy. Apache should serve only the Python App. Why ? :)
> >>
> >> If there are no static files being served and only dynamic Python
> >> stuff, you can continue to use embedded/prefork. You would though just
> >> want to set up the Apache MPM settings to better control the number of
> >> child worker processes, pre-start them, and leave them running rather
> >> than allowing Apache to dynamically adjust the number of processes.
> >
> > I'm kind of doing this right now. StartServers and MinSpareServers are
> > the same (50), and MaxSpareServers, MaxClients and ServerLimit are set
> > to 60 and MaxRequestsPerChild to 4000. Is this what you mean ?
> >
> > I was considering lowering it even more, since the default nginx proxy
> > timeout is 50/60 sec (I checked it, I just don't remember right now), to
> > have more of a one-to-one ratio of Apache processes to WSGI processes.
> > Or perhaps 2 Apache processes per WSGI process; I was going to try that.
> > Any suggestions here ? :)
> >
> > And increasing the processes parameter in the WSGIDaemonProcess
> > directive made a big difference (request queueing in New Relic grows a
> > lot when there are many requests at the same time, and it seems it went
> > down even with many simultaneous requests once the processes parameter
> > was higher), so I was planning to increase it to 16 (on a quad-core)
> > instead of the current value of 4. I wasn't sure about 16 either,
> > perhaps it is too much, but I was going to try that first (suggestions
> > are very welcome, of course :)). But if I increase it right now it seems
> > that poorly written queries get stuck on the DB server, so other people
> > are tackling that first :)
> >
> > A value of 16 WSGI processes with 32 Apache processes, in "theory" (my
> > calculator), seems to leave enough RAM free, and the perf tests I did
> > (quick ones with "ab", nothing really serious) were ok. Is there
> > anything else I should take into account ?
> >
> >> Do this and you can use fewer processes than with worker MPM and daemon mode.
> >
> > Really *fewer* ? Or comparable ?
> >
> > Let's say I have N WSGI processes and 2*N Apache processes. If I use
> > worker I will keep the N WSGI processes and use fewer Apache processes,
> > but with threads. Even if I use N processes with prefork, the "worker"
> > equivalent should be fewer processes. Perhaps it is not substantial,
> > since WSGI processes use a lot more memory than Apache prefork
> > processes. What am I missing ? Or are you saying it's not a substantial
> > difference ?
> 
> You said originally you were using 'mod_wsgi embedded mode' and I was
> basing what I said on that.
> 
> If using embedded mode you would have no separate WSGI (daemon mode) processes.
> 
> I am confused. Did you originally mean to say daemon mode?

Yes, but I was confused, sorry. We are actually using daemon mode with prefork :S
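
Roughly, the relevant part of our current setup looks like the sketch below
(the MPM numbers are the ones I mentioned earlier in the thread; the daemon
group name, the .wsgi path and threads=15, which is just the mod_wsgi
default, are only placeholders):

    # prefork MPM (Apache 2.2 syntax): a mostly fixed pool of children,
    # pre-started and kept around rather than resized dynamically
    <IfModule mpm_prefork_module>
        StartServers          50
        MinSpareServers       50
        MaxSpareServers       60
        ServerLimit           60
        MaxClients            60
        MaxRequestsPerChild 4000
    </IfModule>

    # mod_wsgi daemon mode: the application runs in its own pool of
    # processes, and the Apache children above only proxy requests to it
    WSGIDaemonProcess myapp processes=4 threads=15
    WSGIProcessGroup myapp
    WSGIScriptAlias / /srv/myapp/app.wsgi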

But you are right, sorry: prefork + embedded mode will use fewer processes.
Should I expect memory consumption to be lower too ? (I'll do test runs later)
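
For comparison, my understanding of the embedded mode setup you are
suggesting is roughly the sketch below: no WSGIDaemonProcess at all, so the
MPM settings directly control how many copies of the application run (I
sized it at 16 children here only to mirror the daemon process count
discussed above; again just an illustration, not our actual config):

    # prefork MPM sized for the application itself: in embedded mode each
    # Apache child loads and runs the WSGI application
    <IfModule mpm_prefork_module>
        StartServers          16
        MinSpareServers       16
        MaxSpareServers       16
        ServerLimit           16
        MaxClients            16
        MaxRequestsPerChild 4000
    </IfModule>

    # no WSGIDaemonProcess / WSGIProcessGroup: requests are handled
    # directly in the Apache children (embedded mode)
    WSGIScriptAlias / /srv/myapp/app.wsgi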

Thanks a lot and sorry for the confusion,
Rodrigo
