On Aug 11, 12:25 am, Dj Gilcrease <digitalx...@gmail.com> wrote:
> On Mon, Aug 10, 2009 at 6:43 AM, Graham
> Dumpleton <graham.dumple...@gmail.com> wrote:
> > These values for min and max threads are a bit dubious because you
> > have a single process and that will have a fixed 25 threads. Usually
> > these are defined in multiples of ThreadsPerChild and not less like
> > this. Possibly it just means the values are ignored.
>
> MaxClients (which according to the Apache docs is capped at
> ServerLimit * ThreadsPerChild) I just set explicitly so I remember
> what it is, though it is actually ignored.

It is not ignored. What you are describing is the fallback default
behaviour if the directives are not defined. If the directives are
defined they will take precedence. I would always add them all
explicitly.
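
For example, an explicit worker MPM configuration for a single process
with 25 threads might look like this (the numbers are illustrative
only, not a recommendation):

  <IfModule mpm_worker_module>
  StartServers          1
  ServerLimit           1
  ThreadLimit           25
  ThreadsPerChild       25
  MaxClients            25
  MaxRequestsPerChild   5000
  </IfModule>

With ServerLimit at 1, MaxClients cannot exceed ThreadsPerChild
(1 x 25 = 25), so there is never more than one child process.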

> >> ThreadsPerChild 25
> >> MaxRequestsPerChild 5000
>
> > Don't see much point for MaxRequestsPerChild.
>
> This just restarts the child after 5k requests, which helps clear out
> some memory on occasion (I am still trying to get rrd or something
> running on WebFaction so I can actually map memory usage over time,
> and start tuning Apache to my actual usage needs better)

But if using daemon mode, MaxRequestsPerChild only applies to the main
Apache server child processes. Those are only serving up static files,
if you aren't getting the WebFaction nginx front end to handle them,
plus handling the task of proxying requests through to the mod_wsgi
daemon processes. The processes therefore shouldn't grow in memory.
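
If it is the mod_wsgi daemon processes you want recycled, the
equivalent knob is the maximum-requests option to WSGIDaemonProcess.
A minimal sketch, with the daemon process group name made up for
illustration:

  WSGIDaemonProcess mysite processes=1 threads=5 maximum-requests=5000
  WSGIProcessGroup mysite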

FWIW, if you are only running 5 threads in the mod_wsgi daemon process
and static files are handled by the nginx front end anyway, you could
drop ThreadsPerChild to a lower value and save more memory in the main
Apache server child processes, as the stack memory for the extra
threads you eliminate would no longer be allocated. Just remember to
also drop MaxClients to match if you want to keep it down to a single
process maximum; if you don't, I think Apache will complain that the
calculated maximum number of servers is greater than ServerLimit.
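
As a rough sketch of that trimmed down configuration (the numbers are
illustrative only):

  ServerLimit           1
  ThreadsPerChild       5
  MaxClients            5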

> > If your application doesn't leak Python objects due to an inability
> > to break reference count cycles, there shouldn't really be a need
> > for maximum requests.
>
> Though my app does not leak, that I have noticed anyway, I figure
> maximum-requests would be good just in case I suddenly get a flood of
> traffic; it will restart the daemon even if the hour of idle time has
> not passed.

The other one I forgot to mention is where specific requests have a
large transient memory usage. This will bump up the process's memory
usage and, once the footprint has been pushed out, it doesn't come
back.

The big danger with using multithreading is where a URL with a large
transient memory requirement gets hit by multiple requests at the same
time.

I have been thinking for a while about a feature for mod_wsgi whereby
you could, using a directive, specify a limit on how many concurrent
requests you want to allow into a specific URL or some subset of URLs.
This would help in limiting such memory usage explosions. The only
reason I haven't added the feature is that it could just as easily be
done in WSGI middleware; in that case, however, people have to
implement it themselves, and so more than likely they will not bother.
A rough sketch of what such a middleware could look like is below.
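
To be clear, no such directive exists in mod_wsgi today. This is a
minimal sketch of the middleware approach in plain WSGI terms, where
the ConcurrencyLimiter name and the URL prefix convention are made up
for illustration:

  import threading

  class _ReleaseOnClose(object):
      """Wrap a WSGI response iterable so the semaphore slot is
      released once the response has been consumed and closed."""

      def __init__(self, result, release):
          self._result = result
          self._release = release
          self._released = False

      def __iter__(self):
          return iter(self._result)

      def close(self):
          try:
              if hasattr(self._result, 'close'):
                  self._result.close()
          finally:
              if not self._released:
                  self._released = True
                  self._release()

  class ConcurrencyLimiter(object):
      """Allow at most 'limit' concurrent requests into URLs under
      'prefix'; all other URLs pass straight through."""

      def __init__(self, application, prefix, limit):
          self.application = application
          self.prefix = prefix
          self.semaphore = threading.BoundedSemaphore(limit)

      def __call__(self, environ, start_response):
          if not environ.get('PATH_INFO', '').startswith(self.prefix):
              return self.application(environ, start_response)
          # Block here until one of the 'limit' slots frees up.
          self.semaphore.acquire()
          try:
              result = self.application(environ, start_response)
          except Exception:
              self.semaphore.release()
              raise
          # Hold the slot until the response is done; a compliant
          # WSGI server always calls close() on the returned iterable.
          return _ReleaseOnClose(result, self.semaphore.release)

  # Usage in a WSGI script file, limiting /reports to 2 at a time:
  # application = ConcurrencyLimiter(application, '/reports', 2)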

Other than such an ability, the only workaround at the moment is to
use maximum requests to recycle the process often. In the worst case
you may want to split the application across multiple daemon process
groups, delegate the memory hungry URLs to a daemon process group of
their own, and recycle that group much more often than the bulk of the
application.
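
Something along those lines, where the group names, the script path
and the request counts are all invented for illustration:

  WSGIDaemonProcess main processes=1 threads=5 maximum-requests=5000
  WSGIDaemonProcess hungry processes=1 threads=1 maximum-requests=100

  WSGIScriptAlias / /usr/local/wsgi/scripts/site.wsgi
  WSGIProcessGroup main

  <Location /reports>
  WSGIProcessGroup hungry
  </Location>

The /reports URLs then run in their own daemon process group and get
recycled after far fewer requests than the rest of the application.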

Graham