2009/7/28 mixedpuppy <mixedpu...@gmail.com>:
>
> I'm using mod_wsgi for an application that takes several seconds to
> load when a new daemon process is started.    The problem is, when the
> processes restart (via maxrequest setting), even with the use of
> WSGIImportScript there is a potential lag in availability.
> WSGIImportScript solves a part of the problem I would like to address,
> but not everything.
>
> The thought occurred to me, why not start a new daemon process at
> (maxrequests - X) requests to give it time to get started and preload
> the application.  Better yet, at maxrequests start a new daemon but
> keep handling requests until receiving a signal (or a configured
> timeout) from the new process that it is ready to handle requests.
>
> Curious if this sounds reasonable.

There are various issues with doing this.

The first is that the daemon process itself makes the decision to
shut down after the set number of requests. The parent process which
spawns new processes only knows that a new process needs to be
started once the old one has completely exited.

The second is that the current scheme gives certainty about how much
memory will be used by all the application processes. Under your
scheme that certainty is removed and it becomes much easier to blow
out memory usage, which on memory-constrained VPS systems could cause
the whole system to grind to a halt.

This sort of graceful restart, if not done exactly right, can result
in processes hanging around. This is a problem that occurs with some
FASTCGI implementations. Although mod_wsgi has much better control
over a daemon process than FASTCGI, there is still the risk that an
application may do something that prevents a process from shutting
down properly. As it is, mod_wsgi daemon processes have a background
thread at the C code level which will forcibly kill off a process if
it doesn't shut down within a certain amount of time.
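If memory serves, that forced-kill window is tunable via the
shutdown-timeout option of WSGIDaemonProcess. A minimal sketch, with
an illustrative group name:

```apache
# Give the daemon process 5 seconds to exit cleanly once shutdown is
# triggered; after that the background thread kills it off forcibly.
WSGIDaemonProcess example threads=15 maximum-requests=1000 shutdown-timeout=5
```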

FWIW, Apache doesn't offer what you want either when using
MaxRequestsPerChild. It does offer a graceful restart option, but
only when restarting the whole server, and if worker processes don't
exit properly it has no way of cleaning them up.

That all said, what I would be asking is why you need maximum
requests at all. If you have problems with memory creep in the
application, you should identify what the cause is and eliminate it.

If specific URLs create large memory usage but the main part of the
application doesn't, then set up multiple daemon process groups,
delegate the URLs with large transient memory requirements to a
daemon process group of their own, and set maximum requests only on
that group, thereby leaving most of the application to run
persistently without restarts.
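As a sketch (group names, paths and option values here are
illustrative, not a recommendation for your application):

```apache
# Main group runs persistently; the group handling memory-hungry URLs
# is recycled via maximum-requests.
WSGIDaemonProcess main processes=2 threads=15
WSGIDaemonProcess reports processes=1 threads=5 maximum-requests=500

WSGIScriptAlias / /srv/app/app.wsgi
WSGIProcessGroup main

<Location /reports>
WSGIProcessGroup reports
</Location>
```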

So, rather than seeking a solution, what is the problem that requires
the use of maximum-requests in the first place? Also, how many daemon
processes are you running in the daemon process group? If using
maximum-requests it is advisable to run more than one. That way, when
one process is restarting, it is likely that the other is still
accepting requests and users wouldn't see any delay.
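For example (process group name and paths hypothetical), running
several processes in the group and preloading the application with
WSGIImportScript might look like:

```apache
# Three processes: while one is restarting after hitting
# maximum-requests, the others keep accepting requests.
WSGIDaemonProcess example processes=3 threads=15 maximum-requests=1000
WSGIProcessGroup example
WSGIScriptAlias / /srv/app/app.wsgi

# Preload the application whenever a new daemon process starts, so the
# first request after a restart doesn't pay the load cost.
WSGIImportScript /srv/app/app.wsgi process-group=example application-group=%{GLOBAL}
```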

Graham

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"modwsgi" group.
To post to this group, send email to modwsgi@googlegroups.com
To unsubscribe from this group, send email to 
modwsgi+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/modwsgi?hl=en
-~----------~----~----~----~------~----~------~--~---