On Mon, Oct 11, 2004 at 10:09:30PM -0400, Stas Bekman wrote:
Philippe M. Chiasson wrote:
Sorry, but you didn't understand what I was trying to say. Let's say you have only one process with many threads. If that process grows too big (since you can't measure the size of an individual thread), it'll be sent the TERM signal and will then block until all threads are done. Meanwhile no new requests will be served. Or will the parent process spawn a new process as soon as its child receives SIGTERM? I can't see how the parent could know about it before the child terminates.
Yes, the situation you just described is correct. In the case of one child and multiple threads, sending a SIGTERM is not a very good idea.
(untested) What if the child forks? The child (parent of grandchild) could then _exit, and the main httpd would detect that it needs to spawn a new child.
Glenn, have you by any chance seen this thread on the httpd-dev list? http://marc.theaimsgroup.com/?t=109716966900003&r=1&w=2
The grandchild would continue serving the request, after setting:

    $r->connection->keepalive = AP_CONN_CLOSE;  # ?? available in mod_perl ??
    $r->child_terminate();
It is available: http://perl.apache.org/docs/2.0/api/Apache/Const.html#C_Apache__CONN_CLOSE_ You can even see it in action: modperl-2.0/t/response/TestAPI/conn_rec.pm
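Putting the pieces together, a minimal (untested) sketch of the fork-and-_exit idea, using the method-call form of the keepalive accessor; too_big() is a hypothetical size check, and whether $r->child_terminate() exists in mod_perl 2 is exactly what this thread is about:

    use POSIX ();
    use Apache::Const -compile => qw(OK CONN_CLOSE);

    sub maybe_respawn {
        my $r = shift;
        return Apache::OK unless too_big();    # hypothetical size check

        my $pid = fork;
        die "fork failed: $!" unless defined $pid;

        if ($pid) {
            # the original child: _exit at once, so the main httpd
            # notices it is gone and spawns a fresh child
            POSIX::_exit(0);
        }

        # the grandchild: finish the request in progress, but close
        # the connection when done and accept no further requests
        $r->connection->keepalive(Apache::CONN_CLOSE);
        $r->child_terminate();

        return Apache::OK;
    }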
There is a potential for memory exhaustion if the grandchild can be tied up for a very long time, and the new child serving requests can itself be caused to exceed its memory limit and to fork and do the same, and so on, until there are many, many grandchildren all tying up lots of memory.
So in the case of a single child we have to balance two types of DoS:

- stop serving requests until we can cleanly exit and start fresh (a la graceful restart)
- risk memory exhaustion DoS from forking and finishing up in the background
Therefore, sending SIGTERM might be the way to go, but to mitigate the DoS potential of not serving requests for too long, an alarm should be set after which the child will abort requests in progress.
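For illustration only, the alarm idea might look something like this (a sketch; $grace is an assumed tunable, not an existing knob):

    # give in-flight requests $grace seconds to finish, then abort
    local $SIG{ALRM} = sub { CORE::exit(1) };   # hard abort of the child
    alarm($grace);
    # ... wait for the worker threads to drain ...
    alarm(0);    # everything finished in time, cancel the alarm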
That's out of the question. We need this method to be able to bracket memory usage, but it needs to be done gracefully. Aborting requests in progress is certainly not graceful.
I think that functionality should be designed in such a way that there is no DoS potential, so there will be no need for a destructive protection mechanism.
To make that solution even better, it would be nice if MPMs supported some sort of command to disable keepalives on all current connections, so that each thread finishes up as soon as it is done serving its request in progress.
It is still quite possible to get a long-running request over a non-keepalive connection. But of course having keepalive aggravates the case.
As an alternative to setting an alarm to make sure that new requests are not starved for too long, one could risk the memory exhaustion DoS instead, by forking and setting the alarm in the grandchild. In most cases this would mean that new requests continue to be served without delay while the grandchild finishes up old requests. However, the worst-case scenario of a grandchild taking a long time to exit, and many grandchildren building up and causing a memory DoS, is probably worse than temporarily delaying serving new connections. Take your pick.
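Combining the two sketches above (again untested, and $grace remains an assumed tunable), the grandchild branch would gain a bound on how long old requests may linger:

    # in the grandchild, after the fork:
    local $SIG{ALRM} = sub { CORE::exit(0) };   # cap grandchild lifetime
    alarm($grace);
    $r->connection->keepalive(Apache::CONN_CLOSE);
    $r->child_terminate();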
What about thread cancellation? Would it make for a better solution?
In fact, the reason we need this API is to be able to kill perl interpreters that have become too big, as more and more of their data becomes unshared over time. In the prefork model interpreters were tied to processes, so by killing the process we were killing the interpreter. Since perl interpreters have nothing to do with threads, in the case of a threaded MPM we shouldn't even try to kill the process; quite the opposite, we should try to figure out how to measure a perl interpreter's size and then kill it when it's unused. Also don't forget that in mod_perl 2 it's possible to have more than one interpreter inside the same process...
But this is exactly the same situation with N child processes. It's quite possible that all N processes will get SIGTERM at about the same time.
That sounds like proper behavior, though. Stop serving requests until excessive resource usage is released, right?
I don't think this is exactly right. In mod_perl 1, Apache::SizeLimit (and its clones) checks the unshared or the total size of each process and kills the big ones once they have finished serving their request. Usually it's used to prevent swapping. But I don't think a service should refuse clients when it hits the condition we are talking about. Think of it as a soft limit, not a hard limit: when it's crossed, some action should be triggered to recover, but that doesn't mean the service should refuse requests.
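For reference, the usual mod_perl 1 setup looks roughly like this (from memory; the exact variable names are best checked against the Apache::SizeLimit docs, and the limits are in KB):

    # startup.pl
    use Apache::SizeLimit;
    $Apache::SizeLimit::MAX_PROCESS_SIZE  = 120_000;  # total size cap
    $Apache::SizeLimit::MAX_UNSHARED_SIZE = 60_000;   # unshared size cap
    $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;   # amortize the check

    # httpd.conf -- the handler registers a post-request size check
    PerlFixupHandler Apache::SizeLimit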
--
__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com