On 04.01.2009 00:36, Paul Querna wrote:
Rainer Jung wrote:
During testing 2.3.1 I noticed a lot of errors of type EMFILE: "Too
many open files". I used strace and the problem looks like this:

- The test case uses ab with HTTP keep-alive, concurrency 20 and a
small file, so it does about 2000 requests per second.
MaxKeepAliveRequests=100 (the default)

- The file triggering EMFILE is the static content file, which can be
observed to be open more than 1000 times in parallel even though ab's
concurrency is only 20

- From looking at the code, the file seems to be closed by a
cleanup function registered on the request pool, which is triggered by
an EOR bucket
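
The setup described above can be reproduced with an invocation along these lines (hostname and path are placeholders, not from the original report):

```shell
# -k: use HTTP keep-alive, -c: 20 concurrent clients, -n: total requests.
# With MaxKeepAliveRequests=100 (the default), each kept-alive connection
# serves up to 100 requests before httpd closes it.
ab -k -c 20 -n 100000 http://localhost/small.html
```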

What happens under keep-alive is that the content files are kept
open longer than the handling of the request, more precisely until the
connection closes. So when MaxKeepAliveRequests * Concurrency >
MaxNumberOfFDs, we run out of file descriptors.
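
A toy model of the lifetime mismatch (illustrative names only, not httpd code): each request "opens" a handle, but the cleanup that releases it is deferred to connection teardown, so descriptors accumulate up to MaxKeepAliveRequests per connection:

```python
# Model: per-request file handles whose cleanups are registered on the
# connection's lifetime instead of the request's lifetime (the bug).

class Connection:
    def __init__(self):
        self.cleanups = []  # stands in for a connection-lifetime pool

    def serve_request(self, open_files):
        fd = object()       # stands in for an open file descriptor
        open_files.append(fd)
        # Bug being modeled: the close is deferred to connection teardown,
        # not run when the request finishes.
        self.cleanups.append(lambda: open_files.remove(fd))

    def close(self):
        for cleanup in self.cleanups:
            cleanup()
        self.cleanups.clear()

MAX_KEEPALIVE_REQUESTS = 100   # httpd default
CONCURRENCY = 20               # ab -c 20

open_files = []
conns = [Connection() for _ in range(CONCURRENCY)]
for conn in conns:
    for _ in range(MAX_KEEPALIVE_REQUESTS):
        conn.serve_request(open_files)

peak = len(open_files)
print(peak)                    # 2000 handles held at once,
                               # above a typical `ulimit -n` of 1024

for conn in conns:
    conn.close()
print(len(open_files))         # 0 only after every connection closes
```

With the correct lifetime (cleanup on the request, not the connection), the peak would instead be bounded by the concurrency, i.e. 20.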

I observed the behaviour with 2.3.1 on Linux (SLES10, 64-bit) with the
Event, Worker and Prefork MPMs. I haven't yet had time to retest with 2.2.

It should only happen in 2.3.x/trunk because the EOR bucket is a new
feature to let MPMs do async writes once the handler has finished running.

And yes, this sounds like a nasty bug.

I verified I can't reproduce with the same platform and 2.2.11.

I'm not sure I understand the EOR asynchronicity well enough to analyze the root cause.

Rainer
