On Tue, 24 Sep 2002, Greg Stein wrote:

> Just ran into an interesting bug, and I've got a proposal for a way to solve
> it, too. (no code tho :-)
> 
> If a CGI writes to stderr [more than the pipe's buffer has room for], then
> it will block on that write. Meanwhile, when Apache goes to deliver the CGI
> output to the network, it will *block* on a read from the CGI's output.
> 
> See the deadlock yet? :-)
> 
> The CGI can't generate output because it needs the write-to-stderr to
> complete. Apache can't drain stderr until the read-from-stdout completes. In
> fact, Apache won't even drain stderr until the CGI is *done* (it must empty
> the PIPE bucket passed into the output filters).
> 
> Eventually, the deadlock resolves itself when the read from the PIPE bucket
> times out.
> 
> [ this read behavior occurs in the C-L filter ]
> 
> [ NOTE: it appears this behavior is a regression from Apache 1.3. In 1.3, we
>   just hook stderr into the error log. In 2.0, we manually read lines, then
>   log them (with timestamps) ]

Is there a reason we don't go back to what 1.3 did?  That would seem to be
the easiest way to solve this problem.  I am pretty sure the reason this
was changed originally is that the first version of apr_proc_create
couldn't do what 1.3 did, although we should double-check that.

Ryan

