Just ran into an interesting bug, and I've got a proposal for a way to solve
it, too. (no code tho :-)

If a CGI writes to stderr [more than the pipe's buffer has room for], then
it will block on that write. Meanwhile, when Apache goes to deliver the CGI
output to the network, it will *block* on a read from the CGI's stdout.

See the deadlock yet? :-)

The CGI can't generate output because it needs the write-to-stderr to
complete. Apache can't drain stderr until the read-from-stdout completes. In
fact, Apache won't even drain stderr until the CGI is *done* (it must empty
the PIPE bucket passed into the output filters).

Eventually, the deadlock resolves itself when the read from the PIPE bucket
times out.

[ this read behavior occurs in the Content-Length (C-L) filter ]

[ NOTE: it appears this behavior is a regression from Apache 1.3. In 1.3, we
  just hook stderr into the error log. In 2.0, we manually read lines, then
  log them (with timestamps) ]
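
[ if anybody wants to reproduce this, here is an untested sketch of the kind
  of CGI I'd expect to trip it -- the iteration count is just a guess;
  anything that exceeds the real pipe buffer (often 64k) should do ]

#include <stdio.h>

int main(void)
{
    int i;

    /* fill the stderr pipe; once the kernel buffer is full, this write
     * blocks because Apache isn't draining stderr yet */
    for (i = 0; i < 100000; i++) {
        fputs("spamming stderr to fill the pipe buffer\n", stderr);
    }

    /* never reached until stderr drains -- meanwhile Apache is blocked
     * reading this stdout pipe */
    fputs("Content-Type: text/plain\n\n", stdout);
    fputs("made it\n", stdout);
    return 0;
}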


I believe the solution is to create a new CGI bucket type. The read()
function would read from stdout, similar to a normal PIPE bucket (e.g.
create a new HEAP bucket with the results). However, the bucket *also* holds
the stderr pipe from the CGI script. When you do a bucket read(), it
actually blocks on both pipes. If data comes in from stderr, then it drains
it and sends that to the error log. Data that comes in from stdout is
handled normally.

This system allows you to keep stderr drained, yet still provide for
standard PIPE style operation on the stdout pipe.
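
To make that concrete, here is a rough, untested sketch of what the read
function could look like, built on APR's pollset API. The names
(cgi_bucket_data, cgi_bucket_read) are placeholders, the pollset setup is
not shown, and the bucket-morphing details are glossed over:

#include "httpd.h"
#include "http_log.h"
#include "apr_poll.h"
#include "apr_buckets.h"

/* placeholder private data for the proposed CGI bucket type */
struct cgi_bucket_data {
    apr_pollset_t *pollset;   /* watches both child pipes */
    apr_file_t *out;          /* the script's stdout */
    apr_file_t *err;          /* the script's stderr */
    request_rec *r;           /* so stderr can go to the error log */
};

static apr_status_t cgi_bucket_read(apr_bucket *b, const char **str,
                                    apr_size_t *len, apr_read_type_e block)
{
    struct cgi_bucket_data *data = b->data;
    apr_interval_time_t timeout = (block == APR_NONBLOCK_READ) ? 0 : -1;
    apr_status_t rv;

    for (;;) {
        const apr_pollfd_t *results;
        apr_int32_t num, i;

        rv = apr_pollset_poll(data->pollset, timeout, &num, &results);
        if (rv != APR_SUCCESS) {
            return rv;
        }

        for (i = 0; i < num; i++) {
            if (results[i].desc.f == data->err) {
                /* drain stderr and push it to the error log so the
                 * script never wedges on a full stderr pipe */
                char errbuf[HUGE_STRING_LEN];
                apr_size_t nbytes = sizeof(errbuf) - 1;

                if (apr_file_read(data->err, errbuf, &nbytes)
                    == APR_SUCCESS) {
                    errbuf[nbytes] = '\0';
                    ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, data->r,
                                  "%s", errbuf);
                }
            }
            else {
                /* stdout: behave like a normal PIPE bucket -- read into
                 * a buffer and morph ourselves into a HEAP bucket. (a
                 * real version would also insert a fresh CGI bucket
                 * after us so the rest of the stream stays readable) */
                char *buf = apr_bucket_alloc(APR_BUCKET_BUFF_SIZE, b->list);

                *len = APR_BUCKET_BUFF_SIZE;
                rv = apr_file_read(data->out, buf, len);
                if (rv != APR_SUCCESS && rv != APR_EOF) {
                    apr_bucket_free(buf);
                    return rv;
                }
                apr_bucket_heap_make(b, buf, *len, apr_bucket_free);
                *str = buf;
                return APR_SUCCESS;
            }
        }
        /* only stderr was ready; loop and poll again */
    }
}

The key point is that a single poll covers both pipes, so stderr gets
drained as a side effect of every read, while the stdout path still ends up
doing the usual read-into-a-HEAP-bucket dance.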

Thoughts?

Anybody adventurous enough to code it? :-)

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/
