On Friday 20 January 2012, Joe Orton wrote:
> The main loop in the core output filter (rewritten since 2.2) will
> try to read the entire passed-in brigade into RAM for
> CGI/PIPE-like mutating bucket types.  :( :( We have been trying to
> bash this kind of bug since 2.0.x days, and now the *core output
> filter* itself is doing it, yegads.
> 
> The fix may be as simple as adding an appropriate "break" to the
> loop, I'm not sure yet.  It should be doing the non-block read,
> FLUSH & retry thing as well, so that streaming from slow
> generators works properly.
> 
> prefork/mod_cgi, serve this as "nph-killme.pl" and wait for the
> fireworks:
> 
> #!/usr/bin/perl
> 
> print "HTTP/1.0 200 OK\r\n";
> print "Transfer-Encoding: chunked\r\n";
> print "\r\n";
> 
> print "5\r\nhello\r\n" while (1);

This is a bigger problem. With the attached patch, the core output 
filter will flush data to the client when it has read more than 64K 
from the cgi bucket. Then it will setaside the remaining part of the 
passed brigade for later write completion. Here it hits the second 
part of the problem: ap_save_brigade() will call apr_bucket_read() on 
bucket types that don't implement the setaside method. And the cgi 
bucket doesn't.

To me, there seem to be two immediate solutions: either require that 
setaside be implemented for all bucket types, or disable async write 
completion for requests involving such buckets. Or am I missing 
something?

diff --git a/server/core_filters.c b/server/core_filters.c
index 5cdcb99..84954ae 100644
--- a/server/core_filters.c
+++ b/server/core_filters.c
@@ -396,6 +396,8 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
     if (new_bb != NULL) {
         for (bucket = APR_BRIGADE_FIRST(new_bb); bucket != APR_BRIGADE_SENTINEL(new_bb); bucket = APR_BUCKET_NEXT(bucket)) {
             if (bucket->length > 0) {
+                /* XXX: this is wrong for buckets of indeterminate length */
+                /* XXX: ctx->bytes_in is never used anyway !?! */
                 ctx->bytes_in += bucket->length;
             }
         }
@@ -474,6 +476,13 @@ apr_status_t ap_core_output_filter(ap_filter_t *f, apr_bucket_brigade *new_bb)
             if (bucket->length == (apr_size_t)-1) {
                 const char *data;
                 apr_size_t length;
+                if (flush_upto != NULL) {
+                    /*
+                     * We have already enough for the next flush, don't read
+                     * more data into memory.
+                     */
+                    break;
+                }
                 /* XXX support nonblocking read here? */
                 rv = apr_bucket_read(bucket, &data, &length, APR_BLOCK_READ);
                 if (rv != APR_SUCCESS) {
