> -----Original Message-----
> From: Joe Orton
> Sent: Friday, 20 February 2009 10:15
> To: [email protected]
> Subject: Re: FLUSH, filtering, setaside, etc (was Re: Problems with
> EOS optimisation in ap_core_output_filter() and file buckets.)
>
> On Thu, Feb 19, 2009 at 10:00:50PM +0100, Ruediger Pluem wrote:
> > On 02/19/2009 12:32 PM, Joe Orton wrote:
> ...
> > > @@ -497,13 +500,17 @@
> > >              next = APR_BUCKET_NEXT(bucket);
> > >          }
> > >          bytes_in_brigade += bucket->length;
> > > -        if (!APR_BUCKET_IS_FILE(bucket)) {
> > > +        if (APR_BUCKET_IS_FILE(bucket)) {
> > > +            num_files_in_brigade++;
> > > +        }
> > > +        else {
> > >              non_file_bytes_in_brigade += bucket->length;
> > >          }
> > >      }
> > > }
> > >
> > > -    if (non_file_bytes_in_brigade >= THRESHOLD_MAX_BUFFER) {
> > > +    if (non_file_bytes_in_brigade >= THRESHOLD_MAX_BUFFER
> > > +        || num_files_in_brigade >= THRESHOLD_MAX_FILES) {
> >
> > If the 16 FDs were split over more than one brigade and the brigades
> > before us were set aside, the FDs already belong to the wrong pool
> > (the connection pool). Deleting a file bucket doesn't close the FD it
> > uses.
>
> Not sure what the concern is there - this loop is iterating over the
> concatenation of the buffered brigade and the "new" brigade (right?), so
> it will count the total number of buckets which are potentially left
> buffered after this c_o_f invocation terminates.
The problem is that the apr_file objects are originally created in the
request pool. If you now send two brigades down the chain, say the first
one with 15 files and the second one with 1 file, and the first one gets
set aside for later writing for whatever reason, then the 15 apr_file
objects from the first brigade move over to the connection pool and thus
do not get closed until the connection gets closed (which may take a long
time). This is where the leak happens.

So I guess for a final solution we must find a way to avoid set asides in
the connection pool.
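Something like the following stand-alone sketch against APR shows the
effect I mean. It is not a patch, just an illustration of roughly what
happens when ap_save_brigade() sets a file bucket aside into a pool that
outlives the request; the file name and the 100-byte length are arbitrary
placeholders and error checking is left out:

    #include <stdio.h>
    #include <apr_general.h>
    #include <apr_pools.h>
    #include <apr_file_io.h>
    #include <apr_buckets.h>

    int main(void)
    {
        apr_pool_t *conn_pool, *req_pool;
        apr_file_t *fd;
        apr_bucket_alloc_t *ba;
        apr_bucket *e;

        apr_initialize();
        apr_pool_create(&conn_pool, NULL);      /* lives as long as the connection */
        apr_pool_create(&req_pool, conn_pool);  /* lives as long as the request */

        /* The handler opens the file in the request pool ... */
        apr_file_open(&fd, "/etc/hosts", APR_READ, APR_OS_DEFAULT, req_pool);

        ba = apr_bucket_alloc_create(conn_pool);
        e  = apr_bucket_file_create(fd, 0, 100, req_pool, ba);

        /* ... and the output filter cannot write it out yet, so the bucket
         * gets set aside into the connection pool.  For a file bucket this
         * ends up in apr_file_setaside(), which re-registers the
         * descriptor's cleanup in conn_pool. */
        apr_bucket_setaside(e, conn_pool);

        /* End of the request: the request pool goes away, but the FD stays
         * open because its cleanup now lives in conn_pool.  Deleting the
         * bucket doesn't close it either. */
        apr_pool_destroy(req_pool);
        apr_bucket_delete(e);

        printf("FD is still open here\n");
        apr_pool_destroy(conn_pool);            /* only this finally closes the FD */
        apr_terminate();
        return 0;
    }

With 15 such buckets per set-aside brigade you keep 15 descriptors open
per connection until the connection pool is destroyed.

Regards

Rüdiger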
