On 10/18/2018 11:09 AM, Ruediger Pluem wrote:
>
>
> On 10/17/2018 07:47 PM, Joe Orton wrote:
>> On Wed, Oct 17, 2018 at 03:32:34PM +0100, Joe Orton wrote:
>>> I see constant memory use for a simple PROPFIND/depth:1 for the
>>> attached, though I'm not sure this is sufficient to repro the problem
>>> you saw before.
>>
>> Curiously inefficient writev use when stracing the process, though,
>> dunno what's going on there (trunk/prefork):
>>
>> writev(46, [{iov_base="\r\n", iov_len=2}], 1) = 2
>> writev(46, [{iov_base="1f84\r\n", iov_len=6}], 1) = 6
>> writev(46, [{iov_base="<D:lockdiscovery/>\n<D:getcontent"...,
>> iov_len=7820}], 1) = 7820
>> writev(46, [{iov_base="<D:supportedlock>\n<D:lockentry>\n"...,
>> iov_len=248}], 1) = 248
>>
>>
>
> The reason is ap_request_core_filter: it iterates over the brigade and
> hands each bucket down to ap_core_output_filter individually. IMHO a bug.
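
For illustration, the current behaviour boils down to roughly the loop
below (a paraphrased sketch of the trunk code, not the exact source, and
with error handling trimmed):

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t pass_per_bucket(ap_filter_t *f, apr_bucket_brigade *bb,
                                    apr_bucket_brigade *tmp_bb)
{
    apr_status_t status = APR_SUCCESS;

    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *bucket = APR_BRIGADE_FIRST(bb);

        /* Each bucket is moved into tmp_bb and passed down on its own,
         * so ap_core_output_filter sees one tiny brigade per bucket and
         * ends up issuing a separate writev() for each. */
        APR_BUCKET_REMOVE(bucket);
        APR_BRIGADE_INSERT_TAIL(tmp_bb, bucket);
        status = ap_pass_brigade(f->next, tmp_bb);
        apr_brigade_cleanup(tmp_bb);
        if (status != APR_SUCCESS) {
            break;
        }
    }
    return status;
}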
How about the attached patch to fix this?

Regards

Rüdiger
Index: server/request.c
===================================================================
--- server/request.c (revision 1843972)
+++ server/request.c (working copy)
@@ -2102,6 +2102,8 @@
             seen_eor = 1;
         }
         else {
+            int morphing_bucket = 0;
+
             /* if the core has set aside data, back off and try later */
             if (!flush_upto) {
                 if (ap_filter_should_yield(f->next)) {
@@ -2124,13 +2126,25 @@
                 if (status != APR_SUCCESS) {
                     break;
                 }
+                morphing_bucket = 1;
             }
             /* pass each bucket down the chain */
             APR_BUCKET_REMOVE(bucket);
             APR_BRIGADE_INSERT_TAIL(tmp_bb, bucket);
+
+            /*
+             * If we had a morphing bucket, pass things down the chain now
+             * to avoid consuming too much memory.
+             */
+            if (morphing_bucket) {
+                status = ap_pass_brigade(f->next, tmp_bb);
+                apr_brigade_cleanup(tmp_bb);
+            }
         }
+    }
+    if ((status == APR_SUCCESS) && !APR_BRIGADE_EMPTY(tmp_bb)) {
         status = ap_pass_brigade(f->next, tmp_bb);
         apr_brigade_cleanup(tmp_bb);
     }
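
For clarity: a "morphing" bucket here is one whose length is unknown until
it is read, and reading it can materialise a large amount of data in
memory. That is why the patch passes the brigade down the chain right
after such a read, while ordinary buckets now accumulate in tmp_bb and are
passed down as a single brigade after the loop, which lets
ap_core_output_filter coalesce them into one writev(). A minimal
stand-alone illustration of the check involved (my own sketch, not part of
the patch):

#include "apr_buckets.h"

/* A morphing bucket reports an unknown length ((apr_size_t)-1) until
 * apr_bucket_read() forces it to morph into something concrete. */
static int is_morphing_bucket(const apr_bucket *bucket)
{
    return bucket->length == (apr_size_t)-1;
}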