On 10/17/2018 07:47 PM, Joe Orton wrote:
> On Wed, Oct 17, 2018 at 03:32:34PM +0100, Joe Orton wrote:
>> I see constant memory use for a simple PROPFIND/depth:1 for the 
>> attached, though I'm not sure this is sufficient to repro the problem 
>> you saw before.

Thanks for having a look. My test case was opening a large directory
(about 50,000 files) with Dolphin under Red Hat 6. Memory usage remains
constant after the patch, so the patch seems to make sense.

Any idea if we could hit a similar issue with the two other remaining callers 
of dav_open_propdb?

dav_gen_supported_live_props
dav_method_proppatch
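
For reference, here is a minimal sketch (in plain APR terms, not the
actual mod_dav code) of the per-resource subpool pattern I understand
the patch to apply; process_one() is a hypothetical stand-in for the
per-resource PROPFIND work:

#include "apr_pools.h"

extern void process_one(apr_pool_t *scratch, int i);  /* hypothetical */

static apr_status_t walk_collection(apr_pool_t *parent, int nitems)
{
    apr_pool_t *scratch;
    apr_status_t rv;
    int i;

    rv = apr_pool_create(&scratch, parent);
    if (rv != APR_SUCCESS)
        return rv;

    for (i = 0; i < nitems; i++) {
        /* all per-item allocations come from 'scratch' */
        process_one(scratch, i);

        /* discard this item's allocations before the next iteration,
         * so peak memory is bounded by one resource, not 50,000 */
        apr_pool_clear(scratch);
    }

    apr_pool_destroy(scratch);
    return APR_SUCCESS;
}

If those two callers allocate per resource from a pool that is never
cleared in this way, they could in principle show the same growth.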


> 
> I needed to also remove the new apr_pool_clear() there.  Is the repro 
> case for this something other than depth:1 PROPFIND?  Am seeing constant 
> memory use with
> 
> $ mkdir t/htdocs/modules/dav/
> $ (cd t/htdocs/modules/dav/; 
>   seq 1 100000 | sed 's/^/file.b/;s/$/.txt/' | xargs -n100 touch )
> $ ./t/TEST -start
> 
> and then run a PROPFIND against /modules/dav/
> 
> Curiously inefficient writev use when stracing the process, though, 
> dunno what's going on there (trunk/prefork):
> 
> writev(46, [{iov_base="\r\n", iov_len=2}], 1) = 2
> writev(46, [{iov_base="1f84\r\n", iov_len=6}], 1) = 6
> writev(46, [{iov_base="<D:lockdiscovery/>\n<D:getcontent"..., iov_len=7820}], 1) = 7820
> writev(46, [{iov_base="<D:supportedlock>\n<D:lockentry>\n"..., iov_len=248}], 1) = 248
> 
> 

The reason is ap_request_core_filter: it iterates over the brigade and
hands each bucket down to ap_core_output_filter individually. IMHO a bug.
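
A simplified illustration (not the actual filter code) of why that
produces one small writev() per bucket: moving each bucket into a
temporary brigade and passing it down on its own defeats the coalescing
that ap_core_output_filter could otherwise do when handed the whole
brigade at once.

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* per-bucket pattern, roughly what the strace output suggests */
static apr_status_t pass_one_by_one(ap_filter_t *f, apr_bucket_brigade *bb,
                                    apr_bucket_brigade *tmp)
{
    apr_status_t rv;

    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *e = APR_BRIGADE_FIRST(bb);

        APR_BUCKET_REMOVE(e);
        APR_BRIGADE_INSERT_TAIL(tmp, e);
        rv = ap_pass_brigade(f->next, tmp);   /* one writev per bucket */
        if (rv != APR_SUCCESS)
            return rv;
        apr_brigade_cleanup(tmp);
    }
    return APR_SUCCESS;
}

/* coalescing alternative: one call, core filter can batch the iovecs */
static apr_status_t pass_whole(ap_filter_t *f, apr_bucket_brigade *bb)
{
    return ap_pass_brigade(f->next, bb);
}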



Regards

Rüdiger
