On 06/24/2019 07:04 PM, Joe Orton wrote:
> On Fri, Feb 08, 2019 at 08:07:57AM +0100, Ruediger Pluem wrote:
>> On 11/13/2018 10:26 AM, Joe Orton wrote:
>>> On Mon, Nov 12, 2018 at 08:26:48AM +0100, Ruediger Pluem wrote:
>>>> The discussion died down a little because of the other issue (frequent 
>>>> writev calls). I know that the liveprops issue is not fixed yet, but 
>>>> I guess it makes sense if you commit the patch you posted here 
>>>> already.
>>>
>>> Sorry, I was looking at this again and the repro case I thought worked 
>>> was still showing leaks, so I got stuck.  Will come back to it hopefully 
>>> later this week.
>>>
>>
>> Going through the STATUS of 2.4.x I became aware that this died down a 
>> little again.
>> Any new ideas in the meantime?
> 
> I spent another half an afternoon looking at this.
> 
> I'm trying a PROPFIND/depth: 1 for *just* DAV: getcontenttype - and I 
> still see steadily increasing memory use during the response with trunk 
> or trunk plus my patch.
> 
> If I break in sbrk, the memory allocation which triggers a heap expansion 
> is within some r->pool for the subrequest, so I suppose the bug is 
> around subreq handling somehow, but it's far from obvious.

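For readers following along, the request in question looks roughly like 
this (host and path are placeholders):

PROPFIND /dav/bigdir/ HTTP/1.1
Host: localhost
Depth: 1
Content-Type: application/xml

<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop><D:getcontenttype/></D:prop>
</D:propfind>
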
Thanks for spending time on this again. By coincidence I recently stumbled 
across it again on a system that has WebDAV-enabled folders containing a 
lot of files (e.g. 10,000).
Over a short time the single process of this webserver instance with 10 
threads, which does nothing but serve this content via WebDAV, grew to 
several GB.
Running dump_all_pools showed that all pools and allocator free lists 
looked fine (MaxMemFree is set to 2048 globally).
I was able to limit the memory consumption over time to sane values by 
setting the following environment variables:

export MALLOC_MMAP_THRESHOLD_=8192
export MALLOC_ARENA_MAX=3

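For completeness, the same tunables can also be set programmatically at 
process start via mallopt() - a minimal standalone sketch, with the values 
simply mirroring the environment variables above:

#include <malloc.h>

int main(void)
{
    /* Same effect as the environment variables above; must happen
     * before threads are created and allocations are made. */
    mallopt(M_MMAP_THRESHOLD, 8192); /* serve requests >= 8 KiB via mmap() */
    mallopt(M_ARENA_MAX, 3);         /* cap the number of malloc arenas */

    /* ... rest of the program ... */
    return 0;
}

mmap()ed blocks can be returned to the OS individually on free(), which is 
why lowering the threshold avoids the heap growth described below.
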
The system is running Red Hat 7.5. So my conclusion was that glibc was not 
able to return the memory to the OS when it had been allocated via sbrk, 
probably because other blocks that remained in use sat at the top of the 
heap. OTOH the memory also does not seem to get reused by the next request, 
which is kind of weird given that APR requests page-size-aligned blocks, 
which should reduce fragmentation.
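
The top-of-heap effect is easy to demonstrate in isolation - a minimal 
standalone sketch, not taken from httpd: glibc can only return sbrk-backed 
memory by shrinking the program break, so a single live block near the top 
keeps everything below it resident.

#include <malloc.h>
#include <stdlib.h>

enum { N = 10000, SZ = 8192 };

int main(void)
{
    void *blocks[N];

    for (int i = 0; i < N; i++)
        blocks[i] = malloc(SZ);   /* ~80 MB, grows the heap via brk */

    for (int i = 0; i < N - 1; i++)
        free(blocks[i]);          /* free all but the last, topmost block */

    malloc_stats();               /* heap is still ~80 MB: the one live
                                     block pins the program break */

    free(blocks[N - 1]);
    malloc_trim(0);
    malloc_stats();               /* now the heap shrinks */
    return 0;
}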


Regards

Rüdiger
