On Tue, Jun 25, 2019 at 08:49:08AM +0100, Joe Orton wrote:
> Unless I am missing something the apr_dir_open/apr_dir_read API design 
> is always going to have memory use proportional to directory size 
> because apr_dir_read() duplicates the filename into the directory's 
> pool.  We need an apr_dir_pread() or whatever which takes a pool 
> argument. :(
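
For concreteness, the proposal amounts to apr_dir_read() with an
explicit result pool.  A sketch only, since nothing like this is
committed and the apr_dir_pread() name is just the placeholder from
the quote:

    APR_DECLARE(apr_status_t) apr_dir_pread(apr_finfo_t *finfo,
                                            apr_int32_t wanted,
                                            apr_dir_t *thedir,
                                            apr_pool_t *pool);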

OK, yes: a pool-taking read variant plus an iterpool in 
dav_fs_walker() fixes it.
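
The iterpool change is the standard APR pattern: create a subpool
once, clear it at the top of every iteration, and make all per-entry
allocations from it, so memory use stays flat regardless of directory
size.  A minimal sketch, not the committed dav_fs_walker() code, with
apr_dir_pread() again being the hypothetical pool-taking variant from
the quote:

    #include <apr_pools.h>
    #include <apr_file_info.h>

    static apr_status_t walk_dir(apr_dir_t *dirp, apr_pool_t *pool)
    {
        apr_pool_t *iterpool;
        apr_finfo_t dirent;
        apr_status_t rv;

        apr_pool_create(&iterpool, pool);
        for (;;) {
            apr_pool_clear(iterpool); /* drop last entry's allocations */
            rv = apr_dir_pread(&dirent, APR_FINFO_DIRENT, dirp, iterpool);
            if (rv != APR_SUCCESS)
                break;
            /* ... per-entry work allocates from iterpool only ... */
        }
        apr_pool_destroy(iterpool);
        return (rv == APR_ENOENT) ? APR_SUCCESS : rv;
    }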

I've been using the Python "psrecord" tool (from pip) to check this, 
rather than my traditional method of eyeballing top.  Here are the 
charts from running a PROPFIND 10x against ~100K files (an example 
invocation follows the list), for

a) trunk - steep gradient, ~20MB consumed
b) trunk plus the patch from my previous post - ~5MB?
c) (b) plus an iterpool in dav_fs_walker - flat
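
A psrecord invocation along these lines would produce such charts; 
the PID lookup, URL and Depth header here are illustrative, while 
--plot, --interval and --include-children are stock psrecord options:

    psrecord $(pgrep -o httpd) --include-children --interval 0.5 \
        --plot propfind-memory.png

    # in another shell: drive the server with 10 PROPFINDs
    for i in $(seq 10); do
        curl -s -X PROPFIND -H "Depth: infinity" \
            http://localhost/dav/ > /dev/null
    done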

So I think (b) is fine to commit, but with the recursion in 
dav_fs_walker I am not at all sure this stuff is safe yet; it will 
need some more testing.
