> > I noticed this while looking at Subversion issue #602. It
> > is about Subversion consuming too much memory when importing
> > a large tree.
> > http://subversion.tigris.org/issues/show_bug.cgi?id=602
> >
> > I found that memory consumption doesn't go too high if I
> > compile APR with --enable-pool-debug, so I glanced at
> > memory/unix/apr_pools.c. There I found that the non-DEBUG
> > build does not free any memory unless the pool holding the
> > allocator is destroyed or cleared. I think it should free
> > memory if it already has enough memory in its free list.
>
> This is the point of pools. The idea is that you should hit a steady
> state quickly. Basically, one request goes through and allocates
> all of its memory out of pools. The next time that same request is
> sent, it should use the same amount of memory. Every other request
> will use either a little more or a little less memory, but at some
> point you will get a request that uses more memory than any other
> request, and that is how large your pool will stay forever, which
> means that you will no longer allocate memory from the system.
>
> If your pools are growing too large, then you most likely need to
> split the allocation into multiple sub-pools, so that the memory is
> returned and can be used by later operations.
>
> Ryan
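For concreteness, the sub-pool pattern Ryan describes would look
something like this minimal sketch against APR's public pool API;
process_entry() is a hypothetical stand-in for the per-item work:

#include <apr_general.h>
#include <apr_pools.h>

/* Hypothetical per-item work: everything it allocates comes
 * from the sub-pool passed in. */
static void process_entry(apr_pool_t *pool, int i)
{
    char *buf = apr_palloc(pool, 1024);  /* scratch space, etc. */
    (void)buf; (void)i;
}

int main(void)
{
    apr_pool_t *root, *subpool;

    apr_initialize();
    apr_pool_create(&root, NULL);

    /* One sub-pool, cleared every iteration: the memory one
     * entry used goes back to the allocator's freelist and is
     * reused by the next entry, so the process hits a steady
     * state instead of growing with the number of entries. */
    apr_pool_create(&subpool, root);
    for (int i = 0; i < 100000; i++) {
        process_entry(subpool, i);
        apr_pool_clear(subpool);
    }
    apr_pool_destroy(subpool);

    apr_pool_destroy(root);
    apr_terminate();
    return 0;
}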
There is that, although I must admit that I have a 'hi free' patch
lying around. The idea is to free() all memory on the freelist when
the freelist size goes over a certain threshold (like a few MB).
I would need some feedback on this, though.

Sander
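Not having seen the patch, I'd guess its core looks something like the
sketch below. The node structure and threshold here are made-up
illustrations, not APR's actual allocator internals (which track
memory in apr_memnode_t chunks on size-indexed freelists):

#include <stdlib.h>
#include <stddef.h>

/* Hypothetical freelist node; blocks are assumed to have come
 * from malloc() originally. */
typedef struct node {
    struct node *next;
    size_t       size;   /* usable size of this block */
} node;

typedef struct freelist {
    node  *head;
    size_t total;        /* bytes currently held on the list */
    size_t max_free;     /* threshold, e.g. a few MB */
} freelist;

/* Return a block to the freelist; once the list holds more than
 * max_free bytes, hand everything back to the system with free()
 * instead of hoarding it. */
static void freelist_return(freelist *fl, node *n)
{
    n->next = fl->head;
    fl->head = n;
    fl->total += n->size;

    if (fl->total > fl->max_free) {
        node *cur = fl->head;
        while (cur) {
            node *next = cur->next;
            free(cur);
            cur = next;
        }
        fl->head = NULL;
        fl->total = 0;
    }
}

(APR's allocator did later grow apr_allocator_max_free_set() to cap
the freelist in roughly this way.)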