On Mar 6, 2015, at 5:26 PM, Steve Cobb <stevecob...@yahoo.com> wrote:
> I am busily reading web pages about tuning jemalloc, but would like to get
> some direct comments on how aggressive jemalloc is at releasing freed
> memory to the system - either via munmap or madvise(MADV_DONTNEED).
> 
> The problem we are facing - this is using glibc malloc, on embedded systems
> running Linux - is that we have applications that can scale way up in size,
> then scale way down. These systems have no swap partition. Using some tools
> to dump the heaps of these applications after the scale-down, we find large
> chunks of memory retained on malloc free lists, but none of that memory can
> be trimmed from the heap and unmapped. This is on the order of hundreds of
> megabytes free, with contiguous blocks of 50M on the free lists.
> 
> It turns out that glibc malloc appears very reluctant to trim its arenas.
> In particular, the "main arena" is allocated via sbrk, and it can obviously
> only be shrunk from the top of the break. The mmap'd arenas, one would
> hope, would be more easily trimmed, but that does not seem to be the case.
> 
> So we are looking at jemalloc in hopes of solving this problem. 
> 
> I hope I have made my question clear. Can someone point out the basic
> implementation details here? The bottom-line question is: how aggressive is
> jemalloc at returning memory to the system, and are there any tuning knobs
> for this type of behavior?
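
For comparison with the glibc behavior described in the quote above, here is
a minimal sketch of the knobs glibc itself exposes. mallopt() and
malloc_trim() are documented glibc interfaces; the specific threshold value
and the skeleton program below are only illustrative, not recommendations.

    #include <malloc.h>   /* glibc: mallopt(), malloc_trim() */

    int main(void) {
        /* Route allocations of 256 KiB and up through mmap; such blocks are
         * munmap'd as soon as they are freed instead of being parked on an
         * arena free list.  (Setting this via mallopt also disables glibc's
         * dynamic adjustment of the threshold.) */
        mallopt(M_MMAP_THRESHOLD, 256 * 1024);

        /* ... application scales up, then scales back down ... */

        /* Ask glibc to give back whatever it can: a pad of 0 keeps nothing
         * extra at the top of the main (sbrk) heap, and newer glibc versions
         * also madvise() away whole free pages inside the other arenas.  It
         * still cannot unmap free blocks pinned under live allocations. */
        malloc_trim(0);
        return 0;
    }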

jemalloc is moderately aggressive about returning memory to the system by 
default, and it can be tuned to be very aggressive.  See the lg_dirty_mult 
option for more information.

        
http://www.canonware.com/download/jemalloc/jemalloc-latest/doc/jemalloc.html#opt.lg_dirty_mult
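
For concreteness, a minimal sketch of turning that knob up, assuming the
jemalloc versions current at the time of this thread (lg_dirty_mult was later
replaced by decay-based options). Per the page above, the default of 3 means
an 8:1 active:dirty page ratio; the value 8 below (256:1) is just an
illustrative "more aggressive" choice, and the toy program is hypothetical.
MALLOC_CONF and the malloc_conf global string are the documented ways to set
the option:

    /* Run-time alternative, no recompile needed:
     *   MALLOC_CONF="lg_dirty_mult:8" ./myapp
     */
    #include <stdlib.h>

    /* jemalloc reads this global string during initialization; 8 means at
     * most one dirty (freed but still resident) page is kept per 256 active
     * pages before purging kicks in. */
    const char *malloc_conf = "lg_dirty_mult:8";

    enum { NBLOCKS = 50000, BLKSZ = 4096 };   /* ~200 MB working set */
    static void *blocks[NBLOCKS];

    int main(void) {
        /* Scale up. */
        for (int i = 0; i < NBLOCKS; i++)
            blocks[i] = malloc(BLKSZ);

        /* Scale down.  The freed pages become "dirty"; the higher the
         * lg_dirty_mult, the sooner jemalloc returns them to the kernel via
         * madvise() instead of keeping them mapped for reuse. */
        for (int i = 0; i < NBLOCKS; i++)
            free(blocks[i]);

        return 0;
    }

An option value of -1 disables dirty page purging entirely, which is the
opposite of what is wanted here.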

Jason

_______________________________________________
jemalloc-discuss mailing list
jemalloc-discuss@canonware.com
http://www.canonware.com/mailman/listinfo/jemalloc-discuss
