On Apr 21, 2013, at 10:01 PM, vandana shah wrote:
> I have been trying to use jemalloc for my application and observed that the 
> RSS of the process keeps increasing.
> 
> I ran the application with valgrind to confirm that there are no memory leaks.
> 
> To investigate further, I collected jemalloc stats after running the test for 
> a few days; here is the summary for a run with narenas:1, tcache:false, 
> lg_chunk:24
> 
>  Arenas: 1
>  Pointer size: 8
>  Quantum size: 16
>  Page size: 4096
>  Min active:dirty page ratio per arena: 8:1
>  Maximum thread-cached size class: 32768
>  Chunk size: 16777216 (2^24)
>  Allocated: 24364176040, active: 24578334720, mapped: 66739765248
>  Current active ceiling: 24578621440
>  chunks: nchunks   highchunks    curchunks
>             3989         3978         3978
>  huge: nmalloc      ndalloc    allocated
>              3            2    117440512
>  
>  arenas[0]:
>  assigned threads: 17
>  dss allocation precedence: disabled
>  dirty pages: 5971898:64886 active:dirty, 354265 sweeps, 18261119 madvises, 
> 1180858954 purged
> 
> While in this state, the RSS of the process was at 54GB.
> 
> Questions:
> 1) The difference between RSS and jemalloc active is huge (more than 30 GB). 
> In my test, the difference was quite small in the beginning (about 4 GB) and 
> it kept increasing over time. That seems too high to be accounted for by 
> jemalloc data structures, overhead, etc. What else gets counted in process 
> RSS but not in active?

jemalloc is reporting very low page-level external fragmentation for your app: 
1.0 - allocated/active == 1.0 - 24364176040/24578334720 == 0.87%.  However, 
virtual memory fragmentation is quite high: 1.0 - active/mapped == 63.2%.
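
For reference, here's a minimal sketch (assuming jemalloc's mallctl() interface
and a build with --enable-stats) of how those two ratios can be computed at run
time from the global statistics:

#include <stdio.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>

static void
print_fragmentation(void)
{
    /* Refresh the statistics snapshot before reading it. */
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    mallctl("epoch", &epoch, &sz, &epoch, sz);

    size_t allocated, active, mapped;
    sz = sizeof(size_t);
    mallctl("stats.allocated", &allocated, &sz, NULL, 0);
    mallctl("stats.active", &active, &sz, NULL, 0);
    mallctl("stats.mapped", &mapped, &sz, NULL, 0);

    /* Unused space within active pages. */
    double page_frag = 1.0 - (double)allocated / (double)active;
    /* Mapped virtual memory that isn't backing active pages. */
    double vm_frag = 1.0 - (double)active / (double)mapped;

    printf("page-level external fragmentation: %.2f%%\n", 100.0 * page_frag);
    printf("virtual memory fragmentation: %.2f%%\n", 100.0 * vm_frag);
}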

> 2) The allocations are fairly random, sized between 8 bytes and 2MB. Are 
> there any known issues of fragmentation for particular allocation sizes?

If your application were to commonly allocate slightly more than one chunk, 
then internal fragmentation would be quite high, but at little actual cost to 
physical memory.  However, you are using 16 MiB chunks, and the stats say that 
there's only a single huge (112-MiB) allocation.
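
For illustration: huge allocations are rounded up to a multiple of the chunk 
size, so with lg_chunk:24 a hypothetical request of 16 MiB + 4 KiB would occupy 
a 32 MiB huge allocation -- nearly 50% internal fragmentation on paper, but the 
untouched tail pages are never faulted in, so they cost essentially no RSS.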

> 3) Is there a way to tune the allocations and reduce the difference?

I can't think of a way this could happen short of a bug in jemalloc.  Can you 
send me the complete statistics output (a sketch for dumping it follows the 
list below), and provide the following?

- jemalloc version
- operating system
- compile-time jemalloc configuration flags
- run-time jemalloc option flags
- brief description of what the application does
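
The simplest way to capture the full statistics is to let jemalloc dump them 
itself, either via MALLOC_CONF=stats_print:true (printed at exit) or by calling 
malloc_stats_print() directly; a minimal sketch:

#include <jemalloc/jemalloc.h>

/* Dump the complete jemalloc statistics.  Passing NULL for the write
 * callback uses the default output (stderr); the opts string can be used
 * to omit sections (see the malloc_stats_print() documentation). */
static void
dump_jemalloc_stats(void)
{
    malloc_stats_print(NULL, NULL, NULL);
}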

Hopefully that will narrow down the possible explanations.

Thanks,
Jason
_______________________________________________
jemalloc-discuss mailing list
[email protected]
http://www.canonware.com/mailman/listinfo/jemalloc-discuss
