On Oct 30, 2012, at 3:49 PM, Jason Evans wrote:

> The preference for allocating dirty runs was a solution to excessive dirty
> page purging.  However, the purging policy (as of jemalloc 3.0.0) is
> round-robin, justified only as a strategy for allowing dirty pages to
> accumulate in chunks before going to the considerable effort (including
> arena mutex operations) of scanning a chunk for dirty pages.  In retrospect
> I'm thinking maybe this was a bad choice, and that we should go back to
> scanning downward through memory to purge dirty pages.  The danger is that
> the linear overhead of scanning each chunk will cause a measurable
> performance degradation if high chunks routinely have many runs, only a few
> of which are unused dirty runs.  I think that problem can be solved with
> slightly more sophisticated hysteresis, though.

> I'll work on a diff for you to test, and see how it affects Firefox.  I'll
> do some testing with Facebook server loads too (quite different behavior
> from Firefox).  If this causes a major reduction in virtual memory usage
> for both workloads, it's probably the right thing to do, even speed-wise.

[...]

Jason (and Mike),

I've been following this closely, as I've been experiencing the same issue
with our use of jemalloc 3.  Our application performs a large number of
varied-size allocations and "leaks" several GB of virtual memory each day
due to fragmentation.  When you have a diff, I can provide a third data
point.

Thanks, 

Jim Kuhn
_______________________________________________
jemalloc-discuss mailing list
[email protected]
http://www.canonware.com/mailman/listinfo/jemalloc-discuss