On Mon, Sep 29, 2014 at 04:07:12PM -0700, Jason Evans wrote:
> On Sep 9, 2014, at 6:51 AM, Guilherme Goncalves <g...@mozilla.com>
> wrote:
> > | Will this sufficiently address your accounting concerns?  There's
> > | the potential to over-report active memory by nearly 1.2X in the
> > | worst case, but that's a lot better than nearly 2X as things
> > | currently are.
> > 
> > While that's definitely better than 2X over-reporting, I wonder if
> > we can't just expose the sum of all huge allocations rounded to a
> > page boundary as a new statistic, without actually changing the way
> > the mapping is done. That could give us the more accurate accounting
> > we want without causing fragmentation in the address space.
> > 
> > In more concrete terms, this would add a
> > "stats.arenas.<i>.huge.allocated_pages" statistic, reporting the
> > total size of huge allocations serviced by the i-th arena, but
> > rounded to pages and not chunks (while still mapping memory in
> > chunks as usual).
> > 
> > If I'm not missing anything, a patch to implement this would look
> > similar to, but be a lot less intrusive than, my first attempt [1].
> > Does this sound reasonable?
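
(For concreteness: rounding each huge allocation up to a chunk means a
request just over one chunk consumes two, hence the near-2X worst case
mentioned above, which page-granular accounting would avoid.  Reading
such a per-arena statistic would presumably look roughly like the
sketch below; note that "stats.arenas.0.huge.allocated_pages" is only
the key name proposed above, not one that exists today.)

#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Sketch: read the *proposed* per-arena, page-rounded huge statistic.
 * The mallctl key below is the name suggested in this thread, not an
 * existing one. */
static void
print_huge_allocated_pages(void)
{
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);

    /* Refresh the stats snapshot before reading. */
    mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

    size_t pages_bytes;
    sz = sizeof(pages_bytes);
    if (mallctl("stats.arenas.0.huge.allocated_pages", &pages_bytes,
        &sz, NULL, 0) == 0) {
        printf("arena 0 huge, page-rounded: %zu bytes\n", pages_bytes);
    }
}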
> 
> I want the sum of malloc_usable_size() for all extant allocations to
> remain the source of truth about how much memory the application has
> allocated, and I'm currently on a mission to make size class spacing
> uniform, so I'm loath to add exceptions before even finishing that.
> If 1.2X worst case is too loose a bound for your use case, one other
> possibility would be to add a configure option to create 8 size
> classes per size doubling rather than 4, so that the worst case is
> ~1.11X (or 16 size classes per doubling and 1.06X worst case overhead,
> etc.).  The size_classes.sh script requires only that a single
> constant be parametrized in order to make this possible.
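
As a toy illustration of that grouping (this only models the spacing
of classes within each size doubling, not what size_classes.sh
actually emits):

#include <stdio.h>

/* Toy model: within each size doubling [2^k, 2^(k+1)) there are
 * 2^lg_ngroup evenly spaced size classes, so finer groups mean less
 * rounding slop per request. */
static size_t
round_to_class(size_t request, unsigned lg_ngroup)
{
    if (request <= ((size_t)1 << lg_ngroup))
        return request;             /* tiny sizes: not modeled here */
    unsigned lg_base = 63 - __builtin_clzll(request - 1);
    size_t delta = (size_t)1 << (lg_base - lg_ngroup);  /* class spacing */
    return (request + delta - 1) & ~(delta - 1);        /* round up */
}

int main(void)
{
    size_t req = ((size_t)4 << 20) + 1;     /* just over 4 MiB */
    printf("4 classes/doubling:  %zu\n", round_to_class(req, 2));
    printf("8 classes/doubling:  %zu\n", round_to_class(req, 3));
    printf("16 classes/doubling: %zu\n", round_to_class(req, 4));
    return 0;
}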

The need is to approximate the amount of committed memory, as opposed
to the amount allocated. Changing the allocation properties doesn't
help much here.
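
For reference, the distinction already shows up in the global stats
mallctls (assuming a build with stats enabled): "stats.allocated"
tracks what the application asked for, rounded to size classes, while
"stats.active" and "stats.mapped" are closer to the committed/mapped
pages we care about here. Roughly:

#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Rough sketch: compare jemalloc's view of allocated bytes with the
 * pages/mappings behind them.  Requires a jemalloc built with stats. */
static void
dump_mem_views(void)
{
    uint64_t epoch = 1;
    size_t sz = sizeof(epoch);
    mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));  /* refresh stats */

    size_t allocated, active, mapped;
    sz = sizeof(size_t);
    mallctl("stats.allocated", &allocated, &sz, NULL, 0); /* size-class-rounded requests */
    mallctl("stats.active", &active, &sz, NULL, 0);       /* pages backing allocations */
    mallctl("stats.mapped", &mapped, &sz, NULL, 0);       /* address space mapped in chunks */

    printf("allocated=%zu active=%zu mapped=%zu\n", allocated, active, mapped);
}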

Mike