On Jul 17, 2014, at 11:13 AM, D'Alessandro, Luke K <[email protected]> wrote:
> I don’t have any other changes in place, just using git master and the
> `arena.*.chunk.{alloc,dalloc}` functionality to register a function that
> forwards to the original {alloc,dalloc} and then uses IB verbs
> registration/deregistration on the returned chunks.
Ah, I'd incorrectly assumed you were using a version of jemalloc without that
functionality. =)
> I don’t really want to use jemalloc exclusively for managing the pinned
> memory—in fact, it gets used to back all of the normal malloc/free calls in
> the code. We use mallocx()/dallocx() with the pinned arena directly for
> network-managed memory. Will the MALLOC_CONF cause us problems with the rest
> of our runtime? I could link something else first to deal with those
> routines, if this is a bad thing to disable in general (-lc?).
The MALLOC_CONF setting will affect the application as a whole, so until
jemalloc has a mechanism for controlling purging on a per-arena basis, you may
see increased physical memory usage, depending on application behavior,
because jemalloc won't be discarding the dirty pages in any of the other
arenas. I just created an issue on GitHub to make sure this use case isn't
forgotten in jemalloc 4:
https://github.com/jemalloc/jemalloc/issues/93
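Concretely, disabling purging process-wide looks something like the following (assuming the jemalloc 3.x lg_dirty_mult option, where -1 means "never purge"; the exact setting you ended up with may differ):

```shell
# Disable dirty-page purging for the entire process; jemalloc reads
# MALLOC_CONF once at startup, so it must be set before launch.
export MALLOC_CONF="lg_dirty_mult:-1"
# Then run the application as usual, e.g.:
#   ./my_app
```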
> It would be nice to have mallocx()/dallocx() take a “cache” reference
> instead of an arena reference, with the cache configured with arenas to
> handle cache misses, or something to deal with multithreaded access. Other
> than that we really like using the library, and as long as our network
> memory doesn’t move between cores frequently, this works well.
Are you suggesting a system in which each cache is uniquely identified, or one
in which every thread potentially has indexable caches [0, 1, 2, ...]? I've
given some thought to something similar to the latter: each thread invisibly
has a cache layered on top of any arena which is explicitly used for
allocation. I'm still in the idea phase on this, so I'm really interested to
hear any insights you have.
Thanks,
Jason
_______________________________________________
jemalloc-discuss mailing list
[email protected]
http://www.canonware.com/mailman/listinfo/jemalloc-discuss