I've noticed that an allocation request for a large chunk of memory (128GB) results in two calls to pages_map() (in src/chunk_mmap.c), consuming twice the virtual memory I requested. In a 64-bit world this is not a big problem, but I've recoded pages_map() to force allocation from an mmap'd SSD (instead of an anonymous mmap backed by swap), and the doubling causes me to run out of backing store.

What I'd like to understand is why pages_map() is called twice, with separate requests, for the single 128GB jemalloc allocation my application makes. The first allocation is followed by a call to pages_unmap() with an unmap size of 0 bytes, leaving it fully mapped, while the second allocation (which is slightly larger than 128GB) is trimmed to exactly 128GB by two subsequent pages_unmap() calls. This behavior seems very strange to me, and any explanation would be appreciated.
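
For reference, here is a minimal, self-contained sketch of the over-map-and-trim pattern I suspect is behind the trimming I see. The aligned_map() name, the CHUNK_ALIGN value, and the demo size are my own assumptions for illustration, not jemalloc's actual code:

#define _DEFAULT_SOURCE
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define CHUNK_ALIGN ((size_t)(4 << 20)) /* assumed 4MiB chunk alignment */

/*
 * Obtain a mapping of exactly `size` bytes aligned to CHUNK_ALIGN by
 * over-mapping and then unmapping the unaligned head and the tail.
 */
static void *
aligned_map(size_t size)
{
	/* Map size + alignment so an aligned sub-range must exist. */
	size_t alloc_size = size + CHUNK_ALIGN;
	char *addr = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return NULL;

	/* Trim the unaligned head, then the tail, leaving exactly size. */
	uintptr_t offset = (uintptr_t)addr & (CHUNK_ALIGN - 1);
	size_t lead = (offset == 0) ? 0 : CHUNK_ALIGN - offset;
	size_t trail = alloc_size - lead - size;

	if (lead != 0)
		munmap(addr, lead);
	if (trail != 0)
		munmap(addr + lead + size, trail);

	return addr + lead;
}

int
main(void)
{
	void *p = aligned_map((size_t)256 << 20); /* 256MiB for the demo */
	if (p == NULL)
		return 1;
	printf("aligned chunk at %p\n", p);
	return 0;
}

Mapping size + alignment guarantees an aligned sub-range exists somewhere inside the region, and the two munmap() calls release only the excess, which looks a lot like the pair of pages_unmap() trims I observe on the second, slightly larger allocation.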
Bill

