Status: Accepted
Owner: [email protected]
Labels: Type-Bug Priority-Medium newgc

New issue 1459 by [email protected]: NewGC: Return memory to the OS.
http://code.google.com/p/v8/issues/detail?id=1459

We do not currently return memory to the OS when entire pages become free. This applies both to large objects and completely empty regular pages.

In order to return pages to the OS we need to call a safe version of InNewSpace when scanning scan-on-scavenge pages. The current version of InNewSpace looks at the page header to read the new-space flags, but this doesn't work for garbage pointers into pages that have already been unmapped. There are three ways to solve this:
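To make the hazard concrete, here is a minimal sketch of a flag-based check of the kind described above. All names and the page layout are illustrative, not V8's actual MemoryChunk layout: the check masks the pointer down to its page start and reads a flag word stored in the page header, which means it dereferences the page itself and would fault on a garbage pointer into a page that has been returned to the OS.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Assumed 1MB pages, aligned to their size (illustrative constants).
constexpr uintptr_t kPageSize = uintptr_t{1} << 20;
constexpr uintptr_t kPageAlignmentMask = kPageSize - 1;

// Hypothetical page header: flag bit 0 means "this page is in new space".
struct PageHeader {
  uintptr_t flags;
};

// Current-style check: mask the pointer down to the page start and read the
// header.  This touches the page's own memory, so it is only safe while the
// page is still mapped.
inline bool InNewSpaceUnsafe(uintptr_t addr) {
  PageHeader* page =
      reinterpret_cast<PageHeader*>(addr & ~kPageAlignmentMask);
  return (page->flags & 1) != 0;
}
```
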

1) Put objects to be returned to the OS on a queue and don't actually free them until a precise sweep-and-clear of the pointer, map and cell spaces has removed all pointers from garbage pages to the to-be-freed pages. The simple version of this is to always use precise sweeping-and-clearing. That would cost throughput but not latency, since sweeping is now incremental (though we would have to wait for the incremental sweep to complete before we could return memory to the OS).
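Option 1 could be sketched as a deferred-release queue; this is an assumed shape, not V8's actual code, with Page left opaque and the unmap step injected by the caller:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

struct Page;  // opaque; stands in for a heap page or large-object chunk

// Hypothetical sketch of option (1): pages whose memory should go back to
// the OS are parked on a queue and only unmapped once a precise sweep has
// guaranteed no garbage pointers into them remain.
class PageReleaseQueue {
 public:
  void Defer(Page* p) { pending_.push_back(p); }

  // Called once the (possibly incremental) precise sweep of the pointer,
  // map and cell spaces has finished: every stale pointer into these pages
  // is gone, so unmapping is now safe.
  template <typename Unmap>
  size_t DrainAfterPreciseSweep(Unmap unmap) {
    size_t released = pending_.size();
    for (Page* p : pending_) unmap(p);
    pending_.clear();
    return released;
  }

 private:
  std::deque<Page*> pending_;
};
```
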

2) Keep the page bits, or a copy of them, in a byte-per-page array that covers the whole of memory. This is feasible on 32-bit architectures for page sizes larger than about 128k and may give other performance benefits. The byte-per-page array may also be feasible on x64, where current hardware has 48 bits of virtual address space: with a 1Mbyte page size this yields a 256Mbyte virtual mapping, which is per-process (not per-isolate). Like the 512Mbyte code space, this is a virtual reservation, not an actual memory requirement. The virtual address size of x64 CPUs roughly follows Moore's law, so we can expect it to grow as new chips are released and we ship new V8 versions.

3) Use exclusively mask-compare operations to determine whether a pointer is into newspace. This restricts us a little with respect to how we move pages into and out of newspace, though currently we don't make use of any flexibility in this area.
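If the whole of newspace is one contiguous, aligned, power-of-two-sized reservation (an assumption this sketch makes; the constants are illustrative), option 3 reduces membership to a single mask-and-compare on the pointer bits, which never touches memory and is therefore safe even for garbage pointers into unmapped pages:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative layout: a 16MB newspace reservation at an aligned base.
constexpr uintptr_t kNewSpaceSize = uintptr_t{1} << 24;
constexpr uintptr_t kNewSpaceBase = uintptr_t{1} << 28;  // size-aligned
constexpr uintptr_t kNewSpaceMask = ~(kNewSpaceSize - 1);

// Pure bit test on the pointer: no memory access, so no fault on pointers
// into pages that have been returned to the OS.
inline bool InNewSpaceMaskCompare(uintptr_t addr) {
  return (addr & kNewSpaceMask) == kNewSpaceBase;
}
```

The restriction mentioned above falls out directly: with this check, a page belongs to newspace purely by virtue of its address, so pages cannot be retagged in place as newspace or old space.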

--
v8-dev mailing list
[email protected]
http://groups.google.com/group/v8-dev