> This is the implementation of JEP 516: Ahead-of-Time Object Caching with Any 
> GC.
> 
> The current mechanism for caching heap objects in the AOT cache is to mmap
> bytes from a file directly into the GC-managed heap. This poses a
> compatibility challenge: every GC has to use a bit-for-bit identical object
> and reference format, because the layout decisions are made offline. This
> has so far meant that AOT cache optimizations requiring heap objects are
> not available when using ZGC. This work ensures that all GCs, including
> ZGC, are able to use the more advanced AOT cache functionality going
> forward.
> 
> This JEP introduces a new mechanism for archiving a primordial heap without
> such compatibility problems. It embraces online layouts: objects are
> allocated one by one and linked using the Access API, like normal objects.
> This way, archived objects quack like any other object to the GC, and the
> GC implementations are decoupled from the archiving mechanism.
> 
> The key to this GC-agnostic object loading is to represent references
> between objects as object indices (e.g. 1, 2, 3) instead of raw pointers
> that we hope all GCs will recognise in the same way. These object indices
> become the canonical way of identifying objects. One table maps object
> indices to archived objects, and another table maps object indices to the
> heap objects that have been allocated at runtime. This allows online
> linking of the materialized heap objects.
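>
> To make the idea concrete, here is a minimal sketch in Java (the actual
> implementation lives in HotSpot C++); the names ArchivedObject, IndexTables
> and materialize are hypothetical, and a plain Object[] stands in for a
> materialized heap object:
>
> ```java
> // Hypothetical sketch of index-based linking; not the HotSpot code.
> // References in the archive are stored as object indices, never raw pointers.
> final class ArchivedObject {
>     final int objectIndex;     // identity of this object in the archive
>     final int[] fieldIndices;  // referenced objects, by index (0 encodes null)
>
>     ArchivedObject(int objectIndex, int[] fieldIndices) {
>         this.objectIndex = objectIndex;
>         this.fieldIndices = fieldIndices;
>     }
> }
>
> // Single-threaded sketch; the real code also deals with concurrency.
> final class IndexTables {
>     // Table 1: object index -> archived form read from the AOT cache file.
>     private final ArchivedObject[] archived;
>     // Table 2: object index -> heap object materialized at runtime.
>     private final Object[] materialized;
>
>     IndexTables(ArchivedObject[] archived) {
>         this.archived = archived;
>         this.materialized = new Object[archived.length];
>     }
>
>     // Allocate an ordinary heap object for the given index, then link its
>     // fields online by resolving the stored indices through table 2.
>     Object materialize(int index) {
>         if (index == 0) return null;            // index 0 encodes null
>         Object existing = materialized[index];
>         if (existing != null) return existing;  // already materialized
>         ArchivedObject a = archived[index];
>         Object[] obj = new Object[a.fieldIndices.length];
>         materialized[index] = obj;              // publish before linking (handles cycles)
>         for (int i = 0; i < a.fieldIndices.length; i++) {
>             obj[i] = materialize(a.fieldIndices[i]);
>         }
>         return obj;
>     }
> }
> ```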
> 
> The main interface to the cached heap is roots. Different components can
> register object roots at dump time, and each root is assigned a root index.
> At runtime, a component can request a reference to the object at a given
> root index. The new implementation uses lazy materialization and
> concurrency. When a thread asks for a root object, it must ensure that the
> root object and all objects transitively reachable from it have been
> materialized. A new background thread, called the AOTThread, tries to
> perform the bulk of the work, so that processing the objects one by one
> does not impact startup on the bootstrapping thread.
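>
> A sketch of what the root interface and lazy materialization might look
> like, reusing the hypothetical IndexTables above; the class name and the
> single-lock synchronization are illustrative only, not how the real
> implementation works:
>
> ```java
> import java.util.HashMap;
> import java.util.Map;
>
> // Hypothetical sketch of lazily materialized roots with a background thread.
> final class CachedHeapRoots {
>     private final IndexTables tables;       // see previous sketch
>     private final int[] rootObjectIndices;  // root index -> object index
>     private final Map<Integer, Object> resolvedRoots = new HashMap<>();
>
>     CachedHeapRoots(IndexTables tables, int[] rootObjectIndices) {
>         this.tables = tables;
>         this.rootObjectIndices = rootObjectIndices;
>         // Background thread materializes roots ahead of time, so that by the
>         // time the bootstrapping thread asks, most of the work is already done.
>         Thread aotThread = new Thread(this::materializeAllRoots, "AOTThread");
>         aotThread.setDaemon(true);
>         aotThread.start();
>     }
>
>     // Called by any thread: ensures the root object and everything
>     // transitively reachable from it is materialized, then returns the root.
>     synchronized Object getRoot(int rootIndex) {
>         return resolvedRoots.computeIfAbsent(rootIndex,
>                 r -> tables.materialize(rootObjectIndices[r]));
>     }
>
>     private void materializeAllRoots() {
>         for (int r = 0; r < rootObjectIndices.length; r++) {
>             getRoot(r);
>         }
>     }
> }
> ```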
> 
> Since the background thread performs the bulk of the work, the archive is
> laid out so that this thread can run as fast as possible. Objects are laid
> out in DFS pre-order over the roots in the archive, such that the object
> indices and the DFS traversal order are the same. This way, the DFS
> traversal that the background thread performs is the same as linearly
> materializing the objects one by one in the order they are laid out in...
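>
> A sketch of the dump-time side of this, again with hypothetical names: the
> archive writer numbers objects in DFS pre-order over the roots, so the
> object index sequence, the traversal order and the layout order coincide,
> and the background thread can effectively stream through the archive:
>
> ```java
> import java.util.ArrayList;
> import java.util.IdentityHashMap;
> import java.util.List;
> import java.util.Map;
>
> // Hypothetical sketch of DFS pre-order index assignment at dump time.
> final class DumpTimeLayout {
>     interface ReferenceWalker {
>         List<Object> references(Object obj);  // outgoing references of an object
>     }
>
>     // layoutOrder is also the order objects are written to the archive;
>     // indexOf assigns indices 1, 2, 3, ... (0 is reserved for null).
>     private final Map<Object, Integer> indexOf = new IdentityHashMap<>();
>     private final List<Object> layoutOrder = new ArrayList<>();
>
>     void layout(List<Object> roots, ReferenceWalker walker) {
>         for (Object root : roots) {
>             dfs(root, walker);
>         }
>     }
>
>     private void dfs(Object obj, ReferenceWalker walker) {
>         if (obj == null || indexOf.containsKey(obj)) {
>             return;  // null, or already numbered on an earlier path
>         }
>         indexOf.put(obj, indexOf.size() + 1);  // pre-order: number on first visit
>         layoutOrder.add(obj);
>         for (Object ref : walker.references(obj)) {
>             dfs(ref, walker);
>         }
>     }
> }
> ```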

Erik Österlund has updated the pull request with a new target base due to a 
merge or a rebase. The pull request now contains 31 commits:

 - Merge branch 'master' into 8326035_JEP_object_streaming_v6
 - Comment update
 - Merge branch 'master' into 8326035_JEP_object_streaming_v6
 - Merge branch 'master' into 8326035_JEP_object_streaming_v6
 - remove include
 - Interned string value word accounting
 - Dont load all objects when JVMTI CFLH is on
 - Remove duplicate string dedup disabling when dumping
 - Accept interned strings sharing value with another string
 - Merge branch 'master' into 8326035_JEP_object_streaming_v6
 - ... and 21 more: https://git.openjdk.org/jdk/compare/b0536f9c...afdb11ee

-------------

Changes: https://git.openjdk.org/jdk/pull/27732/files
  Webrev: https://webrevs.openjdk.org/?repo=jdk&pr=27732&range=14
  Stats: 8721 lines in 106 files changed: 5943 ins; 2318 del; 460 mod
  Patch: https://git.openjdk.org/jdk/pull/27732.diff
  Fetch: git fetch https://git.openjdk.org/jdk.git pull/27732/head:pull/27732

PR: https://git.openjdk.org/jdk/pull/27732
