On Wed, 10 Sep 2025 07:56:42 GMT, Thomas Schatzl <tscha...@openjdk.org> wrote:

>> Hi all,
>> 
>>   please review this change that implements (currently Draft) JEP: G1: 
>> Improve Application Throughput with a More Efficient Write-Barrier.
>> 
>> The reason for posting this early is that this is a large change, and the 
>> JEP process is already taking very long with no end in sight, but we would 
>> like to have this ready by JDK 25.
>> 
>> ### Current situation
>> 
>> With this change, G1's post-write barrier will much more closely resemble 
>> Parallel GC's, as described in the JEP. The motivation is that G1 lags 
>> behind Parallel/Serial GC in throughput due to its larger barrier.
>> 
>> The main reason for the current barrier's shape is how G1 implements 
>> concurrent refinement:
>> * G1 tracks dirtied cards using sets (dirty card queue set - dcqs) of 
>> buffers (dirty card queues - dcq) containing the locations of dirtied cards. 
>> Refinement threads pick up their contents to re-refine. The barrier needs to 
>> enqueue card locations.
>> * For correctness, dirty card updates require fine-grained synchronization 
>> between mutator and refinement threads.
>> * Finally, there is generic code to avoid dirtying cards altogether 
>> (filters), to skip the synchronization and the enqueuing as much as 
>> possible.
>> 
>> These tasks require the current barrier to look as follows for an assignment 
>> `x.a = y` in pseudo code:
>> 
>> 
// Filtering
if (region(@x.a) == region(y)) goto done; // same-region check
if (y == null) goto done;                 // null value check
if (card(@x.a) == young_card) goto done;  // write-to-young-gen check
StoreLoad;                                // synchronize
if (card(@x.a) == dirty_card) goto done;  // already-dirty check

*card(@x.a) = dirty_card;

// Card tracking
enqueue(card-address(@x.a)) into thread-local-dcq;
if (thread-local-dcq is not full) goto done;

call runtime to move thread-local-dcq into dcqs

done:
>> 
>> 
>> Overall, this post-write barrier alone amounts to roughly 40-50 
>> instructions, compared to three or four(!) for Parallel and Serial GC.
>> 
>> The large inlined barrier not only has a large code footprint, but also 
>> prevents some compiler optimizations such as loop unrolling or inlining.
>> 
>> There are several papers showing that this barrier alone can decrease 
>> throughput by 10-20% 
>> ([Yang12](https://dl.acm.org/doi/10.1145/2426642.2259004)), which is 
>> corroborated by some benchmarks (see links).
>> 
>> The main idea for this change is to not use fine-grained synchronization 
>> between refinement and mutator threads, but coarse grained based on 
>> atomically switching c...
>
> Thomas Schatzl has updated the pull request with a new target base due to a 
> merge or a rebase. The pull request now contains 74 commits:
> 
>  - Merge branch 'master' into 8342382-card-table-instead-of-dcq
>  - * iwalulya: remove confusing comment
>  - * sort includes
>  - Merge branch 'master' into 8342382-card-table-instead-of-dcq
>  - * improve logging for refinement, making it similar to marking logging
>  - * commit merge changes
>  - Merge branch 'master' into 8342382-card-table-instead-of-dcq
>  - * fix merge error
>  - * forgot to actually save the files
>  - Merge branch 'master' into 8342382-card-table-instead-of-dcq
>  - ... and 64 more: https://git.openjdk.org/jdk/compare/9e3fa321...e7c3a067

src/hotspot/share/gc/g1/g1CardTableClaimTable.hpp line 43:

> 41: // Claiming works on full region (all cards in region) or a range of 
> contiguous cards
> 42: // (chunk). Chunk size is given at construction time.
> 43: class G1CardTableClaimTable : public CHeapObj<mtGC> {

Do we need the `Table` in `G1CardTableClaimTable`, or would just calling it 
`G1CardTableClaimer` suffice?

src/hotspot/share/gc/g1/g1ConcurrentRefine.hpp line 301:

> 299:   // Indicate that last refinement adjustment had been deferred due to 
> not
> 300:   // obtaining the heap lock.
> 301:   bool wait_for_heap_lock() const { return _heap_was_locked; }

`wait_for_heap_lock()` does not do any waiting; maybe just use 
`heap_was_locked` as the method name.

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/23739#discussion_r2336340738
PR Review Comment: https://git.openjdk.org/jdk/pull/23739#discussion_r2336332933
