On Tue, 28 Oct 2025 17:29:15 GMT, Vladimir Kozlov <[email protected]> wrote:

>> I was sort of thinking that there might be more AOT cache work that could 
>> benefit from concurrency, other than object loading. That's why I named it a 
>> bit more generically. Does that make sense? Otherwise, I'm open to renaming 
>> it.
>
> There is another AOT thread, `TrainingReplayThread` (also a JavaThread), which 
> has a loop to process AOT dependencies for classes that were initialized.
> And we have two AOT compiler threads (for C1 and C2) to load AOT code.
> As you can see, we have specialized threads for AOT work.
> 
> Maybe I can use this AOT thread for AOT code preloading 
> (AOTCodeCache::preload_code()), which is currently done on the main thread.

Yes, I am hoping so!
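
Roughly the shape I am hoping for, as a standalone sketch only (placeholder function names and std::thread instead of the real HotSpot thread machinery): the dedicated loader thread takes over the preload step that currently runs on the main thread and then continues with the object materialization work.

```c++
// Standalone illustration only, not HotSpot code: the placeholder functions
// stand in for AOTCodeCache::preload_code() and streamed-object materialization.
#include <cstdio>
#include <thread>

static void preload_aot_code() {              // stand-in for AOTCodeCache::preload_code()
  std::puts("preloading AOT code...");
}

static void materialize_streamed_objects() {  // stand-in for concurrent object loading
  std::puts("materializing archived objects...");
}

static void aot_loader_thread_entry() {
  preload_aot_code();                // work that currently runs on the main thread
  materialize_streamed_objects();    // existing concurrent materialization
}

int main() {
  std::thread aot_loader(aot_loader_thread_entry);  // dedicated AOT loader thread
  std::puts("main thread continues startup concurrently");
  aot_loader.join();
  return 0;
}
```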

>> Both flags are diagnostic flags, as I don't think ordinary users have any 
>> business fiddling with them. The AOTStreamableObjects flag controls whether 
>> the dump uses the GC-agnostic streaming format. If it is not set explicitly, 
>> we use streaming when compressed oops are off.
>> 
>> As for AOTEagerlyLoadObjects, enabling it when loading a streamable object 
>> archive means all objects are materialized up front instead of via 
>> concurrent materialization. It is enabled automatically if the environment 
>> has at most one core available; at that point concurrency won't help much, 
>> and just getting it over with is a good idea.
>> 
>> But yeah, as a user, you shouldn't really have to fiddle with these at all. 
>> They are there for benchmarking the differences.
>
> Got it. Thank you.
> Another question: does it mean that if streamable objects are in the AOT 
> cache, we can't use mmap for them?
> So which approach is used, streaming or mmap, is decided entirely during the 
> assembly phase. Right?

Yes, that's right.
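
To make that concrete, here is a standalone sketch only (placeholder logic and names, not the real HotSpot flag handling) of how the two decisions relate: the streaming-vs-mmap choice is baked into the archive at assembly time, and only a streamed archive then chooses between eager and concurrent materialization at load time.

```c++
// Standalone illustration only: placeholder logic, not the actual flag handling.
#include <cstdio>
#include <thread>

// Assembly phase: pick the heap-object format once and record it in the archive.
// Mirrors "not setting AOTStreamableObjects implies streaming when compressed
// oops are off"; the real flag processing is more involved.
static bool choose_streaming_format(bool flag_set, bool flag_value,
                                    bool compressed_oops) {
  if (flag_set) return flag_value;   // explicit -XX:+/-AOTStreamableObjects wins
  return !compressed_oops;           // default: stream when compressed oops are off
}

// Load phase (streamed archive only): eager materialization when requested, or
// when there is effectively no spare core for concurrent work.
static bool use_eager_loading(bool eagerly_load_flag) {
  unsigned cores = std::thread::hardware_concurrency();  // 0 means "unknown"
  return eagerly_load_flag || cores <= 1;
}

int main() {
  bool streaming = choose_streaming_format(/*flag_set=*/false,
                                           /*flag_value=*/false,
                                           /*compressed_oops=*/false);
  if (!streaming) {
    std::puts("dump: mmap-able heap region; load: map it directly");
  } else if (use_eager_loading(/*eagerly_load_flag=*/false)) {
    std::puts("dump: streamed objects; load: materialize everything up front");
  } else {
    std::puts("dump: streamed objects; load: materialize concurrently");
  }
  return 0;
}
```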

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/27732#discussion_r2470773277
PR Review Comment: https://git.openjdk.org/jdk/pull/27732#discussion_r2470769763