On Mon, 2 Feb 2026 13:51:00 GMT, Artur Barashev <[email protected]> wrote:

> Yes, basically I see 2 options:
> 
> 1. If the intent is to implement a true `intern` method (same as 
> `String.intern()`), where `s.intern() == t.intern()` is true if and only if 
> `s.equals(t)` is true, then I would use `ConcurrentHashMap`, which is a hard 
> cache with fine-grained per-bucket locking. In particular, the 
> `computeIfAbsent` method performs the get/put operation atomically.
> 2. If the intent is to return objects that can be compared with the `equals()` 
> method rather than the `==` operator, then we can keep the current soft cache 
> implementation and remove all the synchronization. I've done some quick 
> research and couldn't find a single place in the code where we compare 
> certificates or CRLs by reference (with the `==` operator), so I suspect that 
> was the original intent.

Thanks for sharing both options.

Option 1: I agree, but I wouldn’t lean toward this approach right now since it 
would be a policy change. Using a hard cache here would need a much deeper look 
before we take that route, and the existing soft cache looks like a deliberate 
choice to keep interning best-effort rather than permanent.
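
For reference, a minimal sketch of what the option 1 shape could look like, assuming a hard cache keyed by the encoded bytes; the class, the `Key` record, and the `intern` method below are illustrative placeholders, not existing JDK code:

```java
import java.io.ByteArrayInputStream;
import java.security.cert.CertificateException;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of option 1: a hard cache where computeIfAbsent performs the
// lookup-or-parse step atomically per key, so callers presenting equal
// encoded bytes always get back the same instance (intern semantics).
final class HardCertCacheSketch {

    // Key wrapper so byte[] contents, not array identity, drive equals/hashCode.
    private record Key(byte[] encoded) {
        @Override
        public boolean equals(Object o) {
            return o instanceof Key k && Arrays.equals(encoded, k.encoded);
        }
        @Override
        public int hashCode() {
            return Arrays.hashCode(encoded);
        }
    }

    private final ConcurrentHashMap<Key, X509Certificate> cache = new ConcurrentHashMap<>();

    X509Certificate intern(byte[] encoded) throws CertificateException {
        try {
            // computeIfAbsent parses at most once per distinct key and
            // publishes a single instance to all callers.
            return cache.computeIfAbsent(new Key(encoded.clone()), k -> {
                try {
                    CertificateFactory cf = CertificateFactory.getInstance("X.509");
                    return (X509Certificate) cf.generateCertificate(
                            new ByteArrayInputStream(k.encoded()));
                } catch (CertificateException e) {
                    throw new IllegalStateException(e);
                }
            });
        } catch (IllegalStateException e) {
            throw new CertificateException(e.getCause());
        }
    }
}
```

Note that the map holds strong references, which is exactly the hard-cache policy aspect that would need the deeper look before taking that route.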

Option 2: If identity stability is not a goal and semantic equivalence 
(`equals()`) is sufficient, then we could simplify further: keep the soft cache 
and remove the extra synchronization, accepting that under concurrency two 
threads may each build and insert distinct instances for the same cert/CRL. 
Even if no callers rely on reference equality (`==`), this still implies 
duplicate parsing/allocation under contention.

Just to make sure I understand your preference correctly before I remove the 
synchronization: are you suggesting that, for X509Factory, avoiding duplicate 
parsing/allocation under concurrency is not a goal, and that occasional 
duplicate construction is acceptable as long as the returned objects are 
semantically equivalent, even if another thread has already inserted an 
equivalent entry?
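
To make that trade-off concrete, here is a rough sketch of the option 2 shape with the synchronization removed; the `Map`-based stand-in and the `getOrBuild` helper are illustrative only, not the actual X509Factory code:

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of option 2: the soft cache is consulted and updated without any
// locking. Two threads can miss at the same time, both run build(), and
// both insert; callers still get equals()-equivalent objects, but the
// duplicate parsing/allocation is not prevented.
final class UnsynchronizedLookupSketch {
    static <K, V> V getOrBuild(Map<K, V> cache, K key, Function<K, V> build) {
        V cached = cache.get(key);
        if (cached != null) {
            return cached;          // fast path: another thread already cached it
        }
        V built = build.apply(key); // may run concurrently in several threads
        cache.put(key, built);      // a later put overwrites an earlier one
        return built;               // each thread keeps the instance it built
    }
}
```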

With the current implementation, my intent is a middle ground: it keeps the 
soft-cache policy unchanged, narrows the lock scope compared to the old 
synchronized methods, and preserves best-effort convergence at the time of the 
call by making the check-and-insert step atomic. Parsing/building remains 
outside the lock, so the expensive work is not serialized, and this can still 
help reduce duplicate allocation under concurrency.
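
A condensed sketch of that middle ground (names are illustrative, and the plain `Map` stands in for the existing soft cache; this is not the actual patch):

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of the middle ground described above: parsing stays outside the
// lock, and only the "check again and insert" step is atomic, so concurrent
// callers converge on one cached instance at insert time without
// serializing the expensive build work.
final class NarrowLockLookupSketch {
    private final Object lock = new Object();

    <K, V> V getOrBuild(Map<K, V> cache, K key, Function<K, V> build) {
        V cached;
        synchronized (lock) {
            cached = cache.get(key);    // short critical section for the lookup
        }
        if (cached != null) {
            return cached;
        }
        V built = build.apply(key);     // expensive parse/build, outside the lock
        synchronized (lock) {
            cached = cache.get(key);    // re-check: another thread may have won
            if (cached != null) {
                return cached;          // converge on the already-cached instance
            }
            cache.put(key, built);
            return built;
        }
    }
}
```

Duplicate parsing can still happen under contention, but the returned instances converge on whatever is in the cache at insert time.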

Let me know.

-------------

PR Comment: https://git.openjdk.org/jdk/pull/29181#issuecomment-3839428354
