ben-manes commented on issue #4544:
URL: https://github.com/apache/accumulo/issues/4544#issuecomment-2105956413

   @cshannon, I think you're right on all points. I'll give more depth just for 
fun.
   
   In regard to automatic refresh, the sync and async caches behave the same. The vast majority of the code is shared ([BoundedLocalCache.refreshIfNeeded](https://github.com/ben-manes/caffeine/blob/5a172296406a570a08b33c4992d72a6e80ba3d17/caffeine/src/main/java/com/github/benmanes/caffeine/cache/BoundedLocalCache.java#L1298-L1416))
 with thin adapters to the corresponding interfaces 
([LocalLoadingCache](https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/LocalLoadingCache.java),
 
[LocalAsyncLoadingCache](https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/LocalAsyncLoadingCache.java)).
 When a refresh is triggered, it calls [AsyncCacheLoader.asyncReload(k, v, 
executor)](https://github.com/ben-manes/caffeine/blob/5a172296406a570a08b33c4992d72a6e80ba3d17/caffeine/src/main/java/com/github/benmanes/caffeine/cache/AsyncCacheLoader.java#L93-L117),
 which [CacheLoader](https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/CacheLoader.java)
 overrides to wrap and call the synchronous [reload(k, 
v)](https://github.com/ben-manes/caffeine/blob/master/caffeine/src/main/java/com/github/benmanes/caffeine/cache/CacheLoader.java#L152-L207)
 for convenience. There is a little juggling if the cache is storing a future 
value, but that's all internal details.
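   
   As a minimal sketch of the wiring described above (the class name `RefreshExample` and the loader's return values are made up for illustration), a synchronous `CacheLoader` only needs to implement `load` and may optionally override `reload`; when `refreshAfterWrite` fires, the cache invokes the default `asyncReload`, which wraps and calls `reload(k, v)` on the configured executor:

```java
import com.github.benmanes.caffeine.cache.CacheLoader;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

import java.time.Duration;

public class RefreshExample {
  // Synchronous loader; Caffeine's default asyncReload(k, v, executor)
  // wraps this reload(k, v) and runs it on the cache's executor.
  static final CacheLoader<String, String> LOADER = new CacheLoader<>() {
    @Override public String load(String key) {
      return "value-for-" + key;        // initial miss
    }
    @Override public String reload(String key, String oldValue) {
      return oldValue + "-refreshed";   // invoked by refreshIfNeeded
    }
  };

  static LoadingCache<String, String> buildCache() {
    return Caffeine.newBuilder()
        .refreshAfterWrite(Duration.ofMinutes(1))
        .build(LOADER);
  }

  public static void main(String[] args) {
    System.out.println(buildCache().get("k")); // prints value-for-k
  }
}
```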
   
   `AsyncLoadingCache` is definitely useful for avoiding lock contention on `computeIfAbsent`, as it shifts blocking from ConcurrentHashMap's hashbin lock to the entry's future. That reduces the map's own operation time, but sacrifices linearizability, which is normally acceptable. Because the cache manages the pending value, another significant benefit is that it can offer a [smarter bulk load](https://github.com/ben-manes/caffeine/wiki/Faq#bulk-loads) that protects against cache stampedes, just as individual loads do. A future also lets the caller apply a timeout, or share the same error on failure rather than retrying the load anew. And of course, if the caller or loader is part of an async/reactive chain, it fits together naturally rather than blocking at an undesirable point in the code. That's quite nice when all calls can benefit from [coalescing](https://github.com/ben-manes/caffeine/tree/master/examples/coalescing-bulkloader-reactor), not just a refresh.
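   
   A small sketch of those benefits (the class name `AsyncExample` and the trivial `key.length()` loader are placeholders for a real fetch): `get` returns a future the caller can bound with a timeout, and concurrent callers for the same key share one pending future, while `getAll` reuses in-flight entries so only the missing keys are loaded:

```java
import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
import com.github.benmanes.caffeine.cache.Caffeine;

import java.util.Map;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class AsyncExample {
  static AsyncLoadingCache<String, Integer> buildCache() {
    return Caffeine.newBuilder()
        .maximumSize(10_000)
        // A synchronous loader function; loads run on the cache's executor
        // and concurrent callers for the same key share the pending future.
        .buildAsync(key -> key.length());
  }

  public static void main(String[] args) throws Exception {
    AsyncLoadingCache<String, Integer> cache = buildCache();

    // Individual load: the caller can bound the wait with its own timeout.
    CompletableFuture<Integer> future = cache.get("alpha");
    System.out.println(future.get(1, TimeUnit.SECONDS));

    // Bulk load: in-flight entries are reused, protecting against stampedes.
    CompletableFuture<Map<String, Integer>> all = cache.getAll(Set.of("alpha", "beta"));
    System.out.println(all.join());
  }
}
```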
   
   A negative aspect of all async calls is the need for an executor. The cache 
doesn't manage its own threads so it defaults to the shared JVM-wide 
`ForkJoinPool.commonPool()` (`CompletableFuture` does similarly in its 
`defaultExecutor()`). That thread pool is designed for cpu-intensive work and 
will quickly starve on I/O due to its limited thread count. The new virtual 
thread executor is optimized for I/O as it elegantly layers itself on top of 
FJP so the blocking waits are not performed on the native threads. That feature 
is still too immature for production use on the latest JDKs due to 
[footguns](https://mail.openjdk.org/pipermail/loom-dev/2024-February/006463.html),
 but those will be resolved in future releases. When appropriate, we'll switch the defaults; in the meantime, you may see a benefit from sharing a single application-wide `Executors.newCachedThreadPool()` (a `ThreadPoolExecutor`) for all of your async I/O and configuring it on Caffeine where appropriate.
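   
   Configuring that shared pool is a one-liner on the builder; this sketch assumes a made-up `slowFetch` standing in for a blocking I/O call, with a cached thread pool so I/O waits don't starve `ForkJoinPool.commonPool()`, whose parallelism matches the CPU count:

```java
import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
import com.github.benmanes.caffeine.cache.Caffeine;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorExample {
  static AsyncLoadingCache<String, String> buildCache(ExecutorService ioPool) {
    return Caffeine.newBuilder()
        // Used for loads, refreshes, and async housekeeping instead of
        // the default ForkJoinPool.commonPool().
        .executor(ioPool)
        .buildAsync(key -> slowFetch(key));
  }

  static String slowFetch(String key) {
    return "fetched-" + key; // stand-in for a blocking I/O call
  }

  public static void main(String[] args) {
    ExecutorService ioPool = Executors.newCachedThreadPool();
    System.out.println(buildCache(ioPool).get("k").join()); // prints fetched-k
    ioPool.shutdown();
  }
}
```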

