[
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15873357#comment-15873357
]
Yonik Seeley commented on SOLR-10141:
-------------------------------------
The size issue affects only the BlockCache specifically (not any other
Solr cache).
Actually, the way the BlockCache is written, we are guaranteed to never have
more than maxEntries... writers have to wait for an open slot (which opens up
once the removal listener is called). The writer spins a bit trying to find an
open slot and fails if it can't. Doing extra work via cache.cleanUp() if we
don't see an empty slot is definitely better than failing to cache the entry.
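The spin-for-a-slot behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Solr BlockCache code: the SlotAllocator class, tryClaim, and the cleanUp Runnable (standing in for something like Caffeine's cache.cleanUp()) are names invented for this sketch. The idea is just that a writer makes a bounded number of passes over a bitset of slots and, between passes, forces pending evictions so that freed slots become visible, failing the store only if no slot opens up.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hypothetical sketch of "spin for an open slot, run evictions between
// passes". Slot occupancy is tracked in an atomic bitset; findSlot()
// returns -1 only after cleanUp has been given a chance to free slots.
public class SlotAllocator {
    private final AtomicLongArray bits; // one bit per cache slot
    private final int slots;

    public SlotAllocator(int slots) {
        this.slots = slots;
        this.bits = new AtomicLongArray((slots + 63) / 64);
    }

    /** One pass over the bitset: claim any free slot, or return -1. */
    private int tryClaim() {
        for (int i = 0; i < slots; i++) {
            int word = i >>> 6;
            long mask = 1L << (i & 63);
            long cur = bits.get(word);
            if ((cur & mask) == 0
                    && bits.compareAndSet(word, cur, cur | mask)) {
                return i;
            }
        }
        return -1;
    }

    /**
     * Spin a bounded number of passes; between passes, run pending
     * evictions (analogous to cache.cleanUp()) so freed slots show up.
     */
    public int findSlot(Runnable cleanUp, int maxPasses) {
        for (int pass = 0; pass < maxPasses; pass++) {
            int slot = tryClaim();
            if (slot >= 0) return slot;
            cleanUp.run();
        }
        return -1; // caller records a store failure rather than corrupting state
    }

    /** Called from the removal listener when an entry is evicted. */
    public void release(int slot) {
        int word = slot >>> 6;
        long mask = 1L << (slot & 63);
        long cur;
        do {
            cur = bits.get(word);
        } while (!bits.compareAndSet(word, cur, cur & ~mask));
    }

    public static void main(String[] args) {
        SlotAllocator alloc = new SlotAllocator(2);
        int a = alloc.findSlot(() -> {}, 3);               // claims slot 0
        int b = alloc.findSlot(() -> {}, 3);               // claims slot 1
        int c = alloc.findSlot(() -> {}, 3);               // cache full: -1
        int d = alloc.findSlot(() -> alloc.release(0), 3); // eviction frees 0
        System.out.println(a + " " + b + " " + c + " " + d); // prints "0 1 -1 0"
    }
}
```

With a no-op cleanUp the third call fails (all slots taken), while the fourth succeeds because the "eviction" between passes frees a slot, which is the trade-off the comment describes: a little extra work in cleanUp beats failing to cache the entry.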
I imagine the issue existed when CLHM was used as well. Store failures
aren't currently tracked as a metric, and a failure only leads to a lower
cache hit rate.
I plan on starting to track it, and then to see how often it happens when we're
actually caching real HDFS blocks. That's a separate issue though.
> Caffeine cache causes BlockCache corruption
> --------------------------------------------
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the
> concurrency test passes with the previous implementation using
> ConcurrentLinkedHashMap and fails with Caffeine.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)