[
https://issues.apache.org/jira/browse/SOLR-10141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15873361#comment-15873361
]
Ben Manes commented on SOLR-10141:
----------------------------------
That makes sense. If it's a fallback for when an empty slot can't be acquired, it
may be preferable to calling cleanUp() every time. But a stress test would be
necessary to verify that, since the spin time might be too short for the
fallback to help.
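A minimal sketch of the spin-then-fallback idea being discussed, assuming a bitmap slot allocator loosely modeled on the BlockCache; the class, names, and sizes here are hypothetical illustrations, not Solr's actual implementation:

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Hypothetical slot allocator: spin over the free-slot bitmap a bounded number
// of times, and only fall back to a cleanUp() hook when no slot is found.
public class SlotAllocator {
  private final AtomicLongArray bits; // each bit marks one slot as in use
  private final Runnable cleanUp;     // fallback eviction hook, e.g. cache.cleanUp()

  public SlotAllocator(int slots, Runnable cleanUp) {
    this.bits = new AtomicLongArray((slots + 63) / 64);
    this.cleanUp = cleanUp;
  }

  /** Try to claim a free slot; retry up to `spins` times, running cleanUp between attempts. */
  public int acquire(int spins) {
    for (int attempt = 0; attempt <= spins; attempt++) {
      for (int w = 0; w < bits.length(); w++) {
        long word = bits.get(w);
        if (word != -1L) { // at least one free bit in this word
          int bit = Long.numberOfTrailingZeros(~word);
          if (bits.compareAndSet(w, word, word | (1L << bit))) {
            return w * 64 + bit;
          }
        }
      }
      if (attempt == spins) break;
      cleanUp.run(); // force pending evictions so a retry may find a free slot
    }
    return -1; // no slot available; caller treats this as a failed insertion
  }

  /** Return a slot to the free pool. */
  public void release(int slot) {
    int w = slot / 64, bit = slot % 64;
    long word;
    do {
      word = bits.get(w);
    } while (!bits.compareAndSet(w, word, word & ~(1L << bit)));
  }
}
```

Whether the cleanUp fallback pays off depends on how often the spin alone succeeds, which is exactly what the stress test would need to measure.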
In most traces frequency dominates over recency, so most insertions are
pollutants. A failed insertion might not have had a negative impact, since a
popular item would still make its way in, while the failing one-hit wonders
wouldn't have disrupted the LRU as much. That's less meaningful with Caffeine,
since we switched to TinyLFU.
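The admission idea behind TinyLFU can be sketched as follows; this is a toy illustration only, using a plain HashMap of counters where the real policy uses a compact frequency sketch with periodic aging:

```java
import java.util.HashMap;
import java.util.Map;

// Toy TinyLFU-style admission filter: a candidate is admitted only if it is
// historically more popular than the entry it would evict, so one-hit
// wonders (the "pollutant" insertions) never displace a warm entry.
public class TinyLfuAdmission {
  private final Map<String, Integer> freq = new HashMap<>();

  /** Record one access; every access bumps the frequency estimate. */
  public void record(String key) {
    freq.merge(key, 1, Integer::sum);
  }

  /** Admit the candidate only if it beats the eviction victim's frequency. */
  public boolean admit(String candidate, String victim) {
    return freq.getOrDefault(candidate, 0) > freq.getOrDefault(victim, 0);
  }
}
```

Under this policy a rejected insertion is usually harmless, which matches the observation above: the popular items win the comparison eventually, and the one-hit wonders lose it.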
As an aside, I'd appreciate help in moving SOLR-8241 forward. It's been approved
but backlogged, as the committer has not had the time to actively participate in
Solr. But if that's crossing territories or you feel uncomfortable due to this
bug, I understand.
> Caffeine cache causes BlockCache corruption
> --------------------------------------------
>
> Key: SOLR-10141
> URL: https://issues.apache.org/jira/browse/SOLR-10141
> Project: Solr
> Issue Type: Bug
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Yonik Seeley
> Attachments: SOLR-10141.patch, Solr10141Test.java
>
>
> After fixing the race conditions in the BlockCache itself (SOLR-10121), the
> concurrency test passes with the previous implementation using
> ConcurrentLinkedHashMap and fails with Caffeine.
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)