Jakub Zytka created SOLR-14498:
----------------------------------

             Summary: BlockCache gets stuck not accepting new stores
                 Key: SOLR-14498
                 URL: https://issues.apache.org/jira/browse/SOLR-14498
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
          Components: query
    Affects Versions: 8.5.1, 7.7.3, 6.6.5, 6.5
            Reporter: Jakub Zytka


{{BlockCache}} uses two components: the "storage", i.e. the {{banks}}, and an "eviction mechanism", i.e. the {{cache}}, implemented by a Caffeine cache.
The relation between them is that the "storage" enforces a strict limit on the number of entries ({{numberOfBlocksPerBank * numberOfBanks}}), whereas the "eviction mechanism" takes care of freeing entries from the storage, thanks to the {{maximumSize}} of the Caffeine cache being set to {{numberOfBlocksPerBank * numberOfBanks - 1}}.

The storage relies on the Caffeine cache to eventually free at least one entry from the storage. If that doesn't happen, the {{BlockCache}} starts to fail all new stores.
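To illustrate the contract described above, here is a minimal sketch (not Solr's actual code; class and method names are made up for illustration) of a bounded store whose {{store}} fails once every slot is taken, until the eviction side releases a slot:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model of the "storage" side: a fixed pool of slots
// (capacity = numberOfBlocksPerBank * numberOfBanks). New stores fail
// once the pool is exhausted; only the eviction side can refill it.
public class BankStoreSketch {
    private final Deque<Integer> freeSlots = new ArrayDeque<>();

    public BankStoreSketch(int capacity) {
        for (int slot = 0; slot < capacity; slot++) {
            freeSlots.push(slot);
        }
    }

    /** Returns false when no slot is free, analogous to a failed BlockCache store. */
    public boolean store(byte[] block) {
        Integer slot = freeSlots.poll();
        return slot != null; // real code would copy the block into the bank at this slot
    }

    /** Called from the eviction side (e.g. a removal listener) to release a slot. */
    public void release(int slot) {
        freeSlots.push(slot);
    }

    public static void main(String[] args) {
        BankStoreSketch store = new BankStoreSketch(2);
        System.out.println(store.store(new byte[0])); // true
        System.out.println(store.store(new byte[0])); // true
        System.out.println(store.store(new byte[0])); // false: full, eviction never ran
        store.release(0);                             // eviction frees a slot
        System.out.println(store.store(new byte[0])); // true again
    }
}
```

If the eviction side never calls {{release}}, every subsequent store fails, which is exactly the stuck state this issue describes.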

As it turns out, the Caffeine cache may not reduce its size to the configured {{maximumSize}} for as long as no {{put}}, and no {{getIfPresent}} that *finds an entry*, is executed.

With a sufficiently unlucky read pattern, the block cache may be rendered useless (0 hit ratio): the cache stays poisoned by non-reusable entries, while new, reusable entries are not stored and thus never reused.
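The failure mode can be modelled with a toy cache (this is not Caffeine's real implementation; the deferred-maintenance behaviour is simulated here) in which eviction work is only drained by a {{put}} or by a {{getIfPresent}} hit, so a stream of misses never shrinks the cache below {{maximumSize}}:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model: eviction is deferred, and the pending work is only drained
// by operations that "do work" -- a put, or a getIfPresent that hits.
// A stream of misses leaves the cache stuck above maximumSize.
public class DeferredEvictionSketch {
    private final int maximumSize;
    private final Map<String, String> map = new LinkedHashMap<>();
    private boolean evictionPending = false;

    public DeferredEvictionSketch(int maximumSize) {
        this.maximumSize = maximumSize;
    }

    public void put(String key, String value) {
        maybeDrain();
        map.put(key, value); // may push size past maximumSize...
        evictionPending = map.size() > maximumSize; // ...leaving eviction for later
    }

    public String getIfPresent(String key) {
        String value = map.get(key);
        if (value != null) {
            maybeDrain(); // only a *hit* triggers the deferred maintenance
        }
        return value; // a miss leaves the pending eviction work queued
    }

    private void maybeDrain() {
        if (!evictionPending) return;
        Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
        while (map.size() > maximumSize && it.hasNext()) {
            it.next();
            it.remove(); // evict oldest (insertion-order) entries down to maximumSize
        }
        evictionPending = false;
    }

    public int size() {
        return map.size();
    }
}
```

In the {{BlockCache}} scenario, the store side stops calling {{put}} precisely because the cache is full, and the poisoned entries guarantee that every {{getIfPresent}} misses, so neither trigger ever fires.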

Further info may be found in [https://github.com/ben-manes/caffeine/issues/420]


A change in Caffeine that triggers its internal cleanup mechanism regardless of 
whether {{getIfPresent}} gets a hit has been implemented in 
[https://github.com/ben-manes/caffeine/commit/7239bb0dda2af1e7301e8f66a5df28215b5173bc]
and is due to be released in Caffeine 2.8.4.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
