[
https://issues.apache.org/jira/browse/SOLR-1308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12735679#action_12735679
]
Jason Rutherglen commented on SOLR-1308:
----------------------------------------
Perhaps in another issue we can implement a cache that is
RAM-usage aware: implement sizeof(bitset) and keep the total
cache size below a predefined limit?
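
A rough sketch of what that could look like (the class name, the raw long[] bitset representation, and the 16-byte overhead constant are illustrative, not existing Solr/Lucene code):

{code:java}
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative only: a docset cache bounded by estimated RAM rather than entry count. */
public class RamBoundedDocSetCache<K> {

  private final long maxRamBytes;
  private long ramBytesUsed = 0;

  // Access-ordered map: iteration starts at the least recently used entry.
  private final LinkedHashMap<K, long[]> cache = new LinkedHashMap<K, long[]>(16, 0.75f, true);

  public RamBoundedDocSetCache(long maxRamBytes) {
    this.maxRamBytes = maxRamBytes;
  }

  // Rough sizeof(bitset): 8 bytes per 64 documents plus a little object overhead.
  private static long sizeOf(long[] bits) {
    return 16L + 8L * bits.length;
  }

  public synchronized void put(K key, long[] bits) {
    long[] old = cache.put(key, bits);
    if (old != null) {
      ramBytesUsed -= sizeOf(old);
    }
    ramBytesUsed += sizeOf(bits);
    // Evict least recently used entries until we are back under the predefined limit.
    Iterator<Map.Entry<K, long[]>> it = cache.entrySet().iterator();
    while (ramBytesUsed > maxRamBytes && it.hasNext()) {
      Map.Entry<K, long[]> eldest = it.next();
      if (eldest.getValue() == bits) {
        break; // only the entry we just added is left
      }
      ramBytesUsed -= sizeOf(eldest.getValue());
      it.remove();
    }
  }

  public synchronized long[] get(K key) {
    return cache.get(key);
  }
}
{code}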
Do we need a cache per reader, or can the cache key include the
reader? If segments are created rapidly, we may not want the
overhead of creating a new cache and managing its size,
especially if we move to a RAM usage model.
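
To illustrate the second option: instead of one cache object per SegmentReader, fold the reader into the key and share a single cache across all segments. A sketch, assuming the Lucene 2.9 getFieldCacheKey() method as the per-segment identity (the class name is made up):

{code:java}
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;

/** Illustrative only: key a single shared cache by (segment, query). */
public final class SegmentCacheKey {

  private final Object readerKey;
  private final Query query;

  public SegmentCacheKey(IndexReader segmentReader, Query query) {
    // Assumption: getFieldCacheKey() identifies the underlying segment core, so
    // entries keep matching across reopens of unchanged segments. If that isn't
    // suitable, the SegmentReader instance itself could be used instead.
    this.readerKey = segmentReader.getFieldCacheKey();
    this.query = query;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof SegmentCacheKey)) {
      return false;
    }
    SegmentCacheKey other = (SegmentCacheKey) o;
    return readerKey == other.readerKey && query.equals(other.query);
  }

  @Override
  public int hashCode() {
    return 31 * System.identityHashCode(readerKey) + query.hashCode();
  }
}
{code}

With a key like this, entries for segments that disappear after a merge simply age out of the shared cache, so there is no per-segment cache to create and size on every commit.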
> Cache docsets and docs at the SegmentReader level
> -------------------------------------------------
>
> Key: SOLR-1308
> URL: https://issues.apache.org/jira/browse/SOLR-1308
> Project: Solr
> Issue Type: Improvement
> Affects Versions: 1.4
> Reporter: Jason Rutherglen
> Priority: Minor
> Fix For: 1.5
>
> Original Estimate: 504h
> Remaining Estimate: 504h
>
> Solr caches docsets and documents at the top-level Multi*Reader.
> After a commit, the caches are flushed. Reloading the
> caches in near realtime (i.e. commits every 1s - 2min)
> unnecessarily consumes IO resources, especially for largish
> indexes.
> We can cache docsets and documents at the SegmentReader level.
> The cache settings in SolrConfig can be applied to the
> individual SR caches.
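
To sketch the per-segment flow in the description, reusing the two sketches above (computeDocSet() and the 64MB budget are placeholders, not real Solr code):

{code:java}
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.OpenBitSet;

/**
 * Illustrative only: assemble a top-level docset from per-segment cached bitsets.
 * After a commit, unchanged segments hit the cache; only new segments do real work.
 */
public class PerSegmentDocSets {

  private final RamBoundedDocSetCache<SegmentCacheKey> cache =
      new RamBoundedDocSetCache<SegmentCacheKey>(64 * 1024 * 1024); // e.g. 64MB budget

  public OpenBitSet getDocSet(IndexReader topReader, Query query) throws IOException {
    OpenBitSet result = new OpenBitSet(topReader.maxDoc());
    int docBase = 0;
    for (IndexReader segment : topReader.getSequentialSubReaders()) {
      SegmentCacheKey key = new SegmentCacheKey(segment, query);
      long[] bits = cache.get(key);
      if (bits == null) {
        bits = computeDocSet(segment, query); // placeholder: run the filter on one segment
        cache.put(key, bits);
      }
      // Shift the per-segment bitset into the top-level doc id space.
      for (int doc = 0; doc < segment.maxDoc(); doc++) {
        if ((bits[doc >> 6] & (1L << (doc & 63))) != 0) {
          result.fastSet(docBase + doc);
        }
      }
      docBase += segment.maxDoc();
    }
    return result;
  }

  // Placeholder for whatever actually materializes a per-segment docset.
  private long[] computeDocSet(IndexReader segment, Query query) throws IOException {
    return new long[(segment.maxDoc() + 63) >> 6];
  }
}
{code}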