[ 
https://issues.apache.org/jira/browse/SOLR-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14488292#comment-14488292
 ] 

Yonik Seeley commented on SOLR-7372:
------------------------------------

We only need to throw an exception for items being put into the cache that are 
not Accountable, right?
Old items being removed (or overwritten) must already be Accountable, since 
they were added through the same path.
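A minimal sketch of what that put-time check could look like (this is not Solr's actual code; "Accountable" below is a local stand-in for Lucene's org.apache.lucene.util.Accountable, and maxRamBytes is a hypothetical field on the cache):

```java
import java.util.HashMap;
import java.util.Map;

public class AccountableCheckSketch {
    // Stand-in for org.apache.lucene.util.Accountable.
    interface Accountable { long ramBytesUsed(); }

    private final long maxRamBytes;
    private final Map<Object, Object> map = new HashMap<>();

    AccountableCheckSketch(long maxRamBytes) { this.maxRamBytes = maxRamBytes; }

    public Object put(Object key, Object value) {
        // Only values entering the cache need the check; anything being
        // evicted or overwritten already passed it on its own insert.
        if (maxRamBytes != Long.MAX_VALUE && !(value instanceof Accountable)) {
            throw new IllegalArgumentException(
                "Cache values must implement Accountable when maxRamBytes is set");
        }
        return map.put(key, value);
    }
}
```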

Also, it's going to be relatively easy to blow this out of the water on 
purpose, or even by accident:

1) Do facet.method=enum on a high-cardinality field like "ID", putting a 
million small items in the cache.
2) Start searching normally: the cache size will stay at a million entries 
regardless of the size of the items we put in, since removeEldestEntry is only 
called once for each put.
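A toy sketch of that failure mode (assumed sizes and field names, not Solr's implementation): removeEldestEntry runs once per put and can evict at most one entry, so after the cache fills with tiny items, later large puts each displace only one tiny item, and the RAM footprint blows past the budget while the entry count barely moves.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RamBoundedCacheSketch {
    static class RamLruMap extends LinkedHashMap<String, long[]> {
        long ramBytes = 0;
        final long maxRamBytes;

        RamLruMap(long maxRamBytes) {
            super(16, 0.75f, true); // access-order, for LRU behavior
            this.maxRamBytes = maxRamBytes;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<String, long[]> eldest) {
            // Invoked exactly once per put; returning true evicts only
            // this single eldest entry, even if we are far over budget.
            if (ramBytes > maxRamBytes) {
                ramBytes -= sizeOf(eldest.getValue());
                return true;
            }
            return false;
        }

        // Crude stand-in for Accountable.ramBytesUsed(): header + longs.
        static long sizeOf(long[] v) { return 16 + 8L * v.length; }

        void putSized(String key, long[] value) {
            ramBytes += sizeOf(value);
            put(key, value);
        }
    }

    static RamLruMap demo() {
        RamLruMap cache = new RamLruMap(100_000);
        // Step 1: a thousand tiny entries (24 bytes each), well under budget.
        for (int i = 0; i < 1_000; i++) cache.putSized("small" + i, new long[1]);
        // Step 2: large entries (~8 KB each). Each put can evict at most
        // one tiny entry, so ramBytes climbs far past maxRamBytes while
        // the entry count stays near its peak.
        for (int i = 0; i < 100; i++) cache.putSized("big" + i, new long[1_000]);
        return cache;
    }
}
```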

After a quick browse of LinkedHashMap, I didn't see an obvious easy/fast way to 
remove the oldest entry, so I'm not sure how to fix this.
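One possible workaround, sketched under the assumption that the cache does its own RAM accounting outside of removeEldestEntry: a LinkedHashMap's entrySet iterates starting from the eldest entry, and Iterator.remove is O(1), so an eviction loop can drain entries until the budget is met (sizeOf here is a stand-in for per-entry accounting).

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.ToLongFunction;

public class EvictSketch {
    // Evict eldest entries until ramBytes fits maxRamBytes; returns the
    // updated ramBytes. The entrySet iterator yields entries eldest-first.
    static <K, V> long evictToBudget(LinkedHashMap<K, V> map, long ramBytes,
                                     long maxRamBytes, ToLongFunction<V> sizeOf) {
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (ramBytes > maxRamBytes && it.hasNext()) {
            Map.Entry<K, V> eldest = it.next();
            ramBytes -= sizeOf.applyAsLong(eldest.getValue());
            it.remove(); // O(1) removal of the eldest entry
        }
        return ramBytes;
    }
}
```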

For the calculation of the amount of RAM taken up... perhaps we should estimate 
the minimum that a key plus its internal map node would take up?
For the query cache in particular, it's going to be common for query keys to 
take up more memory than the actual DocSlice.
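A rough sketch of such a minimum estimate. The constants below are assumptions for a 64-bit JVM with compressed oops, not measurements: a LinkedHashMap.Entry carries a 12-byte object header, a 4-byte int hash, and 4-byte references for key, value, next, before, and after, i.e. 36 bytes aligned up to 40.

```java
public class EntryOverheadSketch {
    // Assumed fixed per-node overhead of a LinkedHashMap.Entry on a
    // 64-bit JVM with compressed oops (estimate, not a measurement).
    static final long ENTRY_OVERHEAD_BYTES = 40;

    // Conservative lower bound on the RAM one cache entry pins down:
    // node overhead plus the key's and value's own footprints.
    static long estimatedEntryRam(long keyRamBytes, long valueRamBytes) {
        return ENTRY_OVERHEAD_BYTES + keyRamBytes + valueRamBytes;
    }
}
```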


> Limit LRUCache by RAM usage
> ---------------------------
>
>                 Key: SOLR-7372
>                 URL: https://issues.apache.org/jira/browse/SOLR-7372
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Shalin Shekhar Mangar
>            Assignee: Shalin Shekhar Mangar
>             Fix For: Trunk, 5.2
>
>         Attachments: SOLR-7372.patch
>
>
> Now that SOLR-7371 has made DocSet impls Accountable, we should add an option 
> to LRUCache to limit itself by RAM.
> I propose to add a 'maxRamBytes' configuration parameter which it can use to 
> evict items once the total RAM usage of the cache reaches this limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
