[ 
https://issues.apache.org/jira/browse/SOLR-9284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15447558#comment-15447558
 ] 

Michael Sun commented on SOLR-9284:
-----------------------------------

bq. Why does the underlying data in the cache need to be removed? The 
underlying cache locations should simply be reclaimed by the LRU cache 
replacement policy.

Ah, I see your question. I agree that inaccessible data can be removed by the 
LRU logic of the block cache eventually. The main gain of my suggestion is to 
improve cache efficiency. For example, if the cached data for the removed name 
was cached only recently, the LRU cache may decide to push out some older 
cached data that is still useful instead of pushing out those newer, now 
unreachable entries.

And releasing unused memory early is in general a good practice.
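To make the idea concrete, here is a rough sketch of proactively releasing a 
name's blocks when the name is removed (hypothetical class and method names, 
not the actual BlockDirectoryCache code):

{code:java}
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch only: an LRU block cache plus a per-name index, so that deleting a
 * name can release its blocks immediately instead of waiting for LRU eviction
 * to stumble over them.
 */
public class NameAwareBlockCache {

  /** LRU over "name#blockId" keys, capped at a fixed number of blocks. */
  private final Map<String, byte[]> lru;

  /** name -> keys of blocks currently cached for that name. */
  private final Map<String, Set<String>> blocksByName = new ConcurrentHashMap<>();

  public NameAwareBlockCache(int maxBlocks) {
    this.lru = Collections.synchronizedMap(
        new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
          @Override
          protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
            return size() > maxBlocks; // plain LRU eviction
          }
        });
  }

  public void put(String name, long blockId, byte[] data) {
    String key = name + "#" + blockId;
    lru.put(key, data);
    blocksByName.computeIfAbsent(name, n -> ConcurrentHashMap.newKeySet()).add(key);
  }

  public byte[] get(String name, long blockId) {
    return lru.get(name + "#" + blockId);
  }

  /**
   * Called when a name is deleted: drop its blocks right away so the LRU does
   * not later evict still-useful blocks of other names to make room.
   * (For brevity, the per-name index is not pruned when the LRU itself evicts
   * an entry.)
   */
  public void releaseName(String name) {
    Set<String> keys = blocksByName.remove(name);
    if (keys != null) {
      for (String key : keys) {
        lru.remove(key);
      }
    }
  }
}
{code}

With an index like that, the delete path removes exactly the blocks that 
became unreachable and leaves the LRU order of everything else untouched.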

With that said, the name map implementation in the patch is better than the 
current implementation (using ConcurrentHashMap). I was hoping to make maximal 
use of memory by removing related items as soon as a name is deleted, but if 
that's hard to achieve, the current patch is good to go IMO.
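For illustration only (just a sketch of a size-bounded map, not necessarily 
what the patch does), the names map could be capped with a cache library such 
as Caffeine instead of an unbounded ConcurrentHashMap:

{code:java}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class BoundedNameMap {

  // Hypothetical cap on the number of names tracked; names evicted here just
  // lose their fast-path mapping instead of leaking map entries forever.
  private final Cache<String, Integer> names = Caffeine.newBuilder()
      .maximumSize(50_000)
      .build();

  public void register(String name, int cacheId) {
    names.put(name, cacheId);
  }

  public Integer lookup(String name) {
    return names.getIfPresent(name);
  }

  public void remove(String name) {
    names.invalidate(name);
  }
}
{code}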

 

> The HDFS BlockDirectoryCache should not let its keysToRelease or names maps 
> grow indefinitely.
> -----------------------------------------------------------------------------------------------
>
>                 Key: SOLR-9284
>                 URL: https://issues.apache.org/jira/browse/SOLR-9284
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: hdfs
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>             Fix For: 6.2, master (7.0)
>
>         Attachments: SOLR-9284.patch, SOLR-9284.patch
>
>
> https://issues.apache.org/jira/browse/SOLR-9284


