[
https://issues.apache.org/jira/browse/HBASE-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714801#action_12714801
]
Jonathan Gray commented on HBASE-1460:
--------------------------------------
Ah, forgot one important thing: the BlockCache interface needs to change
somehow so that when blocks are cached, the LRU knows whether they come
from an in-memory store.
Also, there is a race condition when two threads both want to read the same
uncached block. Handling that properly adds enormous complexity... I'd prefer
to rely on the HDFS or HFile code being relatively quick on duplicated
fetches; OS caching should also help with HDFS reads. The attached LRU cache
will swap in the new ByteBuffer if you re-cache an already cached block, but
it assumes an equivalent size (it does not modify heapSize). This can behave
however we want; up for discussion.
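One way the interface change could look is a flag on the cache call. This is a minimal sketch, not the actual HBASE-1460 API: the method names, the `inMemory` flag, and the `SimpleBlockCache` helper are all illustrative assumptions.

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: cacheBlock gains an inMemory flag so the LRU
// can tell which blocks belong to an in-memory store and treat them
// preferentially at eviction time. Names are illustrative only.
interface BlockCache {
    void cacheBlock(String blockName, ByteBuffer buf, boolean inMemory);
    ByteBuffer getBlock(String blockName);
}

// Trivial map-backed implementation, just to show the flag flowing through.
class SimpleBlockCache implements BlockCache {
    private final Map<String, ByteBuffer> blocks = new ConcurrentHashMap<>();
    private final Map<String, Boolean> inMemoryFlags = new ConcurrentHashMap<>();

    public void cacheBlock(String blockName, ByteBuffer buf, boolean inMemory) {
        blocks.put(blockName, buf);
        inMemoryFlags.put(blockName, inMemory);
    }

    public ByteBuffer getBlock(String blockName) {
        return blocks.get(blockName);
    }

    boolean isInMemory(String blockName) {
        return Boolean.TRUE.equals(inMemoryFlags.get(blockName));
    }
}
```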
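The "tolerate the race, keep duplicated fetches cheap" approach can be sketched with ConcurrentHashMap's atomic operations. This is an illustration of the idea, not the attached patch; the class and method names are hypothetical.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch of the race described above: two threads may both miss on the
// same block and both fetch it; putIfAbsent guarantees only one buffer
// wins, and the loser's copy is simply discarded. No per-block locking.
class RaceTolerantCache {
    private final ConcurrentHashMap<String, ByteBuffer> map = new ConcurrentHashMap<>();

    ByteBuffer getOrFetch(String blockName, Supplier<ByteBuffer> fetch) {
        ByteBuffer cached = map.get(blockName);
        if (cached != null) {
            return cached;
        }
        ByteBuffer fetched = fetch.get();                 // racing threads may duplicate this work
        ByteBuffer prior = map.putIfAbsent(blockName, fetched);
        return prior != null ? prior : fetched;           // first writer wins
    }

    // Re-caching an already cached block swaps in the new ByteBuffer,
    // assuming equivalent size (heapSize accounting is untouched).
    void recache(String blockName, ByteBuffer buf) {
        map.put(blockName, buf);
    }
}
```

The cost of the race is one wasted fetch, which is acceptable if the HDFS/HFile read path plus OS caching keeps duplicated fetches fast.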
> Concurrent LRU Block Cache
> --------------------------
>
> Key: HBASE-1460
> URL: https://issues.apache.org/jira/browse/HBASE-1460
> Project: Hadoop HBase
> Issue Type: Improvement
> Components: io
> Reporter: Jonathan Gray
> Assignee: Jonathan Gray
> Fix For: 0.20.0
>
> Attachments: HBASE-1460-v1.patch
>
>
> The LRU-based block cache that will be committed in HBASE-1192 is thread-safe
> but contains a big lock on the hash map. Under high load, the block cache
> will be hit very heavily from a number of threads, so it needs to be built to
> handle massive concurrency.
> This issue aims to implement a new block cache with LRU eviction, backed
> by a ConcurrentHashMap and a separate eviction thread. Influence will be
> drawn from Solr's ConcurrentLRUCache; however, there are major differences,
> because Solr treats all cached elements as equal in size, whereas we depend
> on our HeapSize interface for realistic (though approximate) heap usage.
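The design described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the attached patch: lock-free reads via ConcurrentHashMap, per-entry access counters for LRU ordering, size accounting via a HeapSize-style method (stood in for here by the buffer capacity), and an eviction pass that the real design would run on a separate thread.

```java
import java.nio.ByteBuffer;
import java.util.Comparator;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the described design (names hypothetical). A real
// implementation would run evict() on a dedicated eviction thread and
// use the HeapSize interface for accurate per-entry sizing.
class ConcurrentLruSketch {
    // Monotonic logical clock for deterministic LRU ordering.
    private static final AtomicLong CLOCK = new AtomicLong();

    static final class Entry {
        final ByteBuffer buf;
        volatile long lastAccess = CLOCK.incrementAndGet();
        Entry(ByteBuffer buf) { this.buf = buf; }
        long heapSize() { return buf.capacity(); }   // stand-in for HeapSize
    }

    private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();
    private final AtomicLong totalSize = new AtomicLong();
    private final long maxSize;

    ConcurrentLruSketch(long maxSize) { this.maxSize = maxSize; }

    void cacheBlock(String name, ByteBuffer buf) {
        Entry e = new Entry(buf);
        Entry old = map.put(name, e);
        totalSize.addAndGet(e.heapSize() - (old == null ? 0 : old.heapSize()));
        if (totalSize.get() > maxSize) {
            evict();   // real version: signal the eviction thread instead
        }
    }

    ByteBuffer getBlock(String name) {
        Entry e = map.get(name);
        if (e == null) return null;
        e.lastAccess = CLOCK.incrementAndGet();   // touch for LRU
        return e.buf;
    }

    // Remove least-recently-accessed entries until back under maxSize.
    private void evict() {
        map.entrySet().stream()
           .sorted(Comparator.comparingLong(en -> en.getValue().lastAccess))
           .forEach(en -> {
               if (totalSize.get() > maxSize && map.remove(en.getKey(), en.getValue())) {
                   totalSize.addAndGet(-en.getValue().heapSize());
               }
           });
    }

    long size() { return totalSize.get(); }
}
```

Reads never take the map-wide lock that the HBASE-1192 cache needs, which is the point of the change; the trade-off is that eviction ordering is only approximately LRU while the eviction pass runs concurrently with accesses.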