[
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16088839#comment-16088839
]
Anoop Sam John commented on HBASE-17819:
----------------------------------------
We have a config that says whether the blocks belonging to an HFile should be
evicted when the file is closed.
key : 'hbase.rs.evictblocksonclose'
This defaults to false.
That means the blocks won't be forcibly evicted when a file is closed; the
LRU eviction will eventually remove them.
Is this what was happening before we had CompactedHFilesDischarger,
[~ram_krish]?
Now CompactedHFilesDischarger does not seem to consider this config!
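For reference, a minimal self-contained sketch (stand-in types, not the actual
HBase classes) of the behavior the config is meant to give: force-evict a closed
file's blocks only when 'hbase.rs.evictblocksonclose' is true, otherwise leave
them to LRU eviction.
{code:java}
// Minimal sketch with stand-in types (not the real HBase classes):
// honor the evict-on-close config instead of always force-evicting.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EvictOnCloseSketch {
  static final String EVICT_ON_CLOSE_KEY = "hbase.rs.evictblocksonclose";

  // Stand-in for the block cache: hfileName -> number of cached blocks.
  static final Map<String, Integer> blockCache = new ConcurrentHashMap<>();

  static void onFileClose(String hfileName, Map<String, String> conf) {
    boolean evictOnClose =
        Boolean.parseBoolean(conf.getOrDefault(EVICT_ON_CLOSE_KEY, "false"));
    if (evictOnClose) {
      blockCache.remove(hfileName);   // forced eviction at close time
    }
    // else: blocks stay cached; LRU eviction reclaims them eventually.
  }

  public static void main(String[] args) {
    blockCache.put("hfile-1", 42);
    onFileClose("hfile-1", Map.of()); // default false -> blocks remain
    System.out.println(blockCache.containsKey("hfile-1")); // prints true
  }
}
{code}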
> Reduce the heap overhead for BucketCache
> ----------------------------------------
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> We keep a BucketEntry map in BucketCache. Below is the heap-size math for a
> key/value pair in this map.
> BlockCacheKey
> ---------------
> String hfileName - Ref - 4
> long offset - 8
> BlockType blockType - Ref - 4
> boolean isPrimaryReplicaBlock - 1
> Total = 12 (Object) + 17 = 29
> BucketEntry
> ------------
> int offsetBase - 4
> int length - 4
> byte offset1 - 1
> byte deserialiserIndex - 1
> long accessCounter - 8
> BlockPriority priority - Ref - 4
> volatile boolean markedForEvict - 1
> AtomicInteger refCount - 16 + 4
> long cachedTime - 8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry - 40
> blocksByHFile ConcurrentSkipListSet Entry - 40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up with ~1.6 GB of heap overhead.
> This jira aims to reduce this as much as possible.
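To sanity-check the totals quoted above, a small self-contained sketch that
redoes the arithmetic (assuming the 12-byte object headers and 4-byte
compressed-oops references used in the description):
{code:java}
// Reproduces the per-entry heap math from the description (sizes in bytes).
public class BucketCacheOverheadMath {
  public static void main(String[] args) {
    // BlockCacheKey: header + hfileName ref + offset + blockType ref + boolean
    int blockCacheKey = 12 + 4 + 8 + 4 + 1;                           // 29
    // BucketEntry: header + 2 ints + 2 bytes + long + ref + boolean
    //              + AtomicInteger (16 + 4) + long
    int bucketEntry = 12 + 4 + 4 + 1 + 1 + 8 + 4 + 1 + (16 + 4) + 8;  // 63
    int chmEntry = 40;        // ConcurrentHashMap Map.Entry
    int skipListEntry = 40;   // blocksByHFile ConcurrentSkipListSet entry
    long perBlock = blockCacheKey + bucketEntry + chmEntry + skipListEntry; // 172
    long blocks = 10_000_000L;
    System.out.printf("per block: %d bytes%n", perBlock);
    System.out.printf("10M blocks: %.2f GiB%n",
        perBlock * blocks / (1024.0 * 1024 * 1024));
  }
}
{code}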