[
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092587#comment-16092587
]
Anoop Sam John commented on HBASE-17819:
----------------------------------------
The CompactedHFilesDischarger now runs at an interval, and a single run might
remove many compacted files. So instead of evicting cached blocks one file at a
time, we can do the eviction for a list of files in one pass. Need to see how
effective we can make this.
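A minimal, self-contained sketch of the batched idea (not the actual BucketCache
code: BlockKey, backingMap and blocksByHFile below only mirror the real fields,
and evictBlocksForFiles is a hypothetical helper, not an existing API):

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListSet;

/** Sketch: evict cached blocks for a whole batch of compacted files at once. */
public class BatchEvictionSketch {

  /** Simplified stand-in for BlockCacheKey: file name + block offset. */
  record BlockKey(String hfileName, long offset) implements Comparable<BlockKey> {
    @Override
    public int compareTo(BlockKey other) {
      int c = hfileName.compareTo(other.hfileName);
      return c != 0 ? c : Long.compare(offset, other.offset);
    }
  }

  /** Cached blocks keyed by BlockKey (values elided in this sketch). */
  private final Map<BlockKey, Object> backingMap = new ConcurrentHashMap<>();
  /** Per-file index of cached block keys, similar to blocksByHFile. */
  private final ConcurrentSkipListSet<BlockKey> blocksByHFile = new ConcurrentSkipListSet<>();

  /** Evict blocks belonging to any of the given compacted files in one pass. */
  int evictBlocksForFiles(List<String> compactedFiles) {
    Set<String> files = Set.copyOf(compactedFiles);
    int evicted = 0;
    // One traversal of the per-file index covers the whole batch of files,
    // instead of a separate scan-and-remove cycle per file.
    for (BlockKey key : blocksByHFile) {
      if (files.contains(key.hfileName())) {
        if (backingMap.remove(key) != null) {
          evicted++;
        }
        blocksByHFile.remove(key);
      }
    }
    return evicted;
  }
}
{code}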
> Reduce the heap overhead for BucketCache
> ----------------------------------------
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Fix For: 2.0.0
>
>
> We keep the bucket entry map in BucketCache. Below is the math for the heap
> size of the key and value objects in this map.
> BlockCacheKey
> ---------------
> String hfileName - Ref - 4
> long offset - 8
> BlockType blockType - Ref - 4
> boolean isPrimaryReplicaBlock - 1
> Total = 12 (Object) + 17 = 29
> BucketEntry
> ------------
> int offsetBase - 4
> int length - 4
> byte offset1 - 1
> byte deserialiserIndex - 1
> long accessCounter - 8
> BlockPriority priority - Ref - 4
> volatile boolean markedForEvict - 1
> AtomicInteger refCount - 16 + 4
> long cachedTime - 8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry - 40
> blocksByHFile ConcurrentSkipListSet Entry - 40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up having 1.6GB of heap size.
> This jira aims to reduce this overhead as much as possible.
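For reference, a small sketch that reproduces the arithmetic quoted above; the
per-field sizes are the figures from the description, assuming a 12-byte object
header and 4-byte compressed references:

{code:java}
/** Reproduces the per-block heap-overhead arithmetic from the description. */
public class HeapOverheadMath {
  public static void main(String[] args) {
    final int objectHeader = 12;   // assumed object header size
    final int ref = 4;             // assumed compressed-oop reference size

    // BlockCacheKey: hfileName ref + long offset + blockType ref + boolean
    int blockCacheKey = objectHeader + ref + 8 + ref + 1;                        // 29

    // BucketEntry: 2 ints + 2 bytes + long + ref + boolean
    //              + AtomicInteger (16 + 4) + long
    int bucketEntry = objectHeader + 4 + 4 + 1 + 1 + 8 + ref + 1 + (16 + 4) + 8; // 63

    int chmEntry = 40;             // ConcurrentHashMap Map.Entry
    int skipListEntry = 40;        // blocksByHFile ConcurrentSkipListSet entry

    int perBlock = blockCacheKey + bucketEntry + chmEntry + skipListEntry;       // 172
    long totalBytes = perBlock * 10_000_000L;                                    // 1.72e9

    System.out.printf("per block = %d bytes, 10M blocks = %.2f GiB%n",
        perBlock, totalBytes / (1024.0 * 1024 * 1024));                          // ~1.60 GiB
  }
}
{code}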