[
https://issues.apache.org/jira/browse/HBASE-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16094930#comment-16094930
]
Vladimir Rodionov commented on HBASE-17819:
-------------------------------------------
{quote}
BlockCacheKey
---------------
String hfileName - Ref - 4
long offset - 8
BlockType blockType - Ref - 4
boolean isPrimaryReplicaBlock - 1
Total = 12 (Object) + 17 = 29
BucketEntry
------------
int offsetBase - 4
int length - 4
byte offset1 - 1
byte deserialiserIndex - 1
long accessCounter - 8
BlockPriority priority - Ref - 4
volatile boolean markedForEvict - 1
AtomicInteger refCount - 16 + 4
long cachedTime - 8
Total = 12 (Object) + 51 = 63
ConcurrentHashMap Map.Entry - 40
blocksByHFile ConcurrentSkipListSet Entry - 40
Total = 29 + 63 + 80 = 172
{quote}
Just a couple of corrections on your math, guys:
# compressed OOPs (obj ref = 4 bytes) only work up to ~30.5GB of heap size; many users already run with more than that, and then refs are 8 bytes.
# the object's field layout is slightly different: n-byte types are aligned on n-byte boundaries, so if you have, for example, a boolean and a long field, the object is going to be 16 (overhead) + 8 + 8 = 32 and not 16 + 1 + 8. You should also take into account that the total object size is always a multiple of 8, so if you get 42, it is actually 48, because the next object starts on an 8-byte boundary (the JOL sketch below shows this on a toy class).
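For anyone who wants to check these layout rules on their own JVM, here is a minimal sketch assuming the OpenJDK JOL tool (org.openjdk.jol:jol-core, not part of HBase) is on the classpath; it prints the header size, field offsets, alignment gaps and the 8-byte-rounded instance size for a toy boolean + long class:
{code:java}
import org.openjdk.jol.info.ClassLayout;

public class LayoutCheck {

    // Toy class mirroring the boolean + long example above.
    static class BoolAndLong {
        boolean flag;   // 1 byte in the field itself, but padding keeps the object 8-byte aligned
        long value;     // 8 bytes, placed on an 8-byte boundary
    }

    public static void main(String[] args) {
        // Prints the layout the running JVM actually chose: header size,
        // per-field offsets, padding gaps and the total instance size.
        System.out.println(ClassLayout.parseClass(BoolAndLong.class).toPrintable());
    }
}
{code}
Because the output reflects the running JVM, the size reported for reference fields (4 vs 8 bytes) also tells you whether compressed OOPs from the first point are in effect.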
You can shave some bytes just by rearranging the fields of the object in descending size order: 8-byte types first, followed by 4-byte types, then 2-byte types, with 1-byte types at the end.
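To make the rounding concrete, here is a minimal, self-contained sketch that re-runs the per-block math quoted at the top with the multiple-of-8 rule applied. The {{alignTo8}} helper is purely illustrative (it is not HBase's ClassSize), and the figures keep the compressed-OOP (4-byte ref) assumption and the 40-byte map/skip-list entry estimates from the description:
{code:java}
public class HeapEstimate {

    // Round a raw size up to the next multiple of 8, since the next object
    // starts on an 8-byte boundary (illustrative helper, not an HBase API).
    static long alignTo8(long rawSize) {
        return (rawSize + 7) & ~7L;
    }

    public static void main(String[] args) {
        // Figures quoted in the issue description above.
        long blockCacheKey = alignTo8(12 + 17);   // 29 rounds up to 32
        long bucketEntry   = alignTo8(12 + 51);   // 63 rounds up to 64
        long chmEntry      = 40;                  // already a multiple of 8
        long skipListEntry = 40;                  // already a multiple of 8

        long perBlock = blockCacheKey + bucketEntry + chmEntry + skipListEntry;
        System.out.println("per block  : " + perBlock + " bytes");

        long blocks = 10_000_000L;
        System.out.printf("10M blocks : %.2f GB%n",
            perBlock * blocks / (1024.0 * 1024 * 1024));
    }
}
{code}
Under these assumptions the per-block cost comes out at 176 bytes rather than 172, roughly 1.64GB for 10 million blocks, before counting any padding inside the map and skip-list entries themselves.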
> Reduce the heap overhead for BucketCache
> ----------------------------------------
>
> Key: HBASE-17819
> URL: https://issues.apache.org/jira/browse/HBASE-17819
> Project: HBase
> Issue Type: Sub-task
> Components: BucketCache
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-17819_V1.patch, HBASE-17819_V2.patch
>
>
> We keep a Bucket entry map in BucketCache. Below is the math for the heapSize of
> the key and value in this map.
> BlockCacheKey
> ---------------
> String hfileName - Ref - 4
> long offset - 8
> BlockType blockType - Ref - 4
> boolean isPrimaryReplicaBlock - 1
> Total = 12 (Object) + 17 = 29
> BucketEntry
> ------------
> int offsetBase - 4
> int length - 4
> byte offset1 - 1
> byte deserialiserIndex - 1
> long accessCounter - 8
> BlockPriority priority - Ref - 4
> volatile boolean markedForEvict - 1
> AtomicInteger refCount - 16 + 4
> long cachedTime - 8
> Total = 12 (Object) + 51 = 63
> ConcurrentHashMap Map.Entry - 40
> blocksByHFile ConcurrentSkipListSet Entry - 40
> Total = 29 + 63 + 80 = 172
> For 10 million blocks we will end up having ~1.6GB of heap size.
> This jira aims to reduce this as much as possible.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)