[ https://issues.apache.org/jira/browse/HBASE-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-1590:
-------------------------

         Priority: Minor  (was: Major)
    Fix Version/s:     (was: 0.20.1)
                   0.21.0
         Assignee: stack  (was: Jonathan Gray)

Assigned to myself, moved to 0.21, and lowered the priority to Minor.

> Extend TestHeapSize and ClassSize to do "deep" sizing of Objects
> ----------------------------------------------------------------
>
>                 Key: HBASE-1590
>                 URL: https://issues.apache.org/jira/browse/HBASE-1590
>             Project: Hadoop HBase
>          Issue Type: Improvement
>    Affects Versions: 0.20.0
>            Reporter: Jonathan Gray
>            Assignee: stack
>            Priority: Minor
>             Fix For: 0.21.0
>
>
> As discussed in HBASE-1554 there is a bit of a disconnect between how 
> ClassSize calculates the heap size and how we need to calculate heap size in 
> our implementations.
> For example, the LRU block cache can be sized via ClassSize, but that is 
> only a shallow sizing.  Its backing ConcurrentHashMap is the largest memory 
> consumer, yet ClassSize counts it as just a single reference.  In our 
> heapSize() reporting, we want to include *everything* within that Object.
> This issue is to resolve that dissonance.  We may need to create an 
> additional ClassSize.estimateDeep(), we may need to rethink our HeapSize 
> interface, or we may just leave it as is.  The two primary goals of all 
> this testing are to 1) ensure that if something is changed and the sizing 
> is not updated, our tests fail, and 2) ensure our sizing is as accurate as 
> possible.
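
For reference, a minimal sketch of what a deep estimator could look like.
ClassSize.estimateDeep() does not exist yet; the header and reference sizes
and the reflective traversal below are assumptions for illustration, not the
HBase implementation (arrays and JVM alignment are ignored for brevity):

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.IdentityHashMap;
import java.util.Map;

public class DeepSizer {
  // Assumed sizes for a 64-bit JVM; a real implementation would detect these.
  private static final long REFERENCE = 8;
  private static final long OBJECT_HEADER = 16;

  /** Walk the object graph, summing a shallow estimate per visited object. */
  public static long estimateDeep(Object root) {
    return walk(root, new IdentityHashMap<Object, Boolean>());
  }

  private static long walk(Object o, Map<Object, Boolean> seen) {
    if (o == null || seen.containsKey(o)) return 0;  // avoid double-counting
    seen.put(o, Boolean.TRUE);
    long size = OBJECT_HEADER;
    for (Class<?> c = o.getClass(); c != null; c = c.getSuperclass()) {
      for (Field f : c.getDeclaredFields()) {
        if (Modifier.isStatic(f.getModifiers())) continue;
        if (f.getType().isPrimitive()) {
          size += primitiveSize(f.getType());
        } else {
          size += REFERENCE;
          f.setAccessible(true);
          try {
            // Recursing into the referent is what makes this "deep":
            // a ConcurrentHashMap field counts as its whole contents,
            // not as a single 8-byte reference.
            size += walk(f.get(o), seen);
          } catch (IllegalAccessException e) {
            // skip fields we cannot read
          }
        }
      }
    }
    return size;
  }

  private static long primitiveSize(Class<?> t) {
    if (t == long.class || t == double.class) return 8;
    if (t == int.class || t == float.class) return 4;
    if (t == short.class || t == char.class) return 2;
    return 1;  // byte, boolean
  }
}

With something along these lines, a test like TestHeapSize could assert the
deep estimate of the LRU cache against its heapSize() rather than against
the shallow ClassSize figure.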

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
