[ https://issues.apache.org/jira/browse/HBASE-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12726260#action_12726260 ]
stack commented on HBASE-1590:
------------------------------

@holstad is Instrumentation.getObjectSize() a sizeof call? SizeOf is GPL, right? Let me know if you want me to work on the build to add support like we have for clover, where you point at a SizeOf install and then run an ant task with -javaagent:/home/erik/src/tgzs/SizeOf.jar. We could run this as part of the hudson build (I think -- maybe GPL code is disallowed up on hudson ... would have to see) ... or we could run it as part of the release.

> Extend TestHeapSize and ClassSize to do "deep" sizing of Objects
> ----------------------------------------------------------------
>
>                 Key: HBASE-1590
>                 URL: https://issues.apache.org/jira/browse/HBASE-1590
>             Project: Hadoop HBase
>          Issue Type: Improvement
>    Affects Versions: 0.20.0
>            Reporter: Jonathan Gray
>             Fix For: 0.20.0
>
>
> As discussed in HBASE-1554, there is a bit of a disconnect between how
> ClassSize calculates the heap size and how we need to calculate heap size
> in our implementations.
> For example, the LRU block cache can be sized via ClassSize, but it is a
> shallow sizing. There is a backing ConcurrentHashMap that is the largest
> memory consumer; however, ClassSize only counts it as a single reference.
> But in our heapSize() reporting, we want to include *everything* within
> that Object.
> This issue is to resolve that dissonance. We may need to create an
> additional ClassSize.estimateDeep(), we may need to rethink our HeapSize
> interface, or maybe just leave it as is. The two primary goals of all this
> testing are to 1) ensure that if something is changed and the sizing is not
> updated, our tests fail, and 2) ensure our sizing is as accurate as possible.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
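To illustrate the shallow-vs-deep distinction discussed above, here is a minimal sketch of what a deep estimator like the proposed ClassSize.estimateDeep() might do: walk every object reachable from a root via reflection and sum a per-object shallow size, instead of counting a nested structure (e.g. the cache's backing ConcurrentHashMap) as a single reference. DeepSizer and Node are hypothetical names, not HBase or SizeOf classes; with a real java.lang.instrument agent on the command line (-javaagent:...), the shallowSizer parameter would be the agent's Instrumentation::getObjectSize, which is itself only a shallow (per-object) measurement.

```java
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.function.ToLongFunction;

// Hypothetical sketch, not HBase code: deep sizing = shallow size of the
// root plus the shallow sizes of everything reachable from it, each object
// counted once (IdentityHashMap guards against shared references and cycles).
public class DeepSizer {
    public static long estimateDeep(Object root, ToLongFunction<Object> shallowSizer) {
        IdentityHashMap<Object, Boolean> seen = new IdentityHashMap<>();
        Deque<Object> pending = new ArrayDeque<>();
        if (root != null) pending.push(root);
        long total = 0;
        while (!pending.isEmpty()) {
            Object obj = pending.pop();
            if (seen.put(obj, Boolean.TRUE) != null) continue; // already counted
            total += shallowSizer.applyAsLong(obj);
            Class<?> c = obj.getClass();
            if (c.isArray()) {
                // Primitive array contents are part of the shallow size;
                // object arrays contribute their referenced elements.
                if (!c.getComponentType().isPrimitive()) {
                    for (int i = 0; i < Array.getLength(obj); i++) {
                        Object e = Array.get(obj, i);
                        if (e != null) pending.push(e);
                    }
                }
                continue;
            }
            // Follow every non-static reference field, including inherited ones.
            for (; c != null; c = c.getSuperclass()) {
                for (Field f : c.getDeclaredFields()) {
                    if (Modifier.isStatic(f.getModifiers()) || f.getType().isPrimitive()) continue;
                    f.setAccessible(true);
                    try {
                        Object v = f.get(obj);
                        if (v != null) pending.push(v);
                    } catch (IllegalAccessException ignored) { }
                }
            }
        }
        return total;
    }
}

// Tiny self-referential structure to exercise cycle handling.
class Node {
    Node next;
    int payload; // primitive field: covered by the shallow size, not followed
}
```

A shallow sizer would report only one Node here; the deep walk also follows the next reference, so a two-node cycle is counted as exactly two objects, no more.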