[ https://issues.apache.org/jira/browse/HBASE-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13460698#comment-13460698 ]

Todd Lipcon commented on HBASE-6852:
------------------------------------

If we used a shared array of longs, we'd get a ton of cache-contention (false-sharing) 
effects. Whatever we do should be cache-line padded to avoid that performance hole.
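For illustration only (this is a sketch, not code from any patch here), one way to read 
"cache-line padded" for a shared counter array: spread the logical counters out so that 
two hot counters never share a 64-byte line, at the cost of extra memory.

    import java.util.concurrent.atomic.AtomicLongArray;

    // Hypothetical helper, not HBase code: a counter array where each logical
    // counter gets its own cache line, so concurrent increments of different
    // counters don't false-share.
    final class PaddedCounters {
      // 8 longs = 64 bytes, a common cache-line size (an assumption; some CPUs differ).
      private static final int STRIDE = 8;
      private final AtomicLongArray slots;

      PaddedCounters(int numCounters) {
        slots = new AtomicLongArray(numCounters * STRIDE);
      }

      void increment(int counterId) {
        slots.incrementAndGet(counterId * STRIDE);
      }

      long get(int counterId) {
        return slots.get(counterId * STRIDE);
      }
    }

This trades roughly 8x memory for the elimination of false sharing, but the writes are 
still shared CAS operations; the per-thread approach below avoids shared writes entirely.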

Having a per-thread (ThreadLocal) metrics array isn't a bad way to go: there's no 
contention, the counters can use non-volatile types, and metrics snapshots can do a 
stale read by simply iterating over all the threads' arrays (see the sketch below).
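A minimal sketch of that per-thread approach, with hypothetical names (this is not the 
actual HBase implementation): each thread increments its own plain long[] with no 
volatile writes or CAS, and a snapshot sums over all registered arrays, accepting 
slightly stale values.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    final class PerThreadMetrics {
      static final int NUM_COUNTERS = 16;  // e.g. one slot per BlockCategory/hit-miss combination

      // All per-thread counter arrays, kept so a snapshot can iterate over them.
      private static final List<long[]> ALL = new CopyOnWriteArrayList<long[]>();

      private static final ThreadLocal<long[]> LOCAL = new ThreadLocal<long[]>() {
        @Override protected long[] initialValue() {
          long[] counters = new long[NUM_COUNTERS];
          ALL.add(counters);
          return counters;
        }
      };

      /** Hot path: a plain array store owned by the current thread; no volatile, no CAS. */
      static void increment(int counterId) {
        LOCAL.get()[counterId]++;
      }

      /** Snapshot path: may observe slightly stale per-thread values, which is fine for metrics. */
      static long[] snapshot() {
        long[] sum = new long[NUM_COUNTERS];
        for (long[] counters : ALL) {
          for (int i = 0; i < NUM_COUNTERS; i++) {
            sum[i] += counters[i];
          }
        }
        return sum;
      }
    }

A real implementation would also need to cope with threads that exit (for example by 
folding their final values into a global total), which this sketch omits.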
                
> SchemaMetrics.updateOnCacheHit costs too much while full scanning a table 
> with all of its fields
> ------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-6852
>                 URL: https://issues.apache.org/jira/browse/HBASE-6852
>             Project: HBase
>          Issue Type: Improvement
>          Components: metrics
>    Affects Versions: 0.94.0
>            Reporter: Cheng Hao
>            Priority: Minor
>              Labels: performance
>             Fix For: 0.94.3, 0.96.0
>
>         Attachments: onhitcache-trunk.patch
>
>
> SchemaMetrics.updateOnCacheHit costs too much while doing a full table scan.
> Here are the top 5 hotspots in the regionserver during a full table scan
> (apologies for the formatting):
> CPU: Intel Westmere microarchitecture, speed 2.262e+06 MHz (estimated)
> Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 5000000
> samples  %        image name  symbol name
> -------------------------------------------------------------------------------
> 98447    13.4324  14033.jo    void org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, boolean)
>   98447  100.000  14033.jo    void org.apache.hadoop.hbase.regionserver.metrics.SchemaMetrics.updateOnCacheHit(org.apache.hadoop.hbase.io.hfile.BlockType$BlockCategory, boolean) [self]
> -------------------------------------------------------------------------------
> 45814    6.2510   14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, byte[], int, int)
>   45814  100.000  14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compareRows(byte[], int, int, byte[], int, int) [self]
> -------------------------------------------------------------------------------
> 43523    5.9384   14033.jo    boolean org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue)
>   43523  100.000  14033.jo    boolean org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(org.apache.hadoop.hbase.KeyValue) [self]
> -------------------------------------------------------------------------------
> 42548    5.8054   14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, byte[], int, int)
>   42548  100.000  14033.jo    int org.apache.hadoop.hbase.KeyValue$KeyComparator.compare(byte[], int, int, byte[], int, int) [self]
> -------------------------------------------------------------------------------
> 40572    5.5358   14033.jo    int org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1
>   40572  100.000  14033.jo    int org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.binarySearchNonRootIndex(byte[], int, int, java.nio.ByteBuffer, org.apache.hadoop.io.RawComparator)~1 [self]

