[ https://issues.apache.org/jira/browse/HBASE-22532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16855300#comment-16855300 ]

ramkrishna.s.vasudevan commented on HBASE-22532:
------------------------------------------------

We should probably check the size of each block as it is written, and on the 
read side log the offsets and lengths we pass to HDFS, then verify whether 
they match the dataLength you got here. We are probably reading more than we 
need (approximately 2 blocks). Good one [~openinx].
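
A minimal sketch, in Java, of the kind of instrumentation being suggested; 
BlockReadProbe, readAndLog, and the expectedOnDiskSizeWithHeader parameter 
are hypothetical names for illustration, not HBase's actual read path:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;

public final class BlockReadProbe {

  // Hypothetical helper: issue the same positioned read HBase would, and log
  // the offset/length handed to HDFS so they can be compared against the
  // block's expected on-disk size (header + data + checksums).
  static byte[] readAndLog(FSDataInputStream in, long offset, int length,
      int expectedOnDiskSizeWithHeader) throws IOException {
    byte[] buf = new byte[length];
    // Positioned read, from Hadoop's PositionedReadable interface.
    in.readFully(offset, buf, 0, length);
    System.out.printf("read offset=%d length=%d expectedBlockSize=%d%n",
        offset, length, expectedOnDiskSizeWithHeader);
    if (length >= 2 * expectedOnDiskSizeWithHeader) {
      // If the requested length is roughly 2x the block size, each request
      // is pulling about two blocks' worth of bytes, matching the suspicion
      // above.
      System.out.printf("over-read: %d extra bytes%n",
          length - expectedOnDiskSizeWithHeader);
    }
    return buf;
  }
}
{code}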

> There's still too much CPU wasted on validating checksums even if 
> buffer.size=65KB
> ----------------------------------------------------------------------------------
>
>                 Key: HBASE-22532
>                 URL: https://issues.apache.org/jira/browse/HBASE-22532
>             Project: HBase
>          Issue Type: Sub-task
>            Reporter: Zheng Hu
>            Assignee: Zheng Hu
>            Priority: Major
>         Attachments: async-prof-pid-27827-cpu-3.svg, 
> async-prof-pid-64695-cpu-1.svg
>
>
> After disabling the block cache, with the following config: 
> {code}
>     # Disable the block cache
>     hfile.block.cache.size=0
>     hbase.ipc.server.allocator.buffer.size=66560
>     hbase.ipc.server.reservoir.minimal.allocating.size=0
> {code}
> The ByteBuff for a block should be a SingleByteBuff, which uses the Hadoop 
> native lib to validate the checksum. However, in the CPU flame graph 
> [async-prof-pid-27827-cpu-3.svg|https://issues.apache.org/jira/secure/attachment/12970683/async-prof-pid-27827-cpu-3.svg],
> we can still see about 32% of CPU spent in PureJavaCrc32#update, which means 
> it is not using the faster Hadoop native lib.
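>
> A minimal sketch of the dispatch described above, assuming hypothetical 
> helper names (verify, verifyWithPureJavaCrc) and assuming nioByteBuffers() 
> exposes the backing buffer: only a SingleByteBuff can hand Hadoop's 
> DataChecksum a single ByteBuffer that the native CRC code verifies 
> directly, so anything else falls back to a pure-Java CRC:
> {code}
> import java.io.IOException;
> import java.nio.ByteBuffer;
>
> import org.apache.hadoop.hbase.nio.ByteBuff;
> import org.apache.hadoop.hbase.nio.SingleByteBuff;
> import org.apache.hadoop.util.DataChecksum;
>
> final class ChecksumDispatchSketch {
>
>   // Hypothetical entry point: verify block data against its checksums,
>   // taking the native-capable path only for a single-buffer block.
>   static void verify(ByteBuff data, ByteBuffer checksums, DataChecksum sum,
>       String file, long basePos) throws IOException {
>     if (data instanceof SingleByteBuff) {
>       // Single backing ByteBuffer: Hadoop's DataChecksum can use the
>       // native/hardware CRC implementation here. nioByteBuffers() is
>       // assumed to expose that buffer.
>       sum.verifyChunkedSums(data.nioByteBuffers()[0], checksums, file,
>           basePos);
>     } else {
>       // A MultiByteBuff (e.g. several pooled 65KB buffers stitched
>       // together) cannot take that path, so validation drops to a
>       // pure-Java CRC loop, which is what shows up as
>       // PureJavaCrc32#update in the flame graph.
>       verifyWithPureJavaCrc(data, checksums);
>     }
>   }
>
>   // Hypothetical slow path, standing in for a chunk-by-chunk pure-Java CRC.
>   static void verifyWithPureJavaCrc(ByteBuff data, ByteBuffer checksums) {
>     // elided: iterate chunks, update a PureJavaCrc32, compare to checksums
>   }
> }
> {code}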


