[
https://issues.apache.org/jira/browse/HBASE-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14542274#comment-14542274
]
Anoop Sam John commented on HBASE-11927:
----------------------------------------
bq.I was wondering if we could not just strip the hbase checksumtype... there
is nothing in it now. Could we just use hadoops?
I was referring mainly to the read path, where we still have to handle old
HFiles written with the HBase checksum type code. So if we switch outright to
Hadoop's type codes, we would have to change the write path as well.
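(The compatibility concern comes down to CRC32 and CRC32C being different
polynomials: a block checksummed with one will not verify with the other, so a
reader must keep interpreting the type code stored in old HFiles correctly. A
minimal illustration using the JDK's own implementations -- java.util.zip.CRC32
and CRC32C, available since Java 9, not the Hadoop native code this issue
proposes -- with a made-up input block:)

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;
import java.util.zip.Checksum;

public class ChecksumDemo {
    // Run a full byte array through the given checksum implementation.
    static long checksum(Checksum c, byte[] data) {
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] block = "hfile-data-block".getBytes(); // placeholder data

        long crc32 = checksum(new CRC32(), block);
        long crc32c = checksum(new CRC32C(), block);

        // Different polynomials yield different values for the same bytes,
        // so the reader must know which algorithm wrote the stored checksum.
        System.out.println(crc32 != crc32c); // true
    }
}
```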
> Use Native Hadoop Library for HFile checksum (And flip default from CRC32 to
> CRC32C)
> ------------------------------------------------------------------------------------
>
> Key: HBASE-11927
> URL: https://issues.apache.org/jira/browse/HBASE-11927
> Project: HBase
> Issue Type: Bug
> Reporter: stack
> Assignee: Apekshit Sharma
> Attachments: HBASE-11927-v1.patch, HBASE-11927-v2.patch,
> HBASE-11927-v4.patch, HBASE-11927.patch, after-compact-2%.svg,
> after-randomWrite1M-0.5%.svg, before-compact-22%.svg,
> before-randomWrite1M-5%.svg, c2021.crc2.svg, c2021.write.2.svg,
> c2021.zip.svg, crc32ct.svg
>
>
> Up in Hadoop they have this change. Let me publish some graphs to show that
> it makes a difference (CRC is a massive amount of our CPU usage in my
> profiling of an upload, because of compacting, flushing, etc.). We should
> also make use of native CRC implementations -- especially the 2.6 HDFS-6865
> work and ilk -- in HBase, but that is another issue for now.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)