[
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13202100#comment-13202100
]
Phabricator commented on HBASE-5074:
------------------------------------
dhruba has commented on the revision "[jira] [HBASE-5074] Support checksums in
HBase block cache".
Ted: I forgot to state that one can change the default checksum algorithm at any
time. No disk format upgrade is necessary. Each hfile records the checksum
algorithm that was used to store the data inside it. If today you use CRC32 and
tomorrow you change the configuration setting to CRC32C, then new files
generated from that point on (as part of memstore flushes and compactions) will
use CRC32C, while older files will continue to be verified with the CRC32 algorithm.
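As a rough illustration of that per-file dispatch, here is a minimal sketch only:
the type codes, field names and class are hypothetical and not taken from the
patch, and java.util.zip.CRC32C requires Java 9+ (older JDKs would need a
library implementation such as Hadoop's pure-Java CRC32C).

    import java.util.zip.CRC32;
    import java.util.zip.CRC32C;
    import java.util.zip.Checksum;

    public class ChecksumSelector {
      // Hypothetical codes persisted per hfile; the real codes live in the hfile format.
      static final byte CHECKSUM_NULL = 0;
      static final byte CHECKSUM_CRC32 = 1;
      static final byte CHECKSUM_CRC32C = 2;

      /** Return the verifier matching the algorithm recorded in the file being read. */
      static Checksum forType(byte typeStoredInFile) {
        switch (typeStoredInFile) {
          case CHECKSUM_CRC32:  return new CRC32();
          case CHECKSUM_CRC32C: return new CRC32C();  // Java 9+
          default:              return null;          // NULL type: verification disabled
        }
      }

      /** Verify a data block against the checksum value stored alongside it. */
      static boolean verify(byte typeStoredInFile, byte[] block, long expected) {
        Checksum c = forType(typeStoredInFile);
        if (c == null) return true;        // checksums turned off for this file
        c.update(block, 0, block.length);
        return c.getValue() == expected;
      }
    }

Because the algorithm is read from the file being verified rather than from the
current configuration, flipping the setting only affects newly written files.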
REVISION DETAIL
https://reviews.facebook.net/D1521
> support checksums in HBase block cache
> --------------------------------------
>
> Key: HBASE-5074
> URL: https://issues.apache.org/jira/browse/HBASE-5074
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch,
> D1521.2.patch, D1521.3.patch, D1521.3.patch
>
>
> The current implementation of HDFS stores the data in one block file and the
> metadata(checksum) in another block file. This means that every read into the
> HBase block cache actually consumes two disk iops, one to the datafile and
> one to the checksum file. This is a major problem for scaling HBase, because
> HBase is usually bottlenecked on the number of random disk iops that the
> storage-hardware offers.
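To put rough, illustrative numbers on the iops argument above (the figures are
assumptions, not from the issue): a single 7200 RPM disk delivers on the order
of 100 random iops. At 2 iops per block-cache miss (one to the data file, one
to the checksum file) that is roughly 100 / 2 = 50 misses served per second per
disk; with the checksum stored inline in the HFile block, a miss costs 1 iop
and the same disk serves roughly 100 misses per second, about doubling the
random-read capacity per spindle.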