[
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13172994#comment-13172994
]
Todd Lipcon commented on HBASE-5074:
------------------------------------
bq. Once the block is read into the hbase cache, it will verify the checksum
and if not valid, have to use a new HDFS api to read in contents from another
hdfs replica
Rather than adding a new API to read from another replica, HBase could instead
just trigger a second pread from HDFS _with_ the verifyChecksum flag set. This
would cause HDFS to notice the checksum error based on its own checksums and
do the "right thing" (i.e., report the bad replica, fix it up, etc.).
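The fallback Todd describes can be sketched roughly as below. The reader and pread names here are illustrative stand-ins, not the real HDFS API; the point is only the control flow: verify the HBase-level checksum on the fast path, and on mismatch re-read through a checksum-verifying pread so HDFS itself detects and reports the bad replica.

```java
import java.util.Arrays;
import java.util.function.Supplier;
import java.util.zip.CRC32;

public class ChecksumFallback {
    // Compute a CRC32 over the block payload (HBase uses CRC32/CRC32C-style
    // checksums in practice; plain CRC32 here keeps the sketch self-contained).
    static long crc(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    /**
     * Read a block for the cache. First use the cheap path that skipped HDFS
     * checksum verification; if the HBase-level checksum does not match,
     * fall back to a pread with verifyChecksum=true (modeled here as a
     * Supplier) so HDFS notices the corruption via its own checksums.
     */
    static byte[] readForCache(byte[] fastPathBytes, long expectedCrc,
                               Supplier<byte[]> verifiedPread) {
        if (crc(fastPathBytes) == expectedCrc) {
            return fastPathBytes;          // common case: one disk iop, no .meta read
        }
        // Checksum mismatch: re-read through HDFS with checksum verification on.
        return verifiedPread.get();
    }

    public static void main(String[] args) {
        byte[] good = "hbase-block-payload".getBytes();
        long expected = crc(good);

        byte[] corrupted = Arrays.copyOf(good, good.length);
        corrupted[0] ^= 0xFF;  // simulate a bit flip seen on the fast path

        byte[] result = readForCache(corrupted, expected, () -> good);
        System.out.println(Arrays.equals(result, good));  // true: fallback recovered
    }
}
```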
> support checksums in HBase block cache
> --------------------------------------
>
> Key: HBASE-5074
> URL: https://issues.apache.org/jira/browse/HBASE-5074
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the
> metadata (checksum) in another block file. This means that every read into the
> HBase block cache actually consumes two disk iops, one to the data file and
> one to the checksum file. This is a major problem for scaling HBase, because
> HBase is usually bottlenecked on the number of random disk iops that the
> storage hardware offers.
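The improvement the issue aims at, storing the checksum inline with the data so one contiguous read yields both, might look like the following simplified sketch (the real HFile block layout is richer; appending a single trailing CRC32 is an assumption made for illustration):

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class InlineChecksumBlock {
    // Serialize payload followed by its CRC32, so a single sequential
    // read fetches both data and checksum: one disk iop, no .meta file.
    static byte[] write(byte[] payload) {
        CRC32 c = new CRC32();
        c.update(payload, 0, payload.length);
        ByteBuffer buf = ByteBuffer.allocate(payload.length + 8);
        buf.put(payload).putLong(c.getValue());
        return buf.array();
    }

    // Verify and return the payload from the single buffer read from disk.
    static byte[] read(byte[] block) {
        ByteBuffer buf = ByteBuffer.wrap(block);
        byte[] payload = new byte[block.length - 8];
        buf.get(payload);
        long stored = buf.getLong();
        CRC32 c = new CRC32();
        c.update(payload, 0, payload.length);
        if (c.getValue() != stored) {
            throw new IllegalStateException("checksum mismatch");
        }
        return payload;
    }
}
```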