[ https://issues.apache.org/jira/browse/HDFS-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171531#comment-13171531 ]
Luke Lu commented on HDFS-2699:
-------------------------------

bq. The number of random reads issued by HBase is almost twice the iops shown via iostat. Each HBase random io translates to a positional read (pread) to HDFS.

As I mentioned in our last conversation, you can embed an application-level checksum in the HBase block (a la Hypertable) and turn off verifyChecksum in preads. You'd need HFile v3 for this, of course :)

bq. Any thoughts on how we can put data and checksums together in the same block file?

As discussed in HADOOP-1134, inline checksums not only make the code more complex, but also make an in-place upgrade a lot more expensive (you have to copy the block contents). We could solve the latter by supporting two block formats simultaneously, at the expense of additional code complexity.

> Store data and checksums together in block file
> -----------------------------------------------
>
>                 Key: HDFS-2699
>                 URL: https://issues.apache.org/jira/browse/HDFS-2699
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the metadata (checksum) in another block file. This means that every read from HDFS actually consumes two disk iops: one to the data file and one to the checksum file. This is a major problem for scaling HBase, because HBase is usually bottlenecked on the number of random disk iops that the storage hardware offers.
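To make the workaround in the comment above concrete, below is a minimal sketch (not the actual HBase/HFile v3 code) of a pread that skips HDFS checksum verification via FileSystem.setVerifyChecksum(false), so the read touches only the block data file, and instead validates an application-level CRC32 assumed to be stored at the end of the payload. The class name, method name, and the payload-plus-4-byte-CRC layout are illustrative assumptions.

{code:java}
import java.io.IOException;
import java.util.zip.CRC32;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Sketch of an application-level checksum check on a pread, with HDFS
 * checksum verification disabled. The on-disk layout (payload followed by a
 * 4-byte CRC32) is hypothetical and only stands in for an HFile-style block.
 */
public class AppChecksumPread {

  public static byte[] preadWithAppChecksum(Configuration conf, Path file,
      long offset, int payloadLen) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    // Ask HDFS not to verify its own block checksums for this FileSystem
    // instance; preads then avoid the second iop against the .meta file.
    fs.setVerifyChecksum(false);

    byte[] buf = new byte[payloadLen + 4];        // payload + stored CRC32
    try (FSDataInputStream in = fs.open(file)) {
      in.readFully(offset, buf);                  // positional read (pread)
    }

    // Recompute the application-level checksum over the payload and compare
    // it with the trailing 4 bytes written by the (hypothetical) writer.
    CRC32 crc = new CRC32();
    crc.update(buf, 0, payloadLen);
    int stored = ((buf[payloadLen] & 0xff) << 24)
        | ((buf[payloadLen + 1] & 0xff) << 16)
        | ((buf[payloadLen + 2] & 0xff) << 8)
        | (buf[payloadLen + 3] & 0xff);
    if ((int) crc.getValue() != stored) {
      throw new IOException("Application-level checksum mismatch at offset " + offset);
    }

    byte[] payload = new byte[payloadLen];
    System.arraycopy(buf, 0, payload, 0, payloadLen);
    return payload;
  }
}
{code}

With this approach HDFS still writes its .meta checksum file, but the random-read path only pays one disk iop per block read, which is the bottleneck described in the issue.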