[ https://issues.apache.org/jira/browse/HDFS-2699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171698#comment-13171698 ]
Todd Lipcon commented on HDFS-2699:
-----------------------------------

+1 on considering putting them in the same file. Block files already have a metadata header, so we could backward-compatibly support the earlier format without requiring a data rewrite on upgrade (which would be prohibitively expensive).

Regarding the other ideas, like caching checksums in the buffer cache or on SSD: the issue here is that even at 0.78% overhead (4/512), the checksums on a big DN are still fairly large. For example, if the application has a dataset of 4TB per node, then caching just the checksums takes 31GB of RAM. If you're mostly missing in HBase's data cache, you'll probably be missing in the checksum cache too (are you really going to devote 30GB to it?)

> Store data and checksums together in block file
> -----------------------------------------------
>
>                 Key: HDFS-2699
>                 URL: https://issues.apache.org/jira/browse/HDFS-2699
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> The current implementation of HDFS stores the data in one block file and the metadata (checksum) in another block file. This means that every read from HDFS actually consumes two disk iops: one to the data file and one to the checksum file. This is a major problem for scaling HBase, because HBase is usually bottlenecked on the number of random disk iops that the storage hardware offers.
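To make the arithmetic in Todd's comment concrete, here is a quick back-of-the-envelope check. The 4TB-per-node figure is his example; the 4-byte CRC32 per 512-byte chunk matches the HDFS defaults discussed above:

{code:java}
// Back-of-the-envelope check of the numbers in the comment above.
// Assumptions: 4-byte CRC32 per 512-byte chunk, 4TB of block data per DN.
public class ChecksumOverhead {
    public static void main(String[] args) {
        long dataPerNode = 4_000_000_000_000L; // 4TB of block data on one DN
        int bytesPerChecksum = 512;            // default checksum chunk size
        int checksumSize = 4;                  // CRC32 is 4 bytes

        double overhead = (double) checksumSize / bytesPerChecksum;
        long checksumBytes = dataPerNode / bytesPerChecksum * checksumSize;

        System.out.printf("overhead:  %.2f%%%n", overhead * 100);       // 0.78%
        System.out.printf("checksums: %.2f GB%n", checksumBytes / 1e9); // 31.25 GB
    }
}
{code}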
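On the backward-compatibility point: a version header lets one reader support both the old and the new layout, so existing blocks never need to be rewritten on upgrade. The sketch below illustrates the idea only; the constants, class names, and layout are hypothetical, not the actual HDFS BlockMetadataHeader format:

{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Peek at a version number in the header and dispatch to the matching
// on-disk layout, so pre-upgrade blocks stay readable in place.
public class BlockFormatDispatch {
    static final short VERSION_SEPARATE_META = 1; // data file + separate .meta file
    static final short VERSION_INLINE_CRC = 2;    // checksums stored inline with data

    static BlockReader open(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            short version = in.readShort();
            switch (version) {
                case VERSION_SEPARATE_META: return new SeparateMetaReader(path);
                case VERSION_INLINE_CRC:    return new InlineCrcReader(path);
                default: throw new IOException("unknown block format version " + version);
            }
        }
    }

    interface BlockReader {}
    static class SeparateMetaReader implements BlockReader {
        SeparateMetaReader(String path) { /* reads the old data + .meta pair */ }
    }
    static class InlineCrcReader implements BlockReader {
        InlineCrcReader(String path) { /* reads chunks interleaved with CRCs */ }
    }
}
{code}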
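And the layout the issue itself proposes can be pictured as a write path that interleaves each chunk with its checksum, so a random read costs one seek instead of two. Again a hypothetical sketch, using java.util.zip.CRC32 and the 512-byte default chunk size:

{code:java}
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;

// Hypothetical write path for the inline format: each data chunk is
// immediately followed on disk by its 4-byte CRC32, so a random read
// fetches the chunk and its checksum with a single disk iop.
public class InlineChecksumWriter {
    static void writeChunk(DataOutputStream out, byte[] chunk) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, chunk.length);
        out.write(chunk);                   // the data chunk itself
        out.writeInt((int) crc.getValue()); // its checksum, adjacent on disk
    }
}
{code}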