[
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Zhihong Yu updated HBASE-5074:
------------------------------
Comment: was deleted
(was: tedyu has commented on the revision "[jira] [HBASE-5074] Support
checksums in HBase block cache".
INLINE COMMENTS
src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java:425 This cast is not safe. See
https://builds.apache.org/job/PreCommit-HBASE-Build/907//testReport/org.apache.hadoop.hbase.mapreduce/TestLoadIncrementalHFiles/testSimpleLoad/:
Caused by: java.lang.ClassCastException: org.apache.hadoop.hdfs.DistributedFileSystem cannot be cast to org.apache.hadoop.hbase.util.HFileSystem
  at org.apache.hadoop.hbase.io.hfile.HFile.createReaderWithEncoding(HFile.java:425)
  at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:433)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:407)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:328)
  at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:326)
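The ClassCastException above comes from assuming every FileSystem handed in is already an HFileSystem. A minimal sketch of the wrap-instead-of-cast pattern that avoids this, using simplified stand-in types (the real HBase and Hadoop classes carry far more state; the stand-ins here are hypothetical):

```java
// Simplified stand-ins for the Hadoop/HBase types involved in the trace.
class FileSystem {}
class DistributedFileSystem extends FileSystem {}
class HFileSystem extends FileSystem {
    final FileSystem backing;
    HFileSystem(FileSystem backing) { this.backing = backing; }
}

public class SafeCast {
    // Check the runtime type and wrap a plain FileSystem instead of
    // blindly casting it, so a raw DistributedFileSystem no longer
    // triggers a ClassCastException.
    static HFileSystem asHFileSystem(FileSystem fs) {
        if (fs instanceof HFileSystem) {
            return (HFileSystem) fs;
        }
        return new HFileSystem(fs);
    }

    public static void main(String[] args) {
        FileSystem raw = new DistributedFileSystem();
        HFileSystem wrapped = asHFileSystem(raw);
        System.out.println(wrapped.backing == raw);        // true: raw fs was wrapped
        System.out.println(asHFileSystem(wrapped) == wrapped); // true: already wrapped, returned as-is
    }
}
```

This only illustrates the guarded-cast idea; how the actual patch resolves the cast in HFile.createReaderWithEncoding is up to the review.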
src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java:160 Should we default to CRC32C?
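For context on the CRC32 vs. CRC32C question: the two algorithms use different polynomials and produce different values for the same input, so the default matters for on-disk compatibility. A small illustrative sketch using the JDK's built-in classes (java.util.zip.CRC32C is JDK 9+; the patch under review instead loads its implementation through ChecksumFactory, so this is not the patch's code path):

```java
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

public class ChecksumDemo {
    public static void main(String[] args) {
        byte[] data = "hello hbase".getBytes();

        // Classic CRC32 (same polynomial as zlib/gzip).
        CRC32 crc32 = new CRC32();
        crc32.update(data, 0, data.length);

        // CRC32C (Castagnoli polynomial; hardware-accelerated on many CPUs).
        CRC32C crc32c = new CRC32C();
        crc32c.update(data, 0, data.length);

        System.out.println("CRC32  = " + Long.toHexString(crc32.getValue()));
        System.out.println("CRC32C = " + Long.toHexString(crc32c.getValue()));
    }
}
```

The two printed values differ, which is why the default checksum type has to be recorded with the data rather than changed silently.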
src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java:2 No year is
needed.
src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java:59 Shall we name this variable ctor?
A similar comment applies to the other meth variables in this patch.
REVISION DETAIL
https://reviews.facebook.net/D1521
)
> support checksums in HBase block cache
> --------------------------------------
>
> Key: HBASE-5074
> URL: https://issues.apache.org/jira/browse/HBASE-5074
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch,
> D1521.2.patch, D1521.3.patch, D1521.3.patch
>
>
> The current implementation of HDFS stores the data in one block file and the
> metadata (checksums) in another block file. This means that every read into the
> HBase block cache actually consumes two disk iops: one to the data file and
> one to the checksum file. This is a major problem for scaling HBase, because
> HBase is usually bottlenecked on the number of random disk iops that the
> storage hardware offers.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira