[
https://issues.apache.org/jira/browse/HBASE-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Phabricator updated HBASE-5074:
-------------------------------
Attachment: D1521.3.patch
dhruba updated the revision "[jira] [HBASE-5074] Support checksums in HBase
block cache".
Reviewers: mbautin
Many new goodies, thanks to the feedback from Mikhail and Todd. This completes
my addressing of all the current review comments. If somebody can re-review
it, that would be great.
1. The bytesPerChecksum is configurable: set hbase.hstore.bytes.per.checksum
in the config. The default value is 16K. Similarly, one can set
hbase.hstore.checksum.name to either CRC32 or CRC32C; the default is CRC32. If
the PureJavaCrc32 algorithm is available in the classpath, it is used;
otherwise it falls back to java.util.zip.CRC32. Each checksum value is assumed
to be 4 bytes; this is currently not configurable (any comments here?). The
reflection-based method of creating checksum objects has been reworked to
incur much lower overhead.
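For reference, the two settings above would go into hbase-site.xml along these
lines (values shown are the defaults described above; property names are as
stated in this comment):

```xml
<!-- Checksum settings described above; defaults shown. -->
<property>
  <name>hbase.hstore.bytes.per.checksum</name>
  <value>16384</value> <!-- 16K of data per 4-byte checksum -->
</property>
<property>
  <name>hbase.hstore.checksum.name</name>
  <value>CRC32</value> <!-- or CRC32C -->
</property>
```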
2. If an hbase-level crc check fails, it falls back to using hdfs-level
checksums for the next few reads (defaults to 100). After that, it retries
hbase-level checksums. I picked 100 as the default so that even in the case of
continuous hbase-checksum failures, the overhead of the additional iops is
limited to 1%. Enhanced the unit test to validate this behaviour.
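The fallback behaviour in point 2 can be sketched roughly as follows. This is
an illustrative toy model, not the actual patch code; the class and method
names here are hypothetical (the real changes live in files like HFileBlock.java
and ChecksumUtil.java listed below):

```java
// Illustrative sketch of the hbase-checksum fallback described above.
// All names are hypothetical, not from the actual patch.
public class ChecksumFallback {
  // Number of reads served via hdfs-level checksums after an
  // hbase-level checksum failure (default 100, so the extra iops
  // from continuous failures stay around 1%).
  private static final int DEFAULT_FALLBACK_READS = 100;

  private int remainingHdfsChecksumReads = 0;

  /** Called when an hbase-level checksum verification fails. */
  public void onHBaseChecksumFailure() {
    remainingHdfsChecksumReads = DEFAULT_FALLBACK_READS;
  }

  /** Decide, per read, whether to verify checksums at the hbase level. */
  public boolean useHBaseChecksum() {
    if (remainingHdfsChecksumReads > 0) {
      remainingHdfsChecksumReads--;  // still inside the fallback window
      return false;                  // let hdfs verify this read instead
    }
    return true;                     // window over: retry hbase-level checksums
  }
}
```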
3. Enhanced unit tests to cover different values of bytesPerChecksum. Also
added JMX metrics to record the number of hbase-checksum verification
failures.
REVISION DETAIL
https://reviews.facebook.net/D1521
AFFECTED FILES
src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
src/test/java/org/apache/hadoop/hbase/regionserver/CreateRandomStoreFile.java
src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestCloseRegionHandler.java
src/test/java/org/apache/hadoop/hbase/regionserver/HFileReadWriteTest.java
src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWithBloomError.java
src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java
src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileReaderV1.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java
src/test/java/org/apache/hadoop/hbase/util/MockRegionServerServices.java
src/main/java/org/apache/hadoop/hbase/HConstants.java
src/main/java/org/apache/hadoop/hbase/util/ChecksumType.java
src/main/java/org/apache/hadoop/hbase/util/HFileSystem.java
src/main/java/org/apache/hadoop/hbase/util/ChecksumByteArrayOutputStream.java
src/main/java/org/apache/hadoop/hbase/util/CompoundBloomFilter.java
src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java
src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java
src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
src/main/java/org/apache/hadoop/hbase/io/hfile/ChecksumUtil.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV1.java
src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV1.java
> support checksums in HBase block cache
> --------------------------------------
>
> Key: HBASE-5074
> URL: https://issues.apache.org/jira/browse/HBASE-5074
> Project: HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Attachments: D1521.1.patch, D1521.1.patch, D1521.2.patch,
> D1521.2.patch, D1521.3.patch, D1521.3.patch
>
>
> The current implementation of HDFS stores the data in one block file and the
> metadata (checksum) in another block file. This means that every read into the
> HBase block cache actually consumes two disk iops, one to the datafile and
> one to the checksum file. This is a major problem for scaling HBase, because
> HBase is usually bottlenecked on the number of random disk iops that the
> storage-hardware offers.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira