[ https://issues.apache.org/jira/browse/HBASE-15908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15305237#comment-15305237 ]

Appy commented on HBASE-15908:
------------------------------

Here's the patch. For some reason, I am not getting the option to attach files.
{noformat}
From d1c83054e72ec42f4ec499a15bb3287660600d76 Mon Sep 17 00:00:00 2001
From: Apekshit <[email protected]>
Date: Sat, 28 May 2016 01:09:01 -0700
Subject: [PATCH] HBASE-15908 Pass mutable ByteBuffer to validateChecksum() in
 HFileBlock because in downstream code, Hadoop's DataChecksum class requires
 either a direct byte-buffer or a mutable byte-buffer. (Apekshit)

Change-Id: I988b18c40f5aa7a3670eb5bf4ece8ac0eca225e2
---
 .../src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java      | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
index efc9a30..14a5cd1 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
@@ -1712,7 +1712,7 @@ public class HFileBlock implements Cacheable {
       ByteBuffer onDiskBlockByteBuffer = ByteBuffer.wrap(onDiskBlock, 0, onDiskSizeWithHeader);
       // Verify checksum of the data before using it for building HFileBlock.
       if (verifyChecksum &&
-          !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) {
+          !validateChecksum(offset, onDiskBlockByteBuffer, hdrSize)) {
         return null;
       }
       // The onDiskBlock will become the headerAndDataBuffer for this block.
--
2.3.2 (Apple Git-55)
{noformat}
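
For anyone wondering why dropping the read-only view is enough: a read-only wrapper of a heap buffer no longer reports its backing array, so Hadoop's DataChecksum#verifyChunkedSums skips the byte[] branch and lands on the native ByteBuffer path, which only accepts direct buffers. A minimal JDK-only sketch of that behavior (class and variable names here are just for illustration, not from the patch):
{noformat}
import java.nio.ByteBuffer;

public class ReadOnlyBufferCheck {
  public static void main(String[] args) {
    byte[] onDiskBlock = new byte[64];

    // Mutable heap wrapper: the backing array is accessible.
    ByteBuffer mutable = ByteBuffer.wrap(onDiskBlock, 0, onDiskBlock.length);
    System.out.println(mutable.hasArray());                     // true

    // Read-only view of the same buffer: hasArray() is false, so the
    // data.hasArray() && checksums.hasArray() branch in verifyChunkedSums
    // is skipped and the direct-buffer-only native path is attempted.
    System.out.println(mutable.asReadOnlyBuffer().hasArray());  // false
  }
}
{noformat}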

> Checksum verification is broken
> -------------------------------
>
>                 Key: HBASE-15908
>                 URL: https://issues.apache.org/jira/browse/HBASE-15908
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 1.3.0
>            Reporter: Mikhail Antonov
>            Assignee: Mikhail Antonov
>            Priority: Critical
>
> It looks like HBASE-11625 (cc [~stack], [~appy]) has broken checksum 
> verification? I'm seeing the following on my cluster (1.3.0, Hadoop 2.7):
> Caused by: org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file <file path>
>       at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:497)
>       at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:525)
>       at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.<init>(StoreFile.java:1135)
>       at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:259)
>       at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:427)
>       at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:528)
>       at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:518)
>       at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:652)
>       at org.apache.hadoop.hbase.regionserver.HStore.access$000(HStore.java:117)
>       at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:519)
>       at org.apache.hadoop.hbase.regionserver.HStore$1.call(HStore.java:516)
>       ... 6 more
> Caused by: java.lang.IllegalArgumentException: input ByteBuffers must be direct buffers
>       at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSums(Native Method)
>       at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:59)
>       at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:301)
>       at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateChecksum(ChecksumUtil.java:120)
>       at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateChecksum(HFileBlock.java:1785)
>       at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1728)
>       at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1558)
>       at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlock(HFileBlock.java:1397)
>       at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader$1.nextBlockWithBlockType(HFileBlock.java:1405)
>       at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:151)
>       at org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:78)
>       at org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:487)
>       ... 16 more
> Prior to this change we wouldn't use native crc32 checksum verification, because 
> in Hadoop's DataChecksum#verifyChunkedSums we would go down this codepath:
> if (data.hasArray() && checksums.hasArray()) {
>   <check native checksum, but using byte[] instead of byte buffers>
> }
> So we were fine. However, now we drop below that and try to use a slightly 
> different variant of native crc32 (if one is available) that takes ByteBuffer 
> instead of byte[], and it expects a DirectByteBuffer, not a heap ByteBuffer. 
> I think the easiest fix, working on all Hadoop versions, would be to remove 
> the asReadOnlyBuffer() conversion here:
> !validateChecksum(offset, onDiskBlockByteBuffer.asReadOnlyBuffer(), hdrSize)) {
> I don't see why we need it. Let me test.
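
For reference, a minimal sketch of the Hadoop DataChecksum call pattern described above, assuming the ByteBuffer overloads of calculateChunkedSums/verifyChunkedSums available in Hadoop 2.x; the class name, chunk size, and checksum type are illustrative assumptions, not taken from HBase:
{noformat}
import java.nio.ByteBuffer;
import org.apache.hadoop.util.DataChecksum;

public class ChecksumPathSketch {
  public static void main(String[] args) throws Exception {
    // One 512-byte chunk with CRC32C, chosen arbitrarily for the example.
    DataChecksum checksum =
        DataChecksum.newDataChecksum(DataChecksum.Type.CRC32C, 512);

    byte[] data = new byte[512];                         // stand-in for the block payload
    byte[] sums = new byte[checksum.getChecksumSize()];  // 4 bytes per chunk

    // Compute and then verify checksums using mutable heap buffers; both
    // buffers report hasArray() == true, so verification stays on the
    // byte[]-based path whether or not native CRC32 is loaded.
    checksum.calculateChunkedSums(ByteBuffer.wrap(data), ByteBuffer.wrap(sums));
    checksum.verifyChunkedSums(ByteBuffer.wrap(data), ByteBuffer.wrap(sums), "sketch-file", 0);

    // A read-only view would hide the backing array and push verification onto
    // the ByteBuffer-based native path, which rejects non-direct buffers with
    // the IllegalArgumentException seen in the stack trace above:
    // checksum.verifyChunkedSums(ByteBuffer.wrap(data).asReadOnlyBuffer(),
    //     ByteBuffer.wrap(sums), "sketch-file", 0);
  }
}
{noformat}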


