[
https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15080650#comment-15080650
]
Qianxi Zhang commented on HBASE-11625:
--------------------------------------
I think this is a serious problem. [~tedyu] [~apurtell]
When HBase checksums are in use and the block data on disk is corrupt, the line "b = new
HFileBlock(headerBuf, fileContext.isUseHBaseChecksum());" throws an exception before the
checksum check "if (verifyChecksum && !validateBlockChecksum(b, onDiskBlock, hdrSize))"
is ever reached.
{code}
        // Parsing the header here throws "Invalid HFile block magic" if the
        // header bytes are corrupt, before any checksum validation runs.
        b = new HFileBlock(headerBuf, fileContext.isUseHBaseChecksum());
        onDiskBlock = new byte[b.getOnDiskSizeWithHeader() + hdrSize];
        // headerBuf is HBB
        System.arraycopy(headerBuf.array(), headerBuf.arrayOffset(),
            onDiskBlock, 0, hdrSize);
        nextBlockOnDiskSize =
            readAtOffset(is, onDiskBlock, hdrSize, b.getOnDiskSizeWithHeader()
                - hdrSize, true, offset + hdrSize, pread);
        onDiskSizeWithHeader = b.onDiskSizeWithoutHeader + hdrSize;
      }
      if (!fileContext.isCompressedOrEncrypted()) {
        b.assumeUncompressed();
      }
      // Only this checksum-mismatch path returns null and lets the caller
      // fall back to HDFS checksums.
      if (verifyChecksum && !validateBlockChecksum(b, onDiskBlock, hdrSize)) {
        return null; // checksum mismatch
      }
{code}
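To make the failure mode concrete, here is a minimal, self-contained sketch (class and
method names are illustrative only, not the real HFileBlock code): if the header magic
were sanity-checked, or the constructor's exception caught, before validateBlockChecksum(),
a corrupt header could return null and let the caller fall back to HDFS checksums the same
way a checksum mismatch does.
{code}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative sketch only: not the actual HBase code.
public class HeaderMagicCheckSketch {

  // 8-byte magic that prefixes HBase data blocks ("DATABLK*").
  private static final byte[] DATA_MAGIC = "DATABLK*".getBytes(StandardCharsets.US_ASCII);

  /**
   * Returns null when the header looks corrupt, mirroring the way a
   * validateBlockChecksum() failure returns null so the caller can retry
   * the read with HDFS checksum verification instead of failing the RPC.
   */
  static ByteBuffer parseHeaderOrSignalFallback(ByteBuffer headerBuf) {
    byte[] magic = new byte[DATA_MAGIC.length];
    headerBuf.duplicate().get(magic);
    if (!Arrays.equals(magic, DATA_MAGIC)) {
      // Corrupt header: signal "fall back to HDFS checksum" instead of
      // throwing "Invalid HFile block magic" and killing the RPC.
      return null;
    }
    return headerBuf; // header looks sane, proceed to build the block
  }

  public static void main(String[] args) {
    ByteBuffer good = ByteBuffer.wrap("DATABLK*....rest-of-header".getBytes(StandardCharsets.US_ASCII));
    ByteBuffer bad  = ByteBuffer.wrap("GARBAGE!....rest-of-header".getBytes(StandardCharsets.US_ASCII));
    System.out.println(parseHeaderOrSignalFallback(good) != null); // true
    System.out.println(parseHeaderOrSignalFallback(bad) != null);  // false -> caller retries with HDFS checksum
  }
}
{code}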
> Reading datablock throws "Invalid HFile block magic" and can not switch to
> hdfs checksum
> -----------------------------------------------------------------------------------------
>
> Key: HBASE-11625
> URL: https://issues.apache.org/jira/browse/HBASE-11625
> Project: HBase
> Issue Type: Bug
> Components: HFile
> Affects Versions: 0.94.21, 0.98.4, 0.98.5
> Reporter: qian wang
> Assignee: Pankaj Kumar
> Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz
>
>
> When HBase checksums are in use, readBlockDataInternal() in HFileBlock.java can hit a
> corrupt block on disk, but it only switches to the HDFS-checksum input stream once it
> reaches validateBlockChecksum(). If the data block's header is corrupt, the earlier
> "b = new HFileBlock()" constructor throws "Invalid HFile block magic" and the RPC call
> fails instead of retrying with HDFS checksums (see the sketch after this description).
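For context, a minimal sketch of the fallback pattern the description refers to (names
are illustrative and simplified, not the actual readBlockData() code): the caller retries
the read with HDFS checksums only when the first attempt returns null, so an exception
thrown while parsing the header never reaches that retry.
{code}
import java.io.IOException;

// Illustrative sketch of the retry-with-HDFS-checksum pattern; not the real HBase code.
public class ChecksumFallbackSketch {

  /** Simulates readBlockDataInternal(): returns null on checksum mismatch, throws on a bad header. */
  interface BlockReader {
    byte[] read(boolean useHBaseChecksum) throws IOException;
  }

  static byte[] readBlockData(BlockReader reader) throws IOException {
    // First attempt: verify checksums in HBase itself.
    byte[] block = reader.read(true);
    if (block == null) {
      // Only a null return (checksum mismatch) reaches this fallback;
      // an exception from header parsing propagates and fails the RPC.
      block = reader.read(false); // retry, relying on HDFS checksums
    }
    return block;
  }

  public static void main(String[] args) {
    // Corrupt header: the exception escapes before the fallback can help.
    BlockReader corruptHeader = useHBaseChecksum -> {
      throw new IOException("Invalid HFile block magic");
    };
    try {
      readBlockData(corruptHeader);
    } catch (IOException e) {
      System.out.println("RPC would fail: " + e.getMessage());
    }
  }
}
{code}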
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)