[
https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081573#comment-15081573
]
Enis Soztutar commented on HBASE-11625:
---------------------------------------
bq. Yeah, it will be big problem when data is corrupted and HDFS checksum is
disabled. We need to enable it in HFileSystem as I mentioned above
This is not how the logic works right now. Note that when
{{useHBaseChecksum=true}} we are doing HBase-level checksums, so the file
system object used for reads, {{noChecksumFs}}, does not do its own
checksumming. I think there is no way to dynamically toggle checksumming on a
{{FileSystem}} object, which is why we need to keep two FS objects around.
The actual logic for falling back to HDFS checksums is in the
{{FSDataInputStreamWrapper.prepareForBlockReader()}} and
{{fallbackToFsChecksum()}} methods.
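A minimal sketch of the fallback idea described above. This is not the actual HBase code: the class, field, and method names here ({{ChecksumFallbackSketch}}, {{readBlock}}, the return strings) are all hypothetical, and it only models the control flow of detecting an HBase-level checksum failure and switching the reader over to the checksum-verifying filesystem, as {{fallbackToFsChecksum()}} does.

```java
import java.util.Arrays;
import java.util.zip.CRC32;

public class ChecksumFallbackSketch {
    // While true, reads go through the non-checksumming FS (noChecksumFs) and
    // checksums are validated at the HBase level; on failure we flip this flag,
    // roughly mirroring FSDataInputStreamWrapper.fallbackToFsChecksum().
    boolean useHBaseChecksum = true;

    static long crc(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data);
        return c.getValue();
    }

    // Simulates reading one block: payload plus the checksum stored at write time.
    String readBlock(byte[] payload, long storedCrc) {
        if (useHBaseChecksum && crc(payload) != storedCrc) {
            // HBase-level validation failed: fall back permanently to the
            // checksum-verifying FS for this reader.
            useHBaseChecksum = false;
            return "fallback-to-hdfs-checksum";
        }
        return "ok";
    }

    public static void main(String[] args) {
        ChecksumFallbackSketch reader = new ChecksumFallbackSketch();
        byte[] good = "block".getBytes();
        long stored = crc(good);
        System.out.println(reader.readBlock(good, stored));

        byte[] corrupt = Arrays.copyOf(good, good.length);
        corrupt[0] ^= 1; // flip one bit to simulate corruption
        System.out.println(reader.readBlock(corrupt, stored));
        System.out.println(reader.useHBaseChecksum); // now false
    }
}
```

The key design point the sketch captures is that the switch is a persistent flag flip on the reader, not a per-read retry, because (as noted above) a {{FileSystem}} object cannot toggle checksumming dynamically.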
> Reading datablock throws "Invalid HFile block magic" and can not switch to
> hdfs checksum
> -----------------------------------------------------------------------------------------
>
> Key: HBASE-11625
> URL: https://issues.apache.org/jira/browse/HBASE-11625
> Project: HBase
> Issue Type: Bug
> Components: HFile
> Affects Versions: 0.94.21, 0.98.4, 0.98.5, 1.0.1.1, 1.0.3
> Reporter: qian wang
> Assignee: Pankaj Kumar
> Fix For: 2.0.0
>
> Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz, HBASE-11625.patch
>
>
> When using HBase checksums, readBlockDataInternal() in HFileBlock.java can
> encounter file corruption, but it only switches to the HDFS-checksum input
> stream once validateBlockChecksum() runs. If the data block's header is
> corrupted, the {{b = new HFileBlock()}} call throws "Invalid HFile block
> magic" before that point, and the RPC call fails.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)