[
https://issues.apache.org/jira/browse/HBASE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158395#comment-14158395
]
Enis Soztutar commented on HBASE-11625:
---------------------------------------
Nick pointed me to this issue. I have been trying to nail down a test
failure on Windows (TestHFileBlock#testConcurrentReading) which fails with the
same stack trace. I can reproduce the failure only on Windows, but it occurs
with jdk6, jdk7u45, and jdk7u67, and with Hadoop versions 2.2.0, 2.4.0, 2.5.0,
and 2.6.0-SNAPSHOT.
The test writes a file containing random HFileBlocks and reads it concurrently
from multiple threads. Once in a while, the data returned by seek() + read()
does not match what is actually in the file (I have verified the offsets
against the actual file multiple times). I think we are hitting a rare edge case.
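As background on why concurrent seek() + read() is a hazard in the first place: seek() and read() are two separate calls on a shared stream, so another thread can move the file pointer between them, whereas a positioned read passes the offset atomically with the read. A minimal sketch of the safe pattern, using java.nio's FileChannel positioned read (the class and method names here are illustrative, not HBase's actual reader code):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PreadSketch {
    // Positioned read: FileChannel.read(buf, position) does not touch the
    // channel's own file pointer, so many threads can call it concurrently
    // on one shared channel without racing each other.
    static byte[] pread(FileChannel ch, long offset, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(len);
        while (buf.hasRemaining()) {
            int n = ch.read(buf, offset + buf.position());
            if (n < 0) throw new IOException("EOF at " + (offset + buf.position()));
        }
        return buf.array();
    }

    public static void main(String[] args) throws Exception {
        // Write a small file with known contents: byte i at offset i.
        Path f = Files.createTempFile("blocks", ".bin");
        byte[] data = new byte[256];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        Files.write(f, data);
        try (FileChannel ch = FileChannel.open(f, StandardOpenOption.READ)) {
            byte[] b = pread(ch, 100, 4);
            System.out.println(b[0] + "," + b[1] + "," + b[2] + "," + b[3]);
        }
        Files.delete(f);
    }
}
```

By contrast, a shared stream where each thread does seek(offset) followed by read() can interleave the two steps across threads and return bytes from a different offset, which is exactly the symptom of data not matching the file.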
The other interesting bit is that the test only starts failing after 0.98.3;
I was not able to get earlier versions to fail. However, I could not bisect to
the offending commit because the failure is hard to reproduce.
> Reading datablock throws "Invalid HFile block magic" and can not switch to
> hdfs checksum
> -----------------------------------------------------------------------------------------
>
> Key: HBASE-11625
> URL: https://issues.apache.org/jira/browse/HBASE-11625
> Project: HBase
> Issue Type: Bug
> Components: HFile
> Affects Versions: 0.94.21, 0.98.4, 0.98.5
> Reporter: qian wang
> Attachments: 2711de1fdf73419d9f8afc6a8b86ce64.gz
>
>
> When using HBase checksums, readBlockDataInternal() in HFileBlock.java can
> encounter file corruption, but it can only switch to the HDFS-checksum input
> stream once validateBlockChecksum() is reached. If the data block's header is
> corrupted, the earlier b = new HFileBlock() call throws "Invalid HFile block
> magic" and the RPC call fails before the fallback can happen.
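The control flow described above can be sketched in simplified form. This is a hypothetical model, not HBase's actual HFileBlock code: the checksum here is a trivial parity byte, and reReadWithHdfsChecksum() stands in for re-reading the block through an HDFS-checksummed stream. The point is that sanity-checking the block magic before parsing the header lets a corrupt header be treated like a checksum failure, so the read falls back instead of throwing to the RPC caller:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ChecksumFallbackSketch {
    static final byte[] MAGIC = "DATABLK*".getBytes(StandardCharsets.US_ASCII);

    // Stand-in for validateBlockChecksum(): last byte is an XOR parity
    // of the preceding bytes.
    static boolean validChecksum(byte[] block) {
        byte sum = 0;
        for (int i = 0; i < block.length - 1; i++) sum ^= block[i];
        return sum == block[block.length - 1];
    }

    static String readBlock(byte[] raw, boolean hbaseChecksum) {
        if (hbaseChecksum) {
            // Check the magic *before* constructing the block, so a corrupt
            // header triggers the HDFS-checksum fallback rather than an
            // "Invalid HFile block magic" exception.
            byte[] magic = Arrays.copyOf(raw, MAGIC.length);
            if (!Arrays.equals(magic, MAGIC) || !validChecksum(raw)) {
                return readBlock(reReadWithHdfsChecksum(raw), false);
            }
        }
        byte[] magic = Arrays.copyOf(raw, MAGIC.length);
        if (!Arrays.equals(magic, MAGIC)) {
            throw new IllegalStateException("Invalid HFile block magic");
        }
        return "ok";
    }

    // Stand-in for re-reading through an HDFS-checksum-verified stream,
    // which yields uncorrupted bytes.
    static byte[] reReadWithHdfsChecksum(byte[] corrupt) {
        byte[] good = Arrays.copyOf(MAGIC, MAGIC.length + 1);
        byte sum = 0;
        for (int i = 0; i < good.length - 1; i++) sum ^= good[i];
        good[good.length - 1] = sum;
        return good;
    }

    public static void main(String[] args) {
        byte[] corruptHeader = "XXXXXXXX\0".getBytes(StandardCharsets.US_ASCII);
        System.out.println(readBlock(corruptHeader, true));
    }
}
```

In the buggy flow the issue describes, the magic check happens only inside header parsing after the fallback decision point, so a corrupted header aborts the read before the switch to HDFS checksums can occur.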
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)