[
https://issues.apache.org/jira/browse/HBASE-26780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762398#comment-17762398
]
Yutong Xiao commented on HBASE-26780:
-------------------------------------
[~ndimiduk] hi, by the time we received feedback on the issue, the problematic
file had already been compacted away, so the problem could not be reproduced.
We therefore only reviewed the related HBase code. When we hit the issue, B was
33, which is exactly the header size with checksum used in
verifyOnDiskSizeMatchesHeader#getOnDiskSizeWithHeader#headerSize. That means
the actual block size was missing. As in my MR, we now retry reading the header
once more. We have deployed the changes from my MR in production, and so far it
looks good. If it is actually corruption in the HDFS block, we should hit it
again.
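To make the 33 concrete: this is an illustrative sketch (not the actual HBase source) of the HFile v2/v3 block header field arithmetic. With checksum support, the header is 24 bytes of base fields plus 9 bytes of checksum fields, i.e. 33 bytes total; an onDiskSizeWithHeader of exactly 33 therefore suggests the payload size was never added in. The field names and sizes below follow the HFile block format; the class and method names are hypothetical.

```java
// Sketch of the HFile block header size arithmetic (field sizes in bytes).
public class HeaderSizeSketch {
    static final int MAGIC = 8;               // block type magic
    static final int ON_DISK_SIZE = 4;        // onDiskSizeWithoutHeader
    static final int UNCOMPRESSED_SIZE = 4;   // uncompressedSizeWithoutHeader
    static final int PREV_OFFSET = 8;         // prevBlockOffset
    static final int CHECKSUM_TYPE = 1;       // checksum type byte
    static final int BYTES_PER_CHECKSUM = 4;  // bytesPerChecksum
    static final int ON_DISK_DATA_SIZE = 4;   // onDiskDataSizeWithHeader

    static int headerSize(boolean useChecksum) {
        int base = MAGIC + ON_DISK_SIZE + UNCOMPRESSED_SIZE + PREV_OFFSET; // 24
        return useChecksum
                ? base + CHECKSUM_TYPE + BYTES_PER_CHECKSUM + ON_DISK_DATA_SIZE // 33
                : base;
    }

    public static void main(String[] args) {
        System.out.println(headerSize(true));  // 33: header size with checksums
        System.out.println(headerSize(false)); // 24: header size without checksums
    }
}
```

So a passed-in onDiskSizeWithHeader equal to headerSize(true) carries no room for block data, which is consistent with the "actual block size is missing" reading above.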
But in [~cribbee]'s log the error value is not the same, so the cause may be
different.
FYI, we are running 1.4.12 plus some customised patches.
> HFileBlock.verifyOnDiskSizeMatchesHeader throw IOException: Passed in
> onDiskSizeWithHeader= A != B
> --------------------------------------------------------------------------------------------------
>
> Key: HBASE-26780
> URL: https://issues.apache.org/jira/browse/HBASE-26780
> Project: HBase
> Issue Type: Bug
> Components: BlockCache
> Affects Versions: 2.2.2
> Reporter: yuzhang
> Priority: Major
> Attachments: IOException.png
>
>
> When I scan a region, HBase throws IOException: Passed in
> onDiskSizeWithHeader= A != B
> The HFile mentioned in the error message can be accessed normally.
> It recovers after a move region command. I guess that the onDiskSizeWithHeader
> of the HFileBlock had been changed, and the RS got the correct BlockHeader
> info after the region was reopened.
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)