[
https://issues.apache.org/jira/browse/HBASE-26780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17754506#comment-17754506
]
Yutong Xiao commented on HBASE-26780:
-------------------------------------
We also met this issue recently. The cause is that the block size read from the
cached header in FSReaderImpl is incorrect (though it is not clear why the
header is wrong, since the HDFS client raises no IOException). As the HFile
itself is not corrupted, re-reading the header from HDFS should avoid the
issue. I raised an MR to re-read the header when the cached header does not
match the block size.
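As a rough illustration of that fallback idea (this is a simplified sketch, not the actual HBase patch; the class, method names, and 4-byte header layout here are all hypothetical), the reader trusts the cached header only when it agrees with the size passed in, and otherwise goes back to the underlying file:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the fallback: when the size decoded from a cached
// header does not match the expected onDiskSizeWithHeader, re-read the
// header from the underlying file instead of failing with
// "Passed in onDiskSizeWithHeader= A != B".
public class HeaderFallbackSketch {

    // Simplified stand-in for the on-disk block header: a 4-byte size field.
    static int readHeaderFromDisk(ByteBuffer disk) {
        return disk.getInt(0); // the file itself is authoritative
    }

    static int resolveOnDiskSize(Integer cachedHeaderSize, int expectedSize,
                                 ByteBuffer disk) {
        if (cachedHeaderSize != null && cachedHeaderSize == expectedSize) {
            return cachedHeaderSize; // fast path: cached header agrees
        }
        // Cached header is stale or corrupt: fall back to re-reading it.
        return readHeaderFromDisk(disk);
    }

    public static void main(String[] args) {
        // File whose true block size is 1234, but whose cached header says 999.
        ByteBuffer disk = ByteBuffer.allocate(4).putInt(0, 1234);
        int size = resolveOnDiskSize(999, 1234, disk);
        System.out.println(size); // falls back to the on-disk value, 1234
    }
}
```

The point of the shape above is that a stale cache entry only costs one extra header read rather than surfacing an IOException to the scan.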
> HFileBlock.verifyOnDiskSizeMatchesHeader throw IOException: Passed in
> onDiskSizeWithHeader= A != B
> --------------------------------------------------------------------------------------------------
>
> Key: HBASE-26780
> URL: https://issues.apache.org/jira/browse/HBASE-26780
> Project: HBase
> Issue Type: Bug
> Components: BlockCache
> Affects Versions: 2.2.2
> Reporter: yuzhang
> Priority: Major
> Attachments: IOException.png
>
>
> When I scan a region, HBase throws IOException: Passed in
> onDiskSizeWithHeader= A != B
> The HFile mentioned in the error message can be accessed normally.
> It recovers after a move region command. I guess the onDiskSizeWithHeader of
> the HFileBlock has been changed, and the RS gets the correct block header
> info after the region is reopened.
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)