[
https://issues.apache.org/jira/browse/HADOOP-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12663864#action_12663864
]
dhruba borthakur commented on HADOOP-4848:
------------------------------------------
I think this check should occur only for files that are not under construction.
For blocks that are being actively written to, it is better for lease recovery
to decide which blocks to delete and which to keep. There is more discussion on
this in HADOOP-5027.
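
To make this concrete, here is a rough sketch of the behavior I have in mind. The
class and method names below (StoredBlock, shouldMarkCorrupt) are illustrative
stand-ins, not the actual FSNamesystem/blockMap code:

{code:java}
// Hypothetical stand-ins for the namenode structures involved; the real
// classes live under org.apache.hadoop.hdfs.server.namenode and are not
// reproduced here.
class StoredBlock {
  final long numBytes;             // length recorded in the blockMap
  final boolean underConstruction; // true while the file is still being written

  StoredBlock(long numBytes, boolean underConstruction) {
    this.numBytes = numBytes;
    this.underConstruction = underConstruction;
  }
}

class BlockReportLengthCheck {
  /** Returns true when the reported replica should be marked corrupt. */
  static boolean shouldMarkCorrupt(StoredBlock stored, long reportedLength) {
    // Files under construction are skipped: replica lengths may legitimately
    // differ until lease recovery decides which replicas survive (HADOOP-5027).
    if (stored.underConstruction) {
      return false;
    }
    // For finalized blocks, a length that disagrees with the blockMap points
    // at a polluted replica (HADOOP-4810) or an on-disk block that was
    // accidentally truncated or enlarged.
    return reportedLength != stored.numBytes;
  }

  public static void main(String[] args) {
    StoredBlock finalized = new StoredBlock(64L * 1024 * 1024, false);
    StoredBlock open = new StoredBlock(1024, true);
    System.out.println(shouldMarkCorrupt(finalized, 1024)); // true: mismatch on a finalized block
    System.out.println(shouldMarkCorrupt(open, 4096));      // false: left to lease recovery
  }
}
{code}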
> NN should check a block's length even if the block is not a new block when
> processing a blockreport
> ---------------------------------------------------------------------------------------------------
>
> Key: HADOOP-4848
> URL: https://issues.apache.org/jira/browse/HADOOP-4848
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.18.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Priority: Blocker
> Fix For: 0.20.0
>
>
> If the block length does not match the one in the blockMap, we should mark
> the block as corrupted. This could help clear the polluted replicas caused
> by HADOOP-4810 and also help detect cases where an on-disk block has been
> accidentally truncated or enlarged by hand.