[
https://issues.apache.org/jira/browse/HADOOP-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671334#action_12671334
]
dhruba borthakur commented on HADOOP-5133:
------------------------------------------
> if the new replica's length is greater than the default block length or
> smaller than
Just to be more explicit: the "default block length" referred to here is the
preferredBlockSize for this file. It does not refer to the default block
size of the filesystem.
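For illustration, a minimal sketch of the check being described, assuming
hypothetical names (longerThanPreferred, perFilePreferredBlockSize); this is
not the actual FSNamesystem code:

class BlockLengthCheck {
  // Compares a reported replica length against the file's own preferred
  // block size; the filesystem-wide default is deliberately unused, since it
  // is not the value this discussion is about.
  static boolean longerThanPreferred(long reportedReplicaLength,
                                     long perFilePreferredBlockSize,
                                     long filesystemDefaultBlockSize) {
    return reportedReplicaLength > perFilePreferredBlockSize;
  }
}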
> FSNameSystem#addStoredBlock does not handle inconsistent block length
> correctly
> -------------------------------------------------------------------------------
>
> Key: HADOOP-5133
> URL: https://issues.apache.org/jira/browse/HADOOP-5133
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.18.2
> Reporter: Hairong Kuang
> Priority: Blocker
> Fix For: 0.19.1
>
>
> Currently the NameNode treats either the new replica or the existing
> replicas as corrupt if the new replica's length is inconsistent with the
> NN-recorded block length. The correct behavior should be:
> 1. For a block that is not under construction, the new replica should be
> marked as corrupt if its length is inconsistent (whether shorter or longer)
> with the NN-recorded block length;
> 2. For a block that is under construction, if the new replica's length is
> shorter than the NN-recorded block length, the new replica could be marked
> as corrupt; if the new replica's length is longer, the NN should update its
> recorded block length, but it should not mark existing replicas as corrupt.
> This is because the NN-recorded length of an under-construction block does
> not accurately match the block length on the datanode disk, so the NN should
> not judge an under-construction replica to be corrupt based on that
> inaccurate information: its recorded block length.
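For illustration, a rough sketch of the two-case policy described above,
assuming hypothetical names (decide, recordedLength, reportedLength); this is
not the actual HADOOP-5133 patch:

class AddStoredBlockPolicy {
  enum Action { ACCEPT, MARK_NEW_REPLICA_CORRUPT, UPDATE_RECORDED_LENGTH }

  static Action decide(boolean underConstruction,
                       long recordedLength,   // block length the NN currently has
                       long reportedLength) { // replica length reported by the datanode
    if (reportedLength == recordedLength) {
      return Action.ACCEPT;
    }
    if (!underConstruction) {
      // Case 1: finalized block -- any mismatch, shorter or longer, marks the
      // new replica (and only the new replica) as corrupt.
      return Action.MARK_NEW_REPLICA_CORRUPT;
    }
    // Case 2: block under construction -- the NN-recorded length is only an
    // estimate, so existing replicas are never condemned on its basis.
    return (reportedLength < recordedLength)
        ? Action.MARK_NEW_REPLICA_CORRUPT   // shorter than what the NN has seen
        : Action.UPDATE_RECORDED_LENGTH;    // longer: adopt the datanode's length
  }
}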
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.