[ https://issues.apache.org/jira/browse/HADOOP-5133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12668192#action_12668192 ]

Hairong Kuang commented on HADOOP-5133:
---------------------------------------

Yes, in this case the problem is caused by recovery from a failed write 
pipeline. 

> Deciding which replica is correct should be based on properties other than 
> size. 
Yes, I agree. But in some cases we can decide which one is corrupt. For a 
finalized block (NN has received blockReceived), if the reported block length 
differs from the NN recorded length, the reported block must be corrupt. For a 
block that is being written (calling addStoredBlock through blockReceived), if 
the reported length is shorter than the NN recorded one, it must be corrupt 
too. If it is longer, it is hard to decide which replicas are corrupt, because 
the NN recorded length does not accurately match the length of the replicas on 
the DataNode disks.
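The two cases above can be sketched as a small decision helper. This is only 
an illustration of the rule being discussed, not Hadoop's actual FSNamesystem 
code; the class and method names (ReplicaLengthCheck, BlockState, 
isReplicaCorrupt, updatedRecordedLength) are hypothetical.

```java
// Hypothetical sketch of the length-consistency rule from this discussion.
public class ReplicaLengthCheck {

    public enum BlockState { FINALIZED, UNDER_CONSTRUCTION }

    /**
     * Decide whether a reported replica should be marked corrupt.
     * Finalized block: any length mismatch means the reported replica
     * is corrupt. Under-construction block: only a shorter replica is
     * provably corrupt; a longer one may simply mean the NN's recorded
     * length is stale.
     */
    public static boolean isReplicaCorrupt(BlockState state,
                                           long recordedLen,
                                           long reportedLen) {
        if (state == BlockState.FINALIZED) {
            return reportedLen != recordedLen;
        }
        return reportedLen < recordedLen;
    }

    /**
     * For an under-construction block whose reported replica is longer,
     * the NN should advance its recorded length rather than mark the
     * existing replicas corrupt.
     */
    public static long updatedRecordedLength(BlockState state,
                                             long recordedLen,
                                             long reportedLen) {
        if (state == BlockState.UNDER_CONSTRUCTION
                && reportedLen > recordedLen) {
            return reportedLen;
        }
        return recordedLen;
    }
}
```

Under this sketch, a longer replica of an under-construction block is never 
judged corrupt; the NN just updates its own record.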

> FSNameSystem#addStoredBlock does not handle inconsistent block length 
> correctly
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-5133
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5133
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.2
>            Reporter: Hairong Kuang
>             Fix For: 0.19.1
>
>
> Currently the NameNode treats either the new replica or the existing 
> replicas as corrupt if the new replica's length is inconsistent with the NN 
> recorded block length. The correct behavior should be:
> 1. For a block that is not under construction, the new replica should be 
> marked as corrupt if its length is inconsistent (whether shorter or longer) 
> with the NN recorded block length;
> 2. For an under-construction block, if the new replica's length is shorter 
> than the NN recorded block length, the new replica could be marked as 
> corrupt; if the new replica's length is longer, the NN should update its 
> recorded block length, but it should not mark existing replicas as corrupt. 
> This is because the NN recorded length for an under-construction block does 
> not accurately match the block length on the datanode disk, so the NN should 
> not judge an under-construction replica to be corrupt based on that 
> inaccurate information.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.