Corrupted blocks get deleted but not replicated
-----------------------------------------------

                 Key: HADOOP-1349
                 URL: https://issues.apache.org/jira/browse/HADOOP-1349
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
            Reporter: Hairong Kuang
             Fix For: 0.14.0


When I tested the patch for HADOOP-1345 on a two-node dfs cluster, I saw that 
dfs correctly deletes the corrupted replica and successfully retries reading 
from the other, correct replica, but the block does not get replicated. The 
block remains with only one replica until the next block report comes in.

In my test case, since the dfs cluster has only two datanodes, the target of 
replication is the same as the target of block invalidation. After poking 
through the logs, I found that the namenode sent the replication request 
before the block invalidation request.

This happens because the namenode does not handle block invalidation well. In 
FSNamesystem.invalidateBlock, it first puts the invalidate request in a queue 
and then immediately removes the replica from its state, which triggers 
choosing a replication target for the now under-replicated block. When pending 
requests are sent to the target datanode in the reply to a heartbeat message, 
replication requests have higher priority than invalidate requests, so the 
datanode is asked to re-replicate the block before it is told to delete its 
corrupted copy.
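To make the ordering concrete, below is a minimal toy model of the two 
pending-request queues and the heartbeat reply. All names here (ToyNamenode, 
the queue fields, blk_1349) are illustrative assumptions, not the actual 
FSNamesystem code:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Toy model of the namenode's pending-request queues for one datanode.
    // Names are illustrative only; this is not the real FSNamesystem API.
    class ToyNamenode {
        final Deque<String> replicationQueue = new ArrayDeque<>();
        final Deque<String> invalidateQueue = new ArrayDeque<>();

        // Buggy ordering: queue the invalidate, then immediately remove the
        // replica from namenode state. The block instantly looks
        // under-replicated, so a replication request is queued too; on a
        // two-node cluster the chosen target is the same datanode that
        // still holds the corrupted replica.
        void invalidateBlock(String block, String datanode) {
            invalidateQueue.add("invalidate " + block + " on " + datanode);
            // removing the replica from state triggers re-replication:
            replicationQueue.add("replicate " + block + " to " + datanode);
        }

        // Next command handed back in a heartbeat reply: replication
        // requests are drained with higher priority than invalidates.
        String nextCommand() {
            if (!replicationQueue.isEmpty()) return replicationQueue.poll();
            return invalidateQueue.poll();
        }
    }

    public class InvalidateOrderingDemo {
        public static void main(String[] args) {
            ToyNamenode nn = new ToyNamenode();
            nn.invalidateBlock("blk_1349", "datanode2");
            System.out.println(nn.nextCommand()); // replicate ... (first)
            System.out.println(nn.nextCommand()); // invalidate ... (second)
        }
    }

Running this prints the replicate command before the invalidate command, which 
matches the ordering seen in the logs.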

This problem could be solved if the namenode removed an invalidated replica 
from its state only after the invalidate request has been sent to the 
datanode.
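Continuing the toy model above, a sketch of that fix defers the state change 
(and with it the re-replication trigger) until the invalidate command is 
handed out in a heartbeat reply; again, names and structure are illustrative 
only:

    // Variant of ToyNamenode above sketching the proposed fix: the replica
    // stays in namenode state, so no replication request is queued, until
    // the invalidate request actually goes out in a heartbeat reply.
    class FixedToyNamenode extends ToyNamenode {
        @Override
        void invalidateBlock(String block, String datanode) {
            invalidateQueue.add("invalidate " + block + " on " + datanode);
            // No state change yet, so no premature re-replication request.
        }

        @Override
        String nextCommand() {
            if (!replicationQueue.isEmpty()) return replicationQueue.poll();
            String cmd = invalidateQueue.poll();
            if (cmd != null) {
                // Only now is the replica dropped from namenode state; the
                // block becomes under-replicated after the delete order is
                // on its way, so the delete reaches the datanode first.
                replicationQueue.add("replicate the block to a live node");
            }
            return cmd;
        }
    }

With this ordering the datanode always receives the invalidate before any 
replication request for the same block.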

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
