[ https://issues.apache.org/jira/browse/HADOOP-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang reassigned HADOOP-5605:
-------------------------------------

    Assignee: Hairong Kuang

> All the replicas incorrectly got marked as corrupt.
> ---------------------------------------------------
>
>                 Key: HADOOP-5605
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5605
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.20.0
>            Reporter: Raghu Angadi
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.20.0
>
>         Attachments: reportBadBlock.patch
>
>
> The NameNode does not handle {{reportBadBlocks()}} properly. As a result, when a
> DataNode reports corruption (which happens only during a block transfer between
> two datanodes), further attempts to replicate the block end up marking all the
> replicas as corrupt!
> From the implementation, it looks like the NN incorrectly queues the block
> object received over RPC to the neededReplications queue instead of using the
> internal block object.
> I will include an actual example in the next comment.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
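The description above points at an object-identity bug: the Block received over RPC is equal (by id) to the NameNode's stored block, but it is a different object, and later bookkeeping that relies on identity goes wrong if the RPC copy is queued. The following is a minimal illustrative sketch, not the actual NameNode code; all class and method names (ReportBadBlockSketch, blocksMap, neededReplications, reportBadBlocksBuggy/Fixed) are hypothetical stand-ins for the pattern described in the issue.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the bug pattern described in HADOOP-5605:
// queuing the Block object received over RPC instead of the NameNode's
// own stored instance of that block.
public class ReportBadBlockSketch {

    // Minimal stand-in for a block identified by id. Equality is by id only,
    // so an RPC copy and the stored instance compare equal but are distinct objects.
    static final class Block {
        final long id;
        Block(long id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof Block && ((Block) o).id == id;
        }
        @Override public int hashCode() { return Long.hashCode(id); }
    }

    // Stand-in for the NameNode's internal blocks map.
    static final Map<Block, Block> blocksMap = new HashMap<>();
    // Stand-in for the neededReplications queue, whose later bookkeeping
    // depends on holding the internal instance.
    static final Set<Block> neededReplications = new HashSet<>();

    // Buggy pattern: queue the RPC object directly.
    static void reportBadBlocksBuggy(Block rpcBlock) {
        neededReplications.add(rpcBlock);
    }

    // Fixed pattern: resolve the RPC copy to the stored internal instance first.
    static void reportBadBlocksFixed(Block rpcBlock) {
        Block stored = blocksMap.get(rpcBlock);
        if (stored != null) {
            neededReplications.add(stored);
        }
    }

    public static void main(String[] args) {
        Block internal = new Block(42L);
        blocksMap.put(internal, internal);

        Block fromRpc = new Block(42L);   // equal by id, but a different object

        reportBadBlocksBuggy(fromRpc);
        Block queued = neededReplications.iterator().next();
        System.out.println("buggy queues internal instance: " + (queued == internal));

        neededReplications.clear();
        reportBadBlocksFixed(fromRpc);
        queued = neededReplications.iterator().next();
        System.out.println("fixed queues internal instance: " + (queued == internal));
    }
}
```

Running the sketch prints `false` for the buggy path and `true` for the fixed path: both queued blocks are equal to the internal one, but only the fixed path queues the same object the NameNode's internal state refers to.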