[ https://issues.apache.org/jira/browse/HADOOP-5605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang updated HADOOP-5605:
----------------------------------

    Attachment: reportBadBlock1.patch

This patch additionally handles the case where the reported bad block does not belong to any file. In this case, the NN logs the information and puts the block into the invalidate-blocks queue.

> All the replicas incorrectly got marked as corrupt.
> ---------------------------------------------------
>
>                 Key: HADOOP-5605
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5605
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.20.0
>            Reporter: Raghu Angadi
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.20.0
>
>         Attachments: reportBadBlock.patch, reportBadBlock1.patch
>
>
> NameNode does not handle {{reportBadBlocks()}} properly. As a result, when a
> DataNode reports the corruption (only in the case of a block transfer between
> two datanodes), further attempts to replicate the block end up marking all
> the replicas as corrupt!
> From the implementation, it looks like the NN incorrectly queues the block
> object received over RPC to the neededReplications queue instead of using the
> internal block object.
> Will include an actual example in the next comment.

--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
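The pattern at issue can be illustrated with a minimal, self-contained Java sketch. This is not the actual Hadoop code: the class names (ReportBadBlockSketch, StoredBlock) and the list-backed queues are hypothetical stand-ins. The point it shows is resolving the block object deserialized from the RPC call to the NameNode's internal stored object before queueing it for replication, and diverting blocks that belong to no file into an invalidation queue, as the description of reportBadBlock1.patch suggests.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical, simplified sketch of the fix pattern HADOOP-5605 describes.
public class ReportBadBlockSketch {
    // Minimal stand-in for a block, identified by its id.
    static class Block {
        final long blockId;
        Block(long id) { this.blockId = id; }
        @Override public boolean equals(Object o) {
            return o instanceof Block && ((Block) o).blockId == blockId;
        }
        @Override public int hashCode() { return Long.hashCode(blockId); }
    }

    // Internal block object carrying NameNode-side bookkeeping that the
    // transient RPC copy does not have.
    static class StoredBlock extends Block {
        int liveReplicas = 3;  // hypothetical NameNode-only state
        StoredBlock(long id) { super(id); }
    }

    final Map<Block, StoredBlock> blocksMap = new HashMap<>();
    final List<Block> neededReplications = new ArrayList<>();
    final List<Block> invalidateQueue = new ArrayList<>();

    void addStoredBlock(StoredBlock b) { blocksMap.put(b, b); }

    // Handle a bad-block report from a DataNode.
    void reportBadBlock(Block rpcBlock) {
        // Resolve the RPC object to the internal stored block first.
        StoredBlock stored = blocksMap.get(rpcBlock);
        if (stored == null) {
            // Block belongs to no file: log and schedule it for invalidation
            // (the additional case reportBadBlock1.patch handles).
            invalidateQueue.add(rpcBlock);
            return;
        }
        // Queue the internal object, not the RPC copy, so that replication
        // bookkeeping operates on the NameNode's own state.
        neededReplications.add(stored);
    }
}
```

The original bug corresponds to adding `rpcBlock` to `neededReplications` directly: both objects compare equal, but the RPC copy lacks the internal state, so subsequent replication work acts on a stale object.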