[
https://issues.apache.org/jira/browse/HDFS-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13022484#comment-13022484
]
Hudson commented on HDFS-900:
-----------------------------
Integrated in Hadoop-Hdfs-trunk #643 (See
[https://builds.apache.org/hudson/job/Hadoop-Hdfs-trunk/643/])
> Corrupt replicas are not tracked correctly through block report from DN
> -----------------------------------------------------------------------
>
> Key: HDFS-900
> URL: https://issues.apache.org/jira/browse/HDFS-900
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.22.0
> Reporter: Todd Lipcon
> Assignee: Konstantin Shvachko
> Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: log-commented, reportCorruptBlock.patch,
> to-reproduce.patch
>
>
> This one is tough to describe, but essentially the following order of events
> is seen to occur:
> # A client marks one replica of a block to be corrupt by telling the NN about
> it
> # Replication is then scheduled to make a new replica of this block
> # The replication completes, such that there are now 3 good replicas and 1
> corrupt replica
> # The DN holding the corrupt replica sends a block report. Rather than
> telling this DN to delete the replica, the NN instead marks it as a new *good*
> replica of the block and schedules deletion of one of the good replicas.
> I don't know if this is a data-loss bug in the case of 1 corrupt replica with
> dfs.replication=2, but it seems feasible. I will attach a debug log with some
> commentary marked by '============>', plus a unit test patch which reliably
> reproduces this behavior. (It's not a proper unit test, just some edits to an
> existing one to demonstrate the problem.)
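The four steps above can be sketched as a toy state model. This is a minimal illustration of the flawed bookkeeping, not actual HDFS code; the class and method names (markCorrupt, processBlockReportBuggy, etc.) are hypothetical simplifications of the NameNode's replica tracking:

```java
import java.util.*;

// Toy model of NameNode replica tracking to illustrate the reported bug.
// All names here are hypothetical; this is not real HDFS code.
public class CorruptReplicaDemo {
    static final int REPLICATION = 3;

    Set<String> goodReplicas = new HashSet<>();    // DNs believed to hold good copies
    Set<String> corruptReplicas = new HashSet<>(); // DNs known to hold corrupt copies
    List<String> scheduledDeletions = new ArrayList<>();

    // Step 1: a client reports a corrupt replica on a DN.
    void markCorrupt(String dn) {
        goodReplicas.remove(dn);
        corruptReplicas.add(dn);
    }

    // Steps 2-3: re-replication completes on a new DN.
    void replicationCompleted(String dn) {
        goodReplicas.add(dn);
    }

    // Step 4 (the bug): a full block report arrives. The flawed logic
    // re-adds the reporting DN as a good replica without consulting the
    // corrupt set, then deletes an "excess" replica from the good ones.
    void processBlockReportBuggy(String dn) {
        goodReplicas.add(dn);               // bug: corruptReplicas never checked
        corruptReplicas.remove(dn);
        if (goodReplicas.size() > REPLICATION) {
            String victim = goodReplicas.iterator().next();
            goodReplicas.remove(victim);
            scheduledDeletions.add(victim); // may delete a genuinely good copy
        }
    }

    public static void main(String[] args) {
        CorruptReplicaDemo nn = new CorruptReplicaDemo();
        nn.goodReplicas.addAll(Arrays.asList("dn1", "dn2", "dn3"));
        nn.markCorrupt("dn1");              // 2 good, 1 corrupt
        nn.replicationCompleted("dn4");     // 3 good, 1 corrupt
        nn.processBlockReportBuggy("dn1");  // dn1 wrongly rejoins the good set
        System.out.println("good=" + nn.goodReplicas
            + " deleted=" + nn.scheduledDeletions);
    }
}
```

After the buggy block report, the corrupt replica on dn1 is counted as good and one of the four "good" replicas is scheduled for deletion, matching the sequence described above.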
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira