[
https://issues.apache.org/jira/browse/HDFS-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12867385#action_12867385
]
Todd Lipcon commented on HDFS-1103:
-----------------------------------
Ah, good point Hairong. I think, though, we should still consider taking
MAX(validated lengths of RWR replicas) to avoid the problem demonstrated by the
attached test case. What do you think? Is the test case too unrealistic?
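
To make the proposal concrete, here's a quick sketch of the rule I have in mind (the class and method names below are made up for illustration, not the real recovery code): when all the replicas in the recovery set are RWR, recover to the maximum of their validated lengths rather than letting the shortest, truncated one win.

{code:java}
// Minimal sketch only -- hypothetical types/names, not the actual FSDataset
// or recovery API. Idea: when every replica participating in recovery is RWR,
// take the longest *validated* length, so a replica whose corrupt last chunk
// was truncated by validateIntegrity() cannot pull the recovered length below
// data that was already sync()ed elsewhere.
import java.util.Arrays;
import java.util.List;

class RwrReplicaSketch {
  final String dn;
  final long validatedLength;  // on-disk length after any corrupt last chunk was dropped
  RwrReplicaSketch(String dn, long validatedLength) {
    this.dn = dn;
    this.validatedLength = validatedLength;
  }
}

class RecoveryLengthSketch {
  /** All replicas are RWR: recover to the maximum validated length. */
  static long chooseRecoveryLength(List<RwrReplicaSketch> rwrReplicas) {
    long best = 0;
    for (RwrReplicaSketch r : rwrReplicas) {
      best = Math.max(best, r.validatedLength);
    }
    return best;
  }

  public static void main(String[] args) {
    // dn1 dropped its corrupt 512-byte last chunk; dn2 and dn3 are intact.
    List<RwrReplicaSketch> replicas = Arrays.asList(
        new RwrReplicaSketch("dn1", 1536),
        new RwrReplicaSketch("dn2", 2048),
        new RwrReplicaSketch("dn3", 2048));
    System.out.println(chooseRecoveryLength(replicas));  // 2048 -- keeps the sync()ed data
  }
}
{code}

Here dn1 stands for the node that truncated its corrupt last chunk; letting dn1's shorter length win would silently drop the last 512 bytes that were already sync()ed to dn2 and dn3.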
> Replica recovery doesn't distinguish between flushed-but-corrupted last chunk
> and unflushed last chunk
> ------------------------------------------------------------------------------------------------------
>
> Key: HDFS-1103
> URL: https://issues.apache.org/jira/browse/HDFS-1103
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Affects Versions: 0.21.0, 0.22.0
> Reporter: Todd Lipcon
> Priority: Blocker
> Attachments: hdfs-1103-test.txt
>
>
> When the DN creates a replica under recovery, it calls validateIntegrity,
> which truncates the last checksum chunk off the replica if that chunk is found
> to be invalid. Then, when block recovery runs, this shortened replica wins
> over a longer replica from another node that had no corruption. Thus, if just
> one of the DNs has an invalid last checksum chunk, data that has been
> sync()ed to the other datanodes can be lost.
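
To illustrate the truncation step the description refers to, here is a toy version (a simplified, hypothetical helper, not the datanode's actual validateIntegrity; it assumes the default 512-byte CRC32 checksum chunks):

{code:java}
// Toy illustration only -- hypothetical helper, not the datanode's real
// validateIntegrity(). Assumes 512-byte checksum chunks and CRC32.
import java.util.zip.CRC32;

class LastChunkValidationSketch {
  static final int BYTES_PER_CHECKSUM = 512;

  /**
   * Returns the length to keep: if the last (possibly partial) chunk fails
   * its stored CRC, the replica is cut back to the previous chunk boundary.
   */
  static long validatedLength(byte[] data, long storedLastChunkCrc) {
    if (data.length == 0) {
      return 0;
    }
    int lastChunkStart = ((data.length - 1) / BYTES_PER_CHECKSUM) * BYTES_PER_CHECKSUM;
    int lastChunkLen = data.length - lastChunkStart;
    CRC32 crc = new CRC32();
    crc.update(data, lastChunkStart, lastChunkLen);
    return crc.getValue() == storedLastChunkCrc
        ? data.length        // last chunk verifies: keep everything
        : lastChunkStart;    // corrupt last chunk: truncate it off
  }
}
{code}

Truncating a genuinely corrupt chunk on that one DN is reasonable in itself; the problem is only in letting the truncated length win during block recovery.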