[ https://issues.apache.org/jira/browse/HDFS-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12859541#action_12859541 ]

Todd Lipcon commented on HDFS-1103:
-----------------------------------

Haven't thought it through entirely, but I think this could be fixed with 
something like this:

- add a lastChecksumChunkCorrupt field to ReplicaWaitingToBeRecovered, 
ReplicaUnderRecovery, and ReplicaRecoveryInfo
- rather than having validateIntegrity set a shortened block length on the 
ReplicaWaitingToBeRecovered, have it set the lastChecksumChunkCorrupt flag
- in DataNode.recoverBlock, use the following logic (sketched after this list):
-- If all of the recovery candidates have corrupt last chunks, truncate all of 
them like we're doing now
-- If only some of the recovery candidates have corrupt last chunks, don't let 
the corrupt replicas participate in recovery

> Replica recovery doesn't distinguish between flushed-but-corrupted last chunk 
> and unflushed last chunk
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1103
>                 URL: https://issues.apache.org/jira/browse/HDFS-1103
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>         Attachments: hdfs-1103-test.txt
>
>
> When the DN creates a replica under recovery, it calls validateIntegrity, 
> which truncates the last checksum chunk off of a replica if it is found to be 
> invalid. Then when the block recovery process happens, this shortened block 
> wins over a longer replica from another node where there was no corruption. 
> Thus, if just one of the DNs has an invalid last checksum chunk, data that 
> has been sync()ed to other datanodes can be lost.
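For concreteness, a minimal, self-contained illustration of the failure mode 
described above, assuming recovery settles on the shortest candidate length (the 
chunk size and byte counts are made up for the example):

{code}
// Toy illustration only: one truncated replica drags down the recovered length
// when recovery settles on the shortest candidate, as described above. The
// 512-byte chunk size and the min-length rule are assumptions for the example,
// not a reading of the actual recovery code.
public class TruncationLossExample {
  public static void main(String[] args) {
    long chunkSize = 512;              // bytes covered by one checksum chunk
    long syncedLength = 4096;          // bytes the writer has sync()ed
    long[] candidateLengths = {
        syncedLength,                  // clean replica on one DN
        syncedLength,                  // clean replica on another DN
        syncedLength - chunkSize       // replica whose last chunk failed validation
    };
    long recovered = Long.MAX_VALUE;
    for (long len : candidateLengths) {
      recovered = Math.min(recovered, len);
    }
    System.out.println("recovered length = " + recovered
        + ", sync()ed bytes lost = " + (syncedLength - recovered));
  }
}
{code}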
