[
https://issues.apache.org/jira/browse/HADOOP-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
dhruba borthakur resolved HADOOP-1557.
--------------------------------------
Resolution: Won't Fix
Periodic block verification and handling of corrupt replicas are now part of the
Hadoop code base. No additional work is necessary for this one.
> Deletion of excess replicas should prefer to delete corrupted replicas before
> deleting valid replicas
> -----------------------------------------------------------------------------------------------------
>
> Key: HADOOP-1557
> URL: https://issues.apache.org/jira/browse/HADOOP-1557
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Reporter: dhruba borthakur
>
> Suppose a block has three replicas and two of them are corrupted. If the
> replication factor of the file is then reduced to 2, the filesystem should
> preferably delete the two corrupted replicas; otherwise the file could be
> left corrupted.
> One option would be to make the datanode periodically validate all blocks
> against their corresponding CRCs. The other option would be to make the
> setReplication call validate existing replicas before deleting excess
> replicas.
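
The sketch below illustrates the preference described in the issue: replicas are
checked against a recorded CRC, and corrupted ones are chosen first when trimming
down to the new replication factor. The Replica type, its fields, and the chooser
class here are simplified placeholders for illustration, not the actual HDFS data
structures or namenode logic.

  import java.util.ArrayList;
  import java.util.Comparator;
  import java.util.List;
  import java.util.zip.CRC32;

  // Placeholder replica: real HDFS tracks blocks per datanode, not raw bytes.
  class Replica {
      final String datanode;
      final byte[] data;
      final long expectedCrc;   // checksum recorded when the block was written

      Replica(String datanode, byte[] data, long expectedCrc) {
          this.datanode = datanode;
          this.data = data;
          this.expectedCrc = expectedCrc;
      }

      // Validate the replica's bytes against its recorded CRC.
      boolean isCorrupt() {
          CRC32 crc = new CRC32();
          crc.update(data);
          return crc.getValue() != expectedCrc;
      }
  }

  class ExcessReplicaChooser {
      // Choose which replicas to drop when shrinking to targetReplication,
      // taking corrupted replicas first so valid copies survive.
      static List<Replica> chooseExcess(List<Replica> replicas,
                                        int targetReplication) {
          List<Replica> ordered = new ArrayList<>(replicas);
          // Corrupt replicas (key false) sort before valid ones (key true).
          ordered.sort(Comparator.comparing((Replica r) -> !r.isCorrupt()));
          int excess = Math.max(0, ordered.size() - targetReplication);
          return ordered.subList(0, excess);
      }
  }

With this ordering, reducing a three-replica block with two corrupted copies to a
replication factor of 2 would delete one corrupted replica rather than the valid
one, matching the behavior the issue asks for.
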
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.