[ https://issues.apache.org/jira/browse/HDFS-16968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17772512#comment-17772512 ]

ASF GitHub Bot commented on HDFS-16968:
---------------------------------------

zhangshuyan0 commented on PR #6125:
URL: https://github.com/apache/hadoop/pull/6125#issuecomment-1750361000

   If you want to improve the reliability of 2-replica writes, it is 
recommended that you configure 
`dfs.client.block.write.replace-datanode-on-failure.policy` to `ALWAYS` 
directly. The current changes conflict with the design intent of the 
`DEFAULT` policy. See:
   
https://github.com/apache/hadoop/blob/daa78adc888704e5688b84b404573ed1e28012db/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml#L765-L783
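   For reference, a minimal `hdfs-site.xml` sketch of that client-side 
setting (property names as in `hdfs-default.xml`; the values shown are one 
reasonable choice, not the only one):

   ```xml
   <!-- Enable datanode replacement on write-pipeline failure (true by default). -->
   <property>
     <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
     <value>true</value>
   </property>
   <!-- ALWAYS replaces a failed datanode regardless of replication factor
        or pipeline progress; DEFAULT only does so under certain conditions. -->
   <property>
     <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
     <value>ALWAYS</value>
   </property>
   ```

   Note that with `ALWAYS`, a write can fail outright if no replacement 
datanode is available; `dfs.client.block.write.replace-datanode-on-failure.best-effort` 
can be set to `true` to continue with the remaining datanodes instead.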




> Corrupted blocks appear in files written with two replicas
> ----------------------------------------------------------
>
>                 Key: HDFS-16968
>                 URL: https://issues.apache.org/jira/browse/HDFS-16968
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: dfsclient
>            Reporter: leo sun
>            Priority: Major
>              Labels: pull-request-available
>
> If a file written with two replicas fails during the write process, only one 
> replica is recovered. 
> If the recovered replica is corrupted before the block is completed, the 
> file will be lost.



