haiyang1987 commented on code in PR #6559:
URL: https://github.com/apache/hadoop/pull/6559#discussion_r1505502474


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##########
@@ -3982,6 +3982,17 @@
   </description>
 </property>
 
+<property>
+  <name>dfs.datanode.delete.corrupt.replica.from.disk.enable</name>
+  <value>true</value>

Review Comment:
   Thanks @zhangshuyan0 for your comment.
   
   From the DataNode's point of view, once it has confirmed that the meta file or the data file is lost, the replica should be deleted directly from both memory and disk; this is the expected behavior.
   
   As for the case mentioned in HDFS-16985, if the cluster deployment adopts the AWS EC2 + EBS solution, `dfs.datanode.delete.corrupt.replica.from.disk.enable` can be set to false as needed to avoid deleting the data on disk.
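   For illustration only (not part of this PR's diff), a minimal sketch of how such a deployment could override the default in its `hdfs-site.xml`; the property name comes from this PR, the rest is a hypothetical operator-side setting:
   
   ```xml
   <!-- hdfs-site.xml: hypothetical override for EBS-backed DataNodes, keeping
        corrupt replica files on disk so they can be inspected or recovered -->
   <property>
     <name>dfs.datanode.delete.corrupt.replica.from.disk.enable</name>
     <value>false</value>
   </property>
   ```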
   
   So from the DataNode perspective, I think it might be better to set `dfs.datanode.delete.corrupt.replica.from.disk.enable` to true by default.
   
   Looking forward to your suggestions again.
   


