holdenk commented on a change in pull request #28370:
URL: https://github.com/apache/spark/pull/28370#discussion_r425944314



##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -413,6 +413,34 @@ package object config {
       .intConf
       .createWithDefault(1)
 
+  private[spark] val STORAGE_DECOMMISSION_ENABLED =
+    ConfigBuilder("spark.storage.decommission.enabled")
+      .doc("Whether to decommission the block manager when decommissioning 
executor")
+      .version("3.1.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  private[spark] val STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK =
+    ConfigBuilder("spark.storage.decommission.maxReplicationFailuresPerBlock")

Review comment:
       So I'm not sure that's a great idea. Looking at `maxReplicationFailures`, the default is set to one, which certainly makes sense when we don't expect the host to be exiting. But this situation is different: we know the current block is going to disappear soon, so it makes sense to try more aggressively to copy the block.
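
To make the contrast concrete, here is a minimal sketch (not the actual change in this PR) of a decommission-specific limit that defaults higher than `maxReplicationFailures`; the default of 3 is purely an illustrative assumption:

```scala
// Sketch only: this would live in the org.apache.spark.internal.config package
// object, where ConfigBuilder is in scope. The default of 3 is an assumed value
// for illustration, not what the PR settles on.
private[spark] val STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK =
  ConfigBuilder("spark.storage.decommission.maxReplicationFailuresPerBlock")
    .doc("Maximum number of failures tolerated when replicating a single block " +
      "off a decommissioning executor. A higher default than maxReplicationFailures " +
      "reflects that the source copy is about to disappear, so extra retries are " +
      "cheap relative to losing the block.")
    .version("3.1.0")
    .intConf
    .createWithDefault(3)
```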



