holdenk commented on a change in pull request #27864: [SPARK-20732][CORE]
Decommission cache blocks to other executors when an executor is decommissioned
URL: https://github.com/apache/spark/pull/27864#discussion_r398944448
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -412,6 +412,21 @@ package object config {
.intConf
.createWithDefault(1)
+ private[spark] val STORAGE_DECOMMISSION_ENABLED =
+ ConfigBuilder("spark.storage.decommission.enabled")
+ .doc("Whether to decommission the block manager when decommissioning
executor")
+ .version("3.1.0")
+ .booleanConf
+ .createWithDefault(false)
+
+ private[spark] val STORAGE_DECOMMISSION_MAX_REPLICATION_FAILURE_PER_BLOCK =
+ ConfigBuilder("spark.storage.decommission.maxReplicationFailuresPerBlock")
+ .doc("Maximum number of failures to tolerate for offloading " +
+ "one block in single decommission cache blocks iteration")
Review comment:
I think we could work on the wording here, but it's not critical.
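For readers following along, here is a minimal sketch of how an application could opt in to these settings once the patch lands. The config keys come from the diff above; the failure-count value of 3 is purely illustrative, and the second config's default is not visible in this hunk.

    // Hypothetical usage example, not part of the PR under review.
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    val conf = new SparkConf()
      .setAppName("decommission-demo")
      // Offload cached blocks to other executors when this executor is decommissioned.
      .set("spark.storage.decommission.enabled", "true")
      // Give up on a block after three failed offload attempts in one
      // decommission iteration (illustrative value, not the default).
      .set("spark.storage.decommission.maxReplicationFailuresPerBlock", "3")

    val spark = SparkSession.builder().config(conf).getOrCreate()

The same settings could equally be passed on the command line, e.g. spark-submit --conf spark.storage.decommission.enabled=true.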