cloud-fan commented on a change in pull request #28619:
URL: https://github.com/apache/spark/pull/28619#discussion_r442005198
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -1842,6 +1842,17 @@ package object config {
.timeConf(TimeUnit.MILLISECONDS)
.createOptional
+  private[spark] val EXECUTOR_DECOMMISSION_KILL_INTERVAL =
+    ConfigBuilder("spark.executor.decommission.killInterval")
+      .doc("Duration after which a decommissioned executor will be killed forcefully. " +
+        "This config is useful for cloud environments where we know in advance when " +
+        "an executor is going to go down after the decommissioning signal, i.e. around " +
+        "2 mins in AWS spot nodes, 1/2 hrs in spot block nodes, etc. This config is currently " +
Review comment:
Is it possible for Spark to get this timeout value from the cluster
manager, so that users don't need to set it manually? cc @holdenk
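
For reference, until such cluster-manager integration exists, a user would set this interval by hand. A minimal sketch of how that might look, assuming the config lands as written in this PR; the master, app name, and the `110s` value are illustrative assumptions, not from the PR:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical usage: AWS spot instances give roughly 2 minutes' notice
// before termination, so a kill interval slightly under that leaves time
// for cleanup. "110s" and the app name are illustrative values.
val conf = new SparkConf()
  .setMaster("local[2]") // for a runnable demo; decommissioning matters on real clusters
  .setAppName("decommission-demo")
  .set("spark.executor.decommission.killInterval", "110s")

val sc = new SparkContext(conf)
```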