cloud-fan commented on a change in pull request #28619:
URL: https://github.com/apache/spark/pull/28619#discussion_r441376782
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########

@@ -1842,6 +1842,17 @@ package object config {
       .timeConf(TimeUnit.MILLISECONDS)
       .createOptional

+  private[spark] val EXECUTOR_DECOMMISSION_KILL_INTERVAL =
+    ConfigBuilder("spark.executor.decommission.killInterval")
+      .doc("Duration after which a decommissioned executor will be killed forcefully. " +
+        "This config is useful for cloud environments where we know in advance when " +
+        "an executor is going to go down after the decommissioning signal, e.g. around " +
+        "2 mins on AWS spot nodes, 1/2 hrs on spot block nodes, etc. This config is currently " +

Review comment:
   So is the timeout decided by the cloud vendor? What exactly does this config specify?
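   For context, a minimal usage sketch of how the proposed config might be set from user code, assuming the PR lands under the key shown in the hunk. The `120s` value (mirroring the ~2-minute AWS spot notice from the doc string), the object name, and the session setup are illustrative only, not part of the PR:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example, not from the PR: sets the proposed config so that
// executors receiving a decommission signal would be force-killed after
// the cloud's known preemption window.
object KillIntervalSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("decommission-kill-interval-sketch")
      // The config is declared via timeConf(TimeUnit.MILLISECONDS), so
      // time-suffixed strings such as "120s" or "30min" are accepted.
      .config("spark.executor.decommission.killInterval", "120s")
      .getOrCreate()

    // ... run the job; under this proposed setting, a decommissioned
    // executor would be killed forcefully 120 seconds after the signal.
    spark.stop()
  }
}
```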