prakharjain09 commented on a change in pull request #28619:
URL: https://github.com/apache/spark/pull/28619#discussion_r443322288
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -1842,6 +1842,17 @@ package object config {
.timeConf(TimeUnit.MILLISECONDS)
.createOptional
+  private[spark] val EXECUTOR_DECOMMISSION_KILL_INTERVAL =
+    ConfigBuilder("spark.executor.decommission.killInterval")
+      .doc("Duration after which a decommissioned executor will be killed forcefully. " +
+        "This config is useful for cloud environments where we know in advance when " +
+        "an executor is going to go down after decommissioning signal i.e. around 2 mins " +
+        "in aws spot nodes, 1/2 hrs in spot block nodes etc. This config is currently " +
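
For illustration, a minimal sketch of how a consumer of this optional time config could arm the forced kill. The config above is declared with .timeConf(TimeUnit.MILLISECONDS).createOptional, so a read yields an Option[Long] of milliseconds; killIntervalMs, onDecommission, and forceKill below are hypothetical stand-ins, not the PR's actual code:

import java.util.concurrent.{Executors, TimeUnit}

object KillIntervalSketch {
  // Stand-in for conf.get(EXECUTOR_DECOMMISSION_KILL_INTERVAL): None when
  // the user has not set the config, Some(milliseconds) otherwise.
  val killIntervalMs: Option[Long] = Some(TimeUnit.MINUTES.toMillis(2))

  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // On receiving the decommission signal, schedule a forced kill after the
  // configured interval; when the config is unset, never force-kill.
  def onDecommission(forceKill: () => Unit): Unit =
    killIntervalMs.foreach { ms =>
      scheduler.schedule(new Runnable {
        def run(): Unit = forceKill()
      }, ms, TimeUnit.MILLISECONDS)
    }

  def main(args: Array[String]): Unit =
    onDecommission { () =>
      println("kill interval elapsed: killing executor forcefully")
      scheduler.shutdown()
    }
}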
Review comment:
@cloud-fan As per my understanding, Worker Decommissioning is currently
triggered by the SIGPWR signal (not by a message from the YARN/Kubernetes
cluster manager), so obtaining this timeout from the cluster manager might
not be possible. We might be able to do this once Spark's Worker
Decommissioning logic can be triggered via communication from YARN etc. in
the future. cc @holdenk
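
To make the SIGPWR point concrete, here is a standalone sketch of signal-triggered decommissioning. Spark's worker registers its handler through an internal signal utility; this sketch calls sun.misc.Signal directly, and the handler body is a placeholder rather than the actual worker logic. Note that SIGPWR is Linux-only, so new Signal("PWR") throws on platforms that do not define it:

import sun.misc.{Signal, SignalHandler}

object SigPwrSketch {
  def main(args: Array[String]): Unit = {
    // Register a handler for SIGPWR, mirroring how the worker learns it is
    // being decommissioned without any message from the cluster manager.
    Signal.handle(new Signal("PWR"), new SignalHandler {
      def handle(sig: Signal): Unit = {
        println(s"Received SIG${sig.getName}: starting decommission")
        // Placeholder: the real worker would stop launching executors and
        // begin winding down work here.
      }
    })
    println("Waiting for SIGPWR (try: kill -PWR <pid>)")
    Thread.sleep(Long.MaxValue) // keep the JVM alive to receive the signal
  }
}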