prakharjain09 commented on a change in pull request #28619:
URL: https://github.com/apache/spark/pull/28619#discussion_r441973394
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -1842,6 +1842,17 @@ package object config {
      .timeConf(TimeUnit.MILLISECONDS)
      .createOptional
+  private[spark] val EXECUTOR_DECOMMISSION_KILL_INTERVAL =
+    ConfigBuilder("spark.executor.decommission.killInterval")
+      .doc("Duration after which a decommissioned executor will be killed forcefully. " +
+        "This config is useful for cloud environments where we know in advance when " +
+        "an executor is going to go down after decommissioning signal i.e. around 2 mins " +
+        "in aws spot nodes, 1/2 hrs in spot block nodes etc. This config is currently " +
Review comment:
   @cloud-fan This config can be set by users based on their setup. If
   they are using AWS spot nodes, the timeout can be set to around 120
   seconds; if they are using fixed-duration 6-hour spot blocks (say the
   executors are decommissioned at the 5:45 mark), the timeout can be set to
   15 minutes, and so on.
   If the user doesn't set this timeout, things remain as they were: tasks
   running on decommissioned executors get no special treatment with respect
   to speculation.
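
   To make the first scenario concrete, here is a minimal sketch of how a
   user might set this timeout for the AWS spot-node case. The config key
   comes from the diff above; the `SparkConf`-based setup, the app name, and
   the `120s` value are illustrative assumptions, not part of this PR:

   ```scala
   import org.apache.spark.SparkConf

   // Hypothetical setup: AWS spot instances typically give ~2 minutes of
   // notice before reclamation, so kill decommissioned executors after 120s.
   // The value "120s" is an illustrative choice, not a recommendation.
   val conf = new SparkConf()
     .setAppName("decommission-kill-interval-example") // hypothetical app name
     .set("spark.executor.decommission.killInterval", "120s")
   ```

   The same setting could presumably also be passed at submit time, e.g.
   `--conf spark.executor.decommission.killInterval=120s`.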