[ 
https://issues.apache.org/jira/browse/SPARK-34104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apache Spark reassigned SPARK-34104:
------------------------------------

    Assignee: Holden Karau  (was: Apache Spark)

> Allow users to specify a maximum decommissioning time
> -----------------------------------------------------
>
>                 Key: SPARK-34104
>                 URL: https://issues.apache.org/jira/browse/SPARK-34104
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.1.0, 3.2.0, 3.1.1
>            Reporter: Holden Karau
>            Assignee: Holden Karau
>            Priority: Major
>
> We currently let users set the predicted time at which the cluster manager 
> or cloud provider will terminate a decommissioning executor, but for nodes 
> where Spark itself triggers decommissioning we should add the ability for 
> users to specify a maximum time we are willing to allow the executor to 
> spend decommissioning.
>  
> This is especially important if we start triggering decommissioning in more 
> places (for example for excluded executors that are found to be flaky, and 
> may or may not be able to decommission successfully); see the configuration 
> sketch below.
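>  
> A minimal sketch of what this could look like from the user side (Scala, 
> e.g. in spark-shell). The first two config keys are existing 
> decommissioning settings; the name of the proposed maximum-time setting is 
> an assumption for illustration only, not something this ticket has settled:
>  
>     import org.apache.spark.sql.SparkSession
>  
>     val spark = SparkSession.builder()
>       .appName("decommission-timeout-demo")
>       // Existing: enable graceful executor decommissioning.
>       .config("spark.decommission.enabled", "true")
>       // Existing: predicted time until the cluster manager or cloud
>       // provider terminates the decommissioning executor.
>       .config("spark.executor.decommission.killInterval", "60s")
>       // Hypothetical name for the proposed setting: a hard cap on how
>       // long a Spark-triggered decommission may run before the
>       // executor is killed anyway.
>       .config("spark.executor.decommission.forceKillTimeout", "120s")
>       .getOrCreate()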



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
