[ https://issues.apache.org/jira/browse/SPARK-20624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468430#comment-17468430 ]

Apache Spark commented on SPARK-20624:
--------------------------------------

User 'sungpeo' has created a pull request for this issue:
https://github.com/apache/spark/pull/35094

> SPIP: Add better handling for node shutdown
> -------------------------------------------
>
>                 Key: SPARK-20624
>                 URL: https://issues.apache.org/jira/browse/SPARK-20624
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Holden Karau
>            Priority: Major
>
> While we've done some good work on better handling when Spark itself chooses 
> to decommission nodes (SPARK-7955), it might make sense in environments where 
> nodes are preempted outside of Spark's control (e.g. YARN over-commit, EC2 spot 
> instances, GCE preemptible instances, etc.) to do something to preserve the data 
> on the node (or at least stop scheduling new tasks on it).
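For reference, the decommissioning work tracked by this SPIP was eventually exposed behind configuration flags in later Spark releases. A minimal sketch of turning it on, assuming Spark 3.1+ and that the flag names below match the release in use (check the docs for the exact set supported by your version):

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession

    // Sketch only: flag names are assumptions to be verified against the
    // documentation of the target Spark release.
    val conf = new SparkConf()
      // React to node shutdown/decommission signals (e.g. spot or preemptible
      // instance termination notices) instead of failing tasks abruptly.
      .set("spark.decommission.enabled", "true")
      // Try to migrate block data off the decommissioning executor.
      .set("spark.storage.decommission.enabled", "true")
      .set("spark.storage.decommission.rddBlocks.enabled", "true")
      .set("spark.storage.decommission.shuffleBlocks.enabled", "true")

    val spark = SparkSession.builder()
      .config(conf)
      .appName("decommission-example")
      .getOrCreate()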



