[ https://issues.apache.org/jira/browse/SPARK-32198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17152312#comment-17152312 ]
Apache Spark commented on SPARK-32198:
--------------------------------------
User 'agrawaldevesh' has created a pull request for this issue:
https://github.com/apache/spark/pull/29014
> Don't fail running jobs when decommissioned executors finally go away
> ---------------------------------------------------------------------
>
> Key: SPARK-32198
> URL: https://issues.apache.org/jira/browse/SPARK-32198
> Project: Spark
> Issue Type: Sub-task
> Components: Spark Core
> Affects Versions: 3.1.0
> Reporter: Devesh Agrawal
> Priority: Major
>
> When a decommissioned executor is finally lost, its death shouldn't fail
> running jobs.
> A decommissioned executor will eventually die, and in response to its
> heartbeat failure we generate a `SlaveLost` message. For a decommissioned
> executor, this `SlaveLost` message should be treated specially: the loss
> should not be attributed to the running application. Decommissioning is an
> exogenous event, and the running application shouldn't be penalized for it.
>
>
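For illustration only, here is a minimal Scala sketch of how a scheduler might remember which executors were decommissioned, so that their eventual heartbeat-driven loss is not charged to the application. The names below (DecomTracker, ExecutorLost, causedByApp, onHeartbeatFailure) are hypothetical and not Spark's actual internals; the real change is in the pull request linked above.

    import scala.collection.mutable

    // Hypothetical loss event: causedByApp = false means the loss should not
    // count toward the application's job/stage failure accounting.
    final case class ExecutorLost(executorId: String, causedByApp: Boolean)

    class DecomTracker {
      // Executors the cluster manager has asked us to decommission.
      private val decommissioned = mutable.Set.empty[String]

      def onDecommission(executorId: String): Unit = synchronized {
        decommissioned += executorId
      }

      // Called when an executor's heartbeat finally times out and it is removed.
      def onHeartbeatFailure(executorId: String): ExecutorLost = synchronized {
        val wasDecommissioned = decommissioned.remove(executorId)
        // A decommissioned executor's death is exogenous: don't blame the app.
        ExecutorLost(executorId, causedByApp = !wasDecommissioned)
      }
    }

The key design point is simply that the decommission notice arrives before the executor dies, so the tracker can flip the blame flag when the later `SlaveLost`-style event is generated.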