[
https://issues.apache.org/jira/browse/SPARK-20217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956093#comment-15956093
]
Apache Spark commented on SPARK-20217:
--------------------------------------
User 'ericl' has created a pull request for this issue:
https://github.com/apache/spark/pull/17531
> Executor should not fail stage if killed task throws non-interrupted exception
> ------------------------------------------------------------------------------
>
> Key: SPARK-20217
> URL: https://issues.apache.org/jira/browse/SPARK-20217
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.2.0
> Reporter: Eric Liang
>
> This is reproducible as follows. Run the job below, then use
> SparkContext.killTaskAttempt to kill one of its tasks. The entire stage will
> fail, because the task throws a RuntimeException instead of an
> InterruptedException.
> We should probably report TaskKilled instead of TaskFailed whenever the task
> was killed by the driver, regardless of the actual exception thrown.
> {code}
> spark.range(100).repartition(100).foreach { i =>
>   try {
>     Thread.sleep(10000000)
>   } catch {
>     case t: InterruptedException =>
>       throw new RuntimeException(t)
>   }
> }
> {code}
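> A minimal way to drive the kill from the driver, assuming the task attempt ID
> is read off the Spark UI or a SparkListener (the ID below is a placeholder):
> {code}
> // With the job above running in another thread on the driver:
> val taskId = 42L  // placeholder: attempt ID of one of the sleeping tasks
> sc.killTaskAttempt(taskId, interruptThread = true, reason = "testing SPARK-20217")
> // Expected: the attempt is reported as TaskKilled and the stage survives.
> // Actual (this bug): the wrapped RuntimeException counts as an ordinary task
> // failure, so the stage fails.
> {code}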
> Based on the code in TaskSetManager, I think this also affects kills of
> speculative tasks. However, since only a few tasks are ever speculated, and a
> task usually has to fail several times before the stage is cancelled, this has
> probably gone unnoticed in production.
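> The proposed change amounts to classifying the task end by whether the driver
> requested a kill rather than by the exception type. A hypothetical,
> self-contained sketch of that decision (not Spark's actual executor code):
> {code}
> // Hypothetical sketch only; names mirror but do not reuse Spark's classes.
> sealed trait TaskEndReason
> case class TaskKilled(reason: String) extends TaskEndReason
> case class ExceptionFailure(error: Throwable) extends TaskEndReason
>
> def classifyTaskEnd(killRequested: Boolean, killReason: String, error: Throwable): TaskEndReason =
>   if (killRequested) TaskKilled(killReason)  // a kill never counts toward stage failure
>   else ExceptionFailure(error)               // genuine errors still fail the task as before
> {code}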