squito commented on a change in pull request #25759: [SPARK-19147][CORE] Gracefully handle error in task after executor is stopped
URL: https://github.com/apache/spark/pull/25759#discussion_r325346895
##########
File path: core/src/main/scala/org/apache/spark/executor/Executor.scala
##########
@@ -604,6 +604,21 @@ private[spark] class Executor(
val serializedTK = ser.serialize(TaskKilled(killReason, accUpdates, accums, metricPeaks))
execBackend.statusUpdate(taskId, TaskState.KILLED, serializedTK)
+ // When the task is put in the pool, executor.stop may be called before task.run.
+ // An exception will then be thrown from the task because of the unexpected status;
+ // see SPARK-19147. Here we handle the exception raised after executor.stop
+ // as an expected exception.
+ case t: Throwable if !isLocal && env.isStopped =>
Review comment:
you might succeed, as this will race against stopping the executor. But
you're very likely to trigger more exceptions from `execBackend.statusUpdate`,
so it probably doesn't make sense to try, especially if the whole point of this
change is to cut down on scary error msgs during shutdown.
btw I think `env.isStopped` will need to be `volatile` for this to work
reliably.
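To illustrate the pattern under discussion, here is a minimal, standalone sketch (not the actual Spark code; `Env`, `runTask`, and the return strings are hypothetical names) of a `@volatile` stop flag guarding a catch-all that suppresses failures raised after shutdown, mirroring the guard `case t: Throwable if !isLocal && env.isStopped`:

```scala
// Minimal sketch, assuming a simplified Env in place of Spark's SparkEnv.
object VolatileStopFlagSketch {
  class Env {
    // @volatile ensures the task thread reliably sees the write made by
    // the thread that calls stop; without it, visibility is not guaranteed.
    @volatile var isStopped: Boolean = false
  }

  // Runs the task body; failures observed after the env is stopped are
  // treated as expected and suppressed rather than reported as errors.
  def runTask(env: Env)(body: => Unit): String =
    try {
      body
      "ok"
    } catch {
      case _: Throwable if env.isStopped => "suppressed-after-stop"
      // Any throwable seen while the env is still running propagates as usual.
    }

  def main(args: Array[String]): Unit = {
    val env = new Env
    env.isStopped = true
    val result = runTask(env)(throw new IllegalStateException("backend gone"))
    assert(result == "suppressed-after-stop")
    println(result)
  }
}
```

The guard in the `catch` clause is the key point: the same throwable is an error before stop and expected noise after it, so the flag's cross-thread visibility (hence `@volatile`) decides which path is taken.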