Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/10951#issuecomment-191602999
Sorry for being insanely slow to look at this. I'm concerned about this
change because of this line:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala#L793
We call taskEnded (which results in the code you modified getting called)
when an executor is lost, for all of the tasks on that executor. As a result,
I think it's possible, in theory (if messages get re-ordered), to get
multiple task-end events for a particular task, so I *think* this could
result in multiple SparkListenerTaskEnd events for the same task. I didn't
look at this super thoroughly, so let me know if you think this is a
non-issue.
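If duplicates can in fact occur, one defensive option on the listener side is to deduplicate by task ID before acting on a task-end event. This is only an illustrative sketch, not actual Spark code: the TaskEndEvent case class and taskId field here are hypothetical stand-ins for the real SparkListenerTaskEnd payload.

```scala
import scala.collection.mutable

// Hypothetical sketch: ignore repeated task-end events for the same task,
// so a re-ordered or duplicated end event (e.g. after an executor is lost)
// is processed at most once. TaskEndEvent and taskId are illustrative
// names, not the real Spark listener API.
case class TaskEndEvent(taskId: Long, successful: Boolean)

class DedupingListener {
  private val seen = mutable.Set.empty[Long]
  var processed: Int = 0

  def onTaskEnd(event: TaskEndEvent): Unit = {
    // Set#add returns true only on first insertion, so only the first
    // end event observed for a given task ID is counted.
    if (seen.add(event.taskId)) {
      processed += 1
    }
  }
}
```

Whether this belongs in the listener or whether the scheduler should instead guarantee exactly one end event per task is a design question the PR discussion would need to settle.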