Github user squito commented on the issue:
https://github.com/apache/spark/pull/13685
Hi @lw-lin. Your description makes sense, but I'm having trouble seeing
how you'd end up with a `TaskKilledException` without the task having been
marked as killed. E.g., `TaskSchedulerImpl.cancelTasks` eventually sends a
`KillTask` msg to the Executor, which then calls `Task.kill`; since that sets
`_killed = true`, it seems like that path shouldn't cause any problems.
Is the problem that there is a race in `Executor.kill` -- that it sets
`killed = true` before calling `task.kill`? If so, that seems like a reasonable
explanation, but it's also worth a code comment explaining it. And though the
change is straightforward, a unit test would be nice -- sorry that there aren't
currently any tests for `TaskRunner`, but it would be great if you could add
something. Even if the change is clear, it will help with future maintenance.
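Just to make sure I'm picturing the right window, here's a stripped-down
sketch of the race I have in mind -- not the real Executor code, the class
names below are just stand-ins -- showing how a handler that only consults the
task-level flag could misclassify a kill that lands before `task.kill` has run:

```scala
// Toy model of the race -- stand-ins, not Spark's real classes.
class TaskKilledException extends RuntimeException // stand-in for org.apache.spark.TaskKilledException

class FakeTask {
  @volatile private var _killed = false
  def kill(): Unit = { _killed = true }
  def killed: Boolean = _killed
}

class FakeTaskRunner(task: FakeTask) {
  @volatile var killed = false            // runner-level flag

  def kill(): Unit = {
    killed = true                         // (1) runner flag flips first...
    // <-- window: the task may observe the kill (e.g. via an interrupt) and
    //     throw TaskKilledException before (2) below has happened
    task.kill()                           // (2) ...task flag flips second
  }

  // Mimics exception handling that checks only the task-level flag.
  def classify(e: Throwable): String = e match {
    case _: TaskKilledException if task.killed => "KILLED"
    // also checking the runner's own flag would close the window:
    //   case _: TaskKilledException if task.killed || killed => "KILLED"
    case _ => "FAILED"
  }
}

object RaceSketch extends App {
  val task = new FakeTask
  val runner = new FakeTaskRunner(task)

  runner.killed = true  // simulate being inside the window: only (1) has happened
  // Misclassified as a failure even though the task was deliberately killed:
  println(runner.classify(new TaskKilledException))  // prints FAILED
}
```

Something along those lines, against the real `TaskRunner`, might also be a
useful starting point for the unit test.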
Sorry to ask for a few more details, but thanks for reporting this -- it
looks like a good fix!