GitHub user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-122434463
@XuTingjun I dug into the scheduler code a little. When a task is
resubmitted, it gets a new task ID but keeps its original task index. To
calculate the number of pending tasks, we use the [task
index](https://github.com/apache/spark/blob/b2aa490bb60176631c94ecadf87c14564960f12c/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala#L566),
not the task ID, so resubmission should already be handled correctly.
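
For reference, here is a minimal sketch of that index-based bookkeeping, assuming illustrative names rather than the actual `ExecutorAllocationManager` internals:

```scala
import scala.collection.mutable

// Simplified sketch of index-based pending-task accounting.
// Names and structure are illustrative, not the real Spark internals.
object PendingTaskSketch {
  // Task *indices* (not IDs) that have started, per stage.
  private val stageIdToTaskIndices = mutable.HashMap.empty[Int, mutable.HashSet[Int]]
  private val stageIdToNumTasks = mutable.HashMap.empty[Int, Int]

  def onStageSubmitted(stageId: Int, numTasks: Int): Unit =
    stageIdToNumTasks(stageId) = numTasks

  def onTaskStart(stageId: Int, taskIndex: Int): Unit =
    // A resubmitted task carries a new task ID but the same index,
    // so adding the index to the set a second time is a no-op.
    stageIdToTaskIndices.getOrElseUpdate(stageId, mutable.HashSet.empty[Int]) += taskIndex

  // Pending tasks = tasks whose index has never started.
  def totalPendingTasks(): Int =
    stageIdToNumTasks.map { case (stageId, numTasks) =>
      numTasks - stageIdToTaskIndices.get(stageId).map(_.size).getOrElse(0)
    }.sum
}
```

Because the set is keyed by index, a resubmitted task neither inflates nor deflates the pending count.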
Could you clarify what the resulting behavior of this bug is? It would be
useful to describe the symptoms without referring to the low-level
implementation. I just want to know what consequences the issue has for a
Spark user who knows nothing about `ExecutorAllocationManager`.