Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r32694699
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -553,12 +562,13 @@ private[spark] class ExecutorAllocationManager(
}
// If this is the last pending task, mark the scheduler queue as empty
- stageIdToTaskIndices.getOrElseUpdate(stageId, new mutable.HashSet[Int]) += taskIndex
+ stageIdToTaskIndices.getOrElseUpdate(stageId, new mutable.HashSet[String]) += (taskIndex + "." + attemptId)
--- End diff ---
Yeah, I understand what you mean. The reason I changed the code is that when a task fails, a new attempt is appended, and the two attempts share the same ```taskIndex```.
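A minimal sketch of the issue (hypothetical values, standalone and outside the actual scheduler code path): keying the set by ```taskIndex``` alone collapses a retried attempt onto the original entry, while a ```taskIndex + "." + attemptId``` string key keeps both attempts visible.

```scala
import scala.collection.mutable

object TaskKeySketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical values: the first attempt of task 3 fails and is retried.
    val taskIndex = 3
    val firstAttempt = 0
    val retryAttempt = 1

    // Old behaviour: a HashSet[Int] keyed by taskIndex alone collapses the
    // retry onto the original attempt, so the pending retry is not tracked.
    val byIndex = new mutable.HashSet[Int]
    byIndex += taskIndex // original attempt
    byIndex += taskIndex // retry after failure: no new entry
    println(byIndex.size) // 1

    // New behaviour: a HashSet[String] keyed by "taskIndex.attemptId"
    // keeps the two attempts distinct.
    val byIndexAndAttempt = new mutable.HashSet[String]
    byIndexAndAttempt += s"$taskIndex.$firstAttempt"
    byIndexAndAttempt += s"$taskIndex.$retryAttempt"
    println(byIndexAndAttempt.size) // 2
  }
}
```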