Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r32484601
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -537,10 +537,19 @@ private[spark] class ExecutorAllocationManager(
       }
     }

+    override def onTaskResubmit(taskResubmit: SparkListenerTaskResubmit): Unit = {
+      val stageId = taskResubmit.stageId
+      allocationManager.synchronized {
+        val num = stageIdToNumTasks.getOrElse(stageId, 0)
+        stageIdToNumTasks.update(stageId, num + 1)
+      }
+    }
+
--- End diff --
@squito, I think when an executor goes down, the stages won't be resubmitted. What I mean is that when a task fails, it is retried, so a new task attempt is appended to the stage. In order to let the **ExecutorAllocationManager** know that new tasks have been submitted, I added the **SparkListenerTaskResubmit** event.