Ngone51 commented on a change in pull request #34578:
URL: https://github.com/apache/spark/pull/34578#discussion_r749852850
##########
File path: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
##########
@@ -871,19 +871,23 @@ private[spark] class TaskSchedulerImpl(
taskSetManager: TaskSetManager,
tid: Long,
taskResult: DirectTaskResult[_]): Unit = synchronized {
- taskSetManager.handleSuccessfulTask(tid, taskResult)
+ if (!taskSetManager.taskFinished(tid)) {
+ taskSetManager.handleSuccessfulTask(tid, taskResult)
+ }
}
def handleFailedTask(
taskSetManager: TaskSetManager,
tid: Long,
taskState: TaskState,
reason: TaskFailedReason): Unit = synchronized {
- taskSetManager.handleFailedTask(tid, taskState, reason)
- if (!taskSetManager.isZombie && !taskSetManager.someAttemptSucceeded(tid)) {
- // Need to revive offers again now that the task set manager state has been updated to
- // reflect failed tasks that need to be re-run.
- backend.reviveOffers()
+ if (!taskSetManager.taskFinished(tid)) {
+ taskSetManager.handleFailedTask(tid, taskState, reason)
Review comment:
This is only one of the places where `handleFailedTask` is called. Shall
we move the `taskFinished` check inside `handleFailedTask` so that all call
sites are covered? Same for `handleSuccessfulTask`.
And let's add some comments to explain why we do this.
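For example, something like this (a rough sketch of the shape I mean, not the actual patch; it assumes `taskFinished` is the helper this PR adds and elides the existing method bodies):

```scala
// In TaskSetManager (sketch): guard at the top of each handler so that
// every call site is covered, not just the one in TaskSchedulerImpl.
def handleSuccessfulTask(tid: Long, result: DirectTaskResult[_]): Unit = {
  if (taskFinished(tid)) {
    // Another attempt (e.g. a speculative copy) already finished this task;
    // skip the duplicate state update.
    return
  }
  // ... existing success handling ...
}

def handleFailedTask(tid: Long, state: TaskState, reason: TaskFailedReason): Unit = {
  if (taskFinished(tid)) {
    // Same reasoning: ignore events for a task that has already finished.
    return
  }
  // ... existing failure handling ...
}
```

That keeps the check in one place instead of duplicating it at each caller.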
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]