squito commented on a change in pull request #24375: [SPARK-25250][CORE] try best to not submit tasks when the partitions are already completed
URL: https://github.com/apache/spark/pull/24375#discussion_r275516792
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
 ##########
 @@ -870,22 +874,21 @@ private[spark] class TaskSchedulerImpl(
   }
 
   /**
-   * Marks the task has completed in all TaskSetManagers for the given stage.
+   * Marks the task has completed in the active TaskSetManager for the given stage.
    *
    * After stage failure and retry, there may be multiple TaskSetManagers for the stage.
-   * If an earlier attempt of a stage completes a task, we should ensure that the later attempts
-   * do not also submit those same tasks.  That also means that a task completion from an earlier
-   * attempt can lead to the entire stage getting marked as successful.
+   * If an earlier zombie attempt of a stage completes a task, we can ask the later active attempt
+   * to skip submitting and running the task for the same partition, to save resource. That also
+   * means that a task completion from an earlier zombie attempt can lead to the entire stage
+   * getting marked as successful.
    */
-  private[scheduler] def markPartitionCompletedInAllTaskSets(
+  private[scheduler] def markPartitionCompleted(
 
 Review comment:
   taskResultGetter is a multithreaded pool (default 4 threads), so I think you still need extra protection here.
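
   For illustration, a minimal sketch of the kind of guard being suggested, assuming the method mutates per-stage TaskSetManager state that the result-getter threads share. The taskSetsByStageIdAndAttempt field and the TaskSetManager-side markPartitionCompleted call below are assumptions made for the sketch, not a quote of the PR's actual code:

       // Sketch only, with assumed names: TaskResultGetter delivers task results
       // on a small thread pool (spark.resultGetter.threads, default 4), so
       // concurrent calls can race on the scheduler's per-stage maps. Taking the
       // TaskSchedulerImpl lock, as other scheduler-state mutations do, is one
       // way to add the extra protection.
       private[scheduler] def markPartitionCompleted(
           stageId: Int,
           partitionId: Int): Unit = synchronized {
         taskSetsByStageIdAndAttempt.get(stageId).foreach { attempts =>
           // Only non-zombie (active) attempts still launch tasks, so only they
           // need to learn that the partition is already done.
           attempts.values.filter(!_.isZombie).foreach { tsm =>
             tsm.markPartitionCompleted(partitionId)
           }
         }
       }

   Whether the whole-scheduler lock or a finer-grained guard is the right choice depends on what other state the method touches; the sketch just shows where the mutual exclusion would sit.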
