This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch branch-3.5
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.5 by this push:
     new b5f3dc9e760 [SPARK-45498][CORE] Followup: Ignore task completion from old stage a…
b5f3dc9e760 is described below

commit b5f3dc9e76082a81357555ace0c489df97e6f81a
Author: mayurb <may...@uber.com>
AuthorDate: Fri Oct 13 10:17:56 2023 +0800

    [SPARK-45498][CORE] Followup: Ignore task completion from old stage a…
    
    ### What changes were proposed in this pull request?
    With [SPARK-45182](https://issues.apache.org/jira/browse/SPARK-45182), we added a fix that stops laggard tasks from older attempts of an indeterminate stage from marking the partition as completed in the map output tracker.
    
    When a task completes, the DAG scheduler also notifies all task sets of the stage that the partition is completed, and those task sets will then not schedule the corresponding tasks if they have not been scheduled yet. This is not correct for an indeterminate stage, since we want to re-run all of its tasks on a re-attempt.
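    
    A condensed sketch of the guard this change introduces in `DAGScheduler.handleTaskCompletion` (the full context is in the diff below); the surrounding method code is elided:
    
    ```scala
    case Success =>
      // A determinate stage can reuse a partition completed by an older (zombie)
      // attempt, so tell the task scheduler to skip the duplicate task.
      // An indeterminate stage must re-run every task, so no notification is sent.
      if (!stage.isIndeterminate && task.stageAttemptId < stage.latestInfo.attemptNumber()) {
        taskScheduler.notifyPartitionCompletion(stageId, task.partitionId)
      }
    ```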
    
    ### Why are the changes needed?
    Without this fix, the partition is not marked as completed by the older attempt, and the task for that partition from the newer attempt does not get scheduled either, so the stage has to be rescheduled to complete that partition. Since the stage is indeterminate, all of its partitions are then recomputed.
    
    ### Does this PR introduce _any_ user-facing change?
    No
    
    ### How was this patch tested?
    Added a check to an existing unit test.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No
    
    Closes #43326 from mayurdb/indeterminateFix.
    
    Authored-by: mayurb <may...@uber.com>
    Signed-off-by: Wenchen Fan <wenc...@databricks.com>
    (cherry picked from commit fb3b707bc1c875c14ff7c6e7a3f39b5c4b852c86)
    Signed-off-by: Wenchen Fan <wenc...@databricks.com>
---
 core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala   | 6 +++---
 .../test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala   | 5 ++++-
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala b/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
index d73bb633901..d8adaae19b9 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
@@ -1847,9 +1847,9 @@ private[spark] class DAGScheduler(
       case Success =>
         // An earlier attempt of a stage (which is zombie) may still have running tasks. If these
         // tasks complete, they still count and we can mark the corresponding partitions as
-        // finished. Here we notify the task scheduler to skip running tasks for the same partition,
-        // to save resource.
-        if (task.stageAttemptId < stage.latestInfo.attemptNumber()) {
+        // finished if the stage is determinate. Here we notify the task scheduler to skip running
+        // tasks for the same partition to save resource.
+        if (!stage.isIndeterminate && task.stageAttemptId < stage.latestInfo.attemptNumber()) {
           taskScheduler.notifyPartitionCompletion(stageId, task.partitionId)
         }
 
diff --git a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
index e351f8b95bb..9b7c5d5ace3 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
@@ -3169,13 +3169,16 @@ class DAGSchedulerSuite extends SparkFunSuite with TempLocalSparkContext with Ti
       makeMapStatus("hostB",
         2)))
 
-    // The second task of the  shuffle map stage 1 from 1st attempt succeeds
+    // The second task of the shuffle map stage 1 from 1st attempt succeeds
     runEvent(makeCompletionEvent(
       taskSets(1).tasks(1),
       Success,
       makeMapStatus("hostC",
         2)))
 
+    // Above task completion should not mark the partition 1 complete from 2nd attempt
+    assert(!tasksMarkedAsCompleted.contains(taskSets(3).tasks(1)))
+
     // This task completion should get ignored and partition 1 should be missing
     // for shuffle map stage 1
     assert(mapOutputTracker.findMissingPartitions(shuffleId2) == Some(Seq(1)))


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
