This is an automated email from the ASF dual-hosted git repository.
wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
     new fb3b707bc1c [SPARK-45498][CORE] Followup: Ignore task completion from old stage a…
fb3b707bc1c is described below
commit fb3b707bc1c875c14ff7c6e7a3f39b5c4b852c86
Author: mayurb <[email protected]>
AuthorDate: Fri Oct 13 10:17:56 2023 +0800
[SPARK-45498][CORE] Followup: Ignore task completion from old stage a…
### What changes were proposed in this pull request?
With [SPARK-45182](https://issues.apache.org/jira/browse/SPARK-45182), we added a fix that prevents laggard tasks from older attempts of an indeterminate stage from marking a partition as completed in the map output tracker.
When a task completes, the DAG scheduler also notifies all task sets of the stage that the partition is completed. Task sets then skip scheduling such tasks if they are not already scheduled. This is not correct for an indeterminate stage, since we want to re-run all of its tasks on a re-attempt.
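The guard this commit adds can be modeled in isolation. The sketch below is a hypothetical simplification: `Stage` and `Task` here are stand-ins, not Spark's real classes, and `shouldNotifyPartitionCompletion` is an invented name for the condition in `DAGScheduler.handleTaskCompletion`:

```scala
// Hypothetical stand-ins for the fields the guard actually reads.
case class Stage(isIndeterminate: Boolean, latestAttemptNumber: Int)
case class Task(stageAttemptId: Int, partitionId: Int)

// Only tell other task sets to skip the partition when the stage is
// determinate AND the completion came from an older (zombie) attempt.
def shouldNotifyPartitionCompletion(stage: Stage, task: Task): Boolean =
  !stage.isIndeterminate && task.stageAttemptId < stage.latestAttemptNumber

// Indeterminate stage: a completion from attempt 0 must not mark the
// partition as done for attempt 1, so the new attempt re-runs it.
assert(!shouldNotifyPartitionCompletion(Stage(isIndeterminate = true, 1), Task(0, 1)))
// Determinate stage: the old behavior is kept and the duplicate task is skipped.
assert(shouldNotifyPartitionCompletion(Stage(isIndeterminate = false, 1), Task(0, 1)))
```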
### Why are the changes needed?
Since the partition is not completed by the older attempts and its task from the newer attempt also doesn't get scheduled, the stage has to be rescheduled to complete that partition. And because the stage is indeterminate, all of its partitions will be recomputed.
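The cost being avoided follows from how retries differ by determinism. This is a hypothetical simplification (the function name and signature are invented for illustration, not Spark's API): a determinate stage can re-run only its missing partitions, while an indeterminate stage must re-run everything because its per-partition output is not reproducible:

```scala
// Hypothetical sketch of which partitions a stage retry must run.
def partitionsToRecompute(isIndeterminate: Boolean,
                          missing: Seq[Int],
                          all: Seq[Int]): Seq[Int] =
  if (isIndeterminate) all // output not reproducible per partition: redo all
  else missing             // determinate: only the missing partitions

assert(partitionsToRecompute(isIndeterminate = true, Seq(1), Seq(0, 1)) == Seq(0, 1))
assert(partitionsToRecompute(isIndeterminate = false, Seq(1), Seq(0, 1)) == Seq(1))
```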
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Added a check in an existing unit test
### Was this patch authored or co-authored using generative AI tooling?
No
Closes #43326 from mayurdb/indeterminateFix.
Authored-by: mayurb <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
---
core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala | 6 +++---
.../test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala | 5 ++++-
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala b/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
index a456f91d4c9..07a71ebed08 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
@@ -1847,9 +1847,9 @@ private[spark] class DAGScheduler(
       case Success =>
         // An earlier attempt of a stage (which is zombie) may still have running tasks. If these
         // tasks complete, they still count and we can mark the corresponding partitions as
-        // finished. Here we notify the task scheduler to skip running tasks for the same partition,
-        // to save resource.
-        if (task.stageAttemptId < stage.latestInfo.attemptNumber()) {
+        // finished if the stage is determinate. Here we notify the task scheduler to skip running
+        // tasks for the same partition to save resource.
+        if (!stage.isIndeterminate && task.stageAttemptId < stage.latestInfo.attemptNumber()) {
           taskScheduler.notifyPartitionCompletion(stageId, task.partitionId)
         }
diff --git a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
index 7bb8f49e6bf..7691b98f620 100644
--- a/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
+++ b/core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala
@@ -3169,13 +3169,16 @@ class DAGSchedulerSuite extends SparkFunSuite with TempLocalSparkContext with Ti
       makeMapStatus("hostB",
         2)))
 
-    // The second task of the shuffle map stage 1 from 1st attempt succeeds
+    // The second task of the shuffle map stage 1 from 1st attempt succeeds
     runEvent(makeCompletionEvent(
       taskSets(1).tasks(1),
       Success,
       makeMapStatus("hostC",
         2)))
 
+    // Above task completion should not mark the partition 1 complete from 2nd attempt
+    assert(!tasksMarkedAsCompleted.contains(taskSets(3).tasks(1)))
+
     // This task completion should get ignored and partition 1 should be missing
     // for shuffle map stage 1
     assert(mapOutputTracker.findMissingPartitions(shuffleId2) == Some(Seq(1)))
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]