venkata91 commented on a change in pull request #33896:
URL: https://github.com/apache/spark/pull/33896#discussion_r741556426



##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########
@@ -1716,7 +1739,31 @@ private[spark] class DAGScheduler(
             if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
               if (!shuffleStage.shuffleDep.shuffleMergeFinalized &&
                 shuffleStage.shuffleDep.getMergerLocs.nonEmpty) {
-                scheduleShuffleMergeFinalize(shuffleStage)
+                // Check if a finalize task has already been scheduled. This is to prevent the
+                // following scenario: Stage A attempt 0 fails and gets retried. Stage A attempt 1
+                // succeeds, triggering the scheduling of shuffle merge finalization. However,
+                // tasks from Stage A attempt 0 might still be running and sending task completion
+                // events to DAGScheduler. This check prevents multiple attempts to schedule merge
+                // finalization from getting triggered due to this.
+                if (shuffleStage.shuffleDep.getFinalizeTask.isEmpty) {
+                  // If the total shuffle size is smaller than the threshold, attempt to immediately
+                  // schedule shuffle merge finalization and process map stage completion.
+                  val totalSize = Try(mapOutputTracker
+                    .getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum).getOrElse(0L)

Review comment:
       This would fail when some of the task outputs are lost due to node/executor loss, which causes `MapOutputTracker.getStatistics` to throw an exception. Thoughts?
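
       To make the concern concrete, here is a minimal, self-contained Scala sketch of the failure mode. The names (`getStatisticsOrThrow`, `totalSize`) and the data shapes are hypothetical stand-ins rather than the real `MapOutputTracker` API; the sketch only illustrates how the `Try(...).getOrElse(0L)` pattern in the diff behaves when map outputs are missing:

```scala
import scala.util.Try

object TotalSizeSketch {
  // Hypothetical stand-in for MapOutputTracker.getStatistics: throws when any
  // map output is missing (e.g. after node/executor loss), mirroring the
  // failure mode described above.
  def getStatisticsOrThrow(bytesByPartition: Array[Option[Long]]): Array[Long] =
    bytesByPartition.map(_.getOrElse(throw new IllegalStateException("missing map output")))

  // The Try(...).getOrElse(0L) pattern from the diff: an exception from the
  // statistics lookup silently collapses the total shuffle size to 0.
  def totalSize(bytesByPartition: Array[Option[Long]]): Long =
    Try(getStatisticsOrThrow(bytesByPartition).sum).getOrElse(0L)

  def main(args: Array[String]): Unit = {
    val healthy  = Array[Option[Long]](Some(10L), Some(20L))
    val degraded = Array[Option[Long]](Some(10L), None) // one output lost

    println(totalSize(healthy))  // 30
    println(totalSize(degraded)) // 0 -- reads as a "small" shuffle, which would
                                 // trigger immediate merge finalization
  }
}
```

       If the silent fallback to 0 is undesirable, one option is to pattern match on the `Try` result (`Success`/`Failure`) and handle the lost-output case explicitly instead of defaulting.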




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


