Ngone51 commented on a change in pull request #33896:
URL: https://github.com/apache/spark/pull/33896#discussion_r745329800



##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########
@@ -1716,7 +1739,31 @@ private[spark] class DAGScheduler(
             if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
               if (!shuffleStage.shuffleDep.shuffleMergeFinalized &&
                 shuffleStage.shuffleDep.getMergerLocs.nonEmpty) {
-                scheduleShuffleMergeFinalize(shuffleStage)
+                // Check if a finalize task has already been scheduled. This is to prevent the
+                // following scenario: Stage A attempt 0 fails and gets retried. Stage A attempt 1
+                // succeeds, triggering the scheduling of shuffle merge finalization. However,
+                // tasks from Stage A attempt 0 might still be running and sending task completion
+                // events to the DAGScheduler. This check prevents multiple attempts to schedule
+                // merge finalization from being triggered by those stale events.

Review comment:
       > Pass the stage attempt id to finalizeShuffleMerge, etc.
       > Before finalizing a stage, check if the latest attempt id is the same as the input param - and if not, avoid finalization.
   
   IIUC, I think we need a finalizing flag on the stage instead of the stage attempt id.
   
   > Also we would still need the ScheduledTask right
   
   The `ScheduledTask` is needed anyway, but by adding a new flag we can avoid tracking and cancelling the `ScheduledTask`.
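   
   For illustration, here is a minimal, self-contained sketch of how such a finalizing flag could guard against duplicate scheduling. The names `MergeState` and `tryScheduleFinalize` are hypothetical and only model the guard, not the actual `DAGScheduler` code:
   
   ```scala
   object FinalizeFlagSketch {
     final class MergeState {
       // Flipped to true by the first event that schedules finalization.
       private var finalizeScheduled = false
   
       // Returns true only for the first caller; later callers (e.g. stray
       // task-completion events from an older stage attempt) get false and
       // must skip scheduling.
       def tryScheduleFinalize(): Boolean = synchronized {
         if (finalizeScheduled) {
           false
         } else {
           finalizeScheduled = true
           true
         }
       }
     }
   
     def main(args: Array[String]): Unit = {
       val state = new MergeState
       // Event from stage attempt 1 (the successful attempt) arrives first
       // and wins the right to schedule finalization.
       assert(state.tryScheduleFinalize())
       // A late completion event from stage attempt 0 arrives afterwards
       // and is skipped, so finalization is scheduled exactly once.
       assert(!state.tryScheduleFinalize())
       println("merge finalization scheduled exactly once")
     }
   }
   ```
   
   With this approach the flag is checked and set under the same lock, so the `ScheduledTask` never needs to be tracked or cancelled just to avoid duplicate scheduling.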




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


