mridulm commented on a change in pull request #33896:
URL: https://github.com/apache/spark/pull/33896#discussion_r744193682



##########
File path: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
##########
@@ -1716,7 +1739,31 @@ private[spark] class DAGScheduler(
             if (runningStages.contains(shuffleStage) && shuffleStage.pendingPartitions.isEmpty) {
               if (!shuffleStage.shuffleDep.shuffleMergeFinalized &&
                 shuffleStage.shuffleDep.getMergerLocs.nonEmpty) {
-                scheduleShuffleMergeFinalize(shuffleStage)
+                // Check if a finalize task has already been scheduled. This is to prevent the
+                // following scenario: Stage A attempt 0 fails and gets retried. Stage A attempt 1
+                // succeeds, triggering the scheduling of shuffle merge finalization. However,
+                // tasks from Stage A attempt 0 might still be running and sending task completion
+                // events to DAGScheduler. This check prevents multiple attempts to schedule merge
+                // finalization from being triggered due to this.
+                if (shuffleStage.shuffleDep.getFinalizeTask.isEmpty) {
+                  // If the total shuffle size is smaller than the threshold, attempt to immediately
+                  // schedule shuffle merge finalization and process map stage completion.
+                  val totalSize = Try(mapOutputTracker
+                    .getStatistics(shuffleStage.shuffleDep).bytesByPartitionId.sum).getOrElse(0L)

Review comment:
       Need to finalize for all stages, irrespective of whether they failed or not - makes sense, thanks for clarifying!
   
   For a non-deterministic stage, we should do what is being done right now - immediately finalize and ignore all ESS responses when the stage is not available (since we are not going to use them).
   
   What we need to address here is that we are throwing away deterministic stage output because some of it is unavailable - output which can actually be reused during retry.
   
   This is a consequence of using the `getStatistics` api (an NPE is thrown from `getStatistics`, so the `Try` returns 0L).
   This needs to be addressed.
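
   As a self-contained illustration of that failure mode (plain nested arrays standing in for `MapStatus` entries - names here are purely hypothetical):

   ```scala
   import scala.util.Try

   // Illustrative sketch only: a null entry (standing in for a missing
   // MapStatus) makes the aggregation throw an NPE, and the surrounding
   // Try collapses the 30 bytes of output we *do* know about down to 0L.
   val perMapSizes: Array[Array[Long]] = Array(Array(10L, 20L), null)
   val totalSize: Long = Try(perMapSizes.map(_.sum).sum).getOrElse(0L)
   // totalSize is 0L: the deterministic stage's known output size is lost
   ```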
   
   A rough sketch would be: mirror the functionality of `ShufflePartitionUtil.getMapSizesForReduceId` and set the size to `-1` when a `MapStatus` is null in `getStatistics` - all existing usages will continue to work as-is, given they are within `isAvailable`.
   
   ```
   if (shuffleStage.shuffleDep.getFinalizeTask.isEmpty) {

     // Comment describing:
     // - if failed and NON_DET: immediately finalize and ignore merge output.
     // - if failed and DET: decide based on the size available and
     //   shuffleMergeWaitMinSizeThreshold (to keep or ignore merge output).
     // - if available: decide based on the size available and
     //   shuffleMergeWaitMinSizeThreshold.

     val totalSize = {
       lazy val computedTotalSize =
         mapOutputTracker.getStatistics(shuffleStage.shuffleDep)
           .bytesByPartitionId.filter(_ > 0).sum
       // note: isAvailable is cheap compared to computing the size via
       // getStatistics when output is missing
       if (shuffleStage.isAvailable) {
         computedTotalSize
       } else {
         if (shuffleStage.isNonDeterministic) {
           0L
         } else {
           // Modify `getStatistics` to be null-aware for `MapStatus`.
           computedTotalSize
         }
       }
     }

     ...
   }
   ```
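
   For the null-aware `getStatistics` change itself, a rough standalone sketch (with `MapStatusStub` as a hypothetical stand-in for `MapStatus`; the real change would live in `MapOutputTracker`, and this is just one possible semantics):

   ```scala
   final case class MapStatusStub(bytesByPartition: Array[Long])

   // Sketch: aggregate per-partition sizes, but mark partitions as -1
   // (unknown) when a MapStatus is missing, instead of throwing an NPE.
   // Callers that filter for positive sizes - like the totalSize
   // computation above - then simply skip the unknown partitions.
   def nullAwareBytesByPartitionId(
       statuses: Array[MapStatusStub],
       numPartitions: Int): Array[Long] = {
     val totals = new Array[Long](numPartitions)
     for (status <- statuses; i <- 0 until numPartitions) {
       if (status == null) {
         totals(i) = -1L // this map's output is missing, so the size is unknown
       } else if (totals(i) >= 0) {
         totals(i) += status.bytesByPartition(i)
       }
     }
     totals
   }
   ```

   With all map outputs present this matches the current behavior; with a missing one, the affected partitions report `-1` and existing callers guarded by `isAvailable` never observe the sentinel.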
   
   Thoughts?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
