GitHub user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2127#discussion_r16752571
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala ---
    @@ -1045,31 +1045,38 @@ class DAGScheduler(
             stage.pendingTasks += task
     
           case FetchFailed(bmAddress, shuffleId, mapId, reduceId) =>
    -        // Mark the stage that the reducer was in as unrunnable
             val failedStage = stageIdToStage(task.stageId)
    -        markStageAsFinished(failedStage, Some("Fetch failure"))
    -        runningStages -= failedStage
    -        // TODO: Cancel running tasks in the stage
    -        logInfo("Marking " + failedStage + " (" + failedStage.name +
    -          ") for resubmission due to a fetch failure")
    -        // Mark the map whose fetch failed as broken in the map stage
             val mapStage = shuffleToMapStage(shuffleId)
    +        // It is likely that we receive multiple FetchFailed for a single stage (because we have
    +        // multiple tasks running concurrently on different executors). In that case, it is possible
    +        // the fetch failure has already been handled by the executor.
    +        if (runningStages.contains(failedStage)) {
    +          markStageAsFinished(failedStage, Some("Fetch failure"))
    +          runningStages -= failedStage
    +          // TODO: Cancel running tasks in the stage
    +          logInfo("Marking " + failedStage + " (" + failedStage.name +
    +            ") for resubmission due to a fetch failure")
    +
    +          logInfo("The failed fetch was from " + mapStage + " (" + mapStage.name +
    +            "); marking it for resubmission")
    +          if (failedStages.isEmpty && eventProcessActor != null) {
    +            // Don't schedule an event to resubmit failed stages if failedStages isn't empty,
    +            // because in that case the event will already have been scheduled.
    +            // eventProcessActor may be null during unit tests.
    +            import env.actorSystem.dispatcher
    +            env.actorSystem.scheduler.scheduleOnce(
    +              RESUBMIT_TIMEOUT, eventProcessActor, ResubmitFailedStages)
    +          }
    +          failedStages += failedStage
    +          failedStages += mapStage
    +        }
    +
    +        // Mark the map whose fetch failed as broken in the map stage
             if (mapId != -1) {
               mapStage.removeOutputLoc(mapId, bmAddress)
               mapOutputTracker.unregisterMapOutput(shuffleId, mapId, bmAddress)
    --- End diff ---
    
    Is that a problem? I think the reduce stage retry will fail, leading to 
resubmission anyway?
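
    For anyone skimming the thread, here is a minimal, self-contained sketch of the
    debouncing pattern this diff introduces. It uses a plain ScheduledExecutorService
    in place of the Akka scheduler and eventProcessActor, and all names are
    illustrative rather than the actual DAGScheduler API: duplicate FetchFailed
    events are ignored once the stage has left runningStages, and a single delayed
    resubmission is scheduled only when failedStages goes from empty to non-empty.

        import java.util.concurrent.{Executors, TimeUnit}
        import scala.collection.mutable

        // Illustrative sketch only; hypothetical Stage type, not Spark's.
        object ResubmitSketch {
          case class Stage(id: Int, name: String)

          private val runningStages = mutable.Set[Stage]()
          private val failedStages = mutable.Set[Stage]()
          private val timer = Executors.newSingleThreadScheduledExecutor()
          private val RESUBMIT_TIMEOUT_MS = 200L // stand-in for RESUBMIT_TIMEOUT

          def handleFetchFailure(failedStage: Stage, mapStage: Stage): Unit = synchronized {
            // Only the first FetchFailed for this stage gets past the guard; later
            // duplicates find the stage already removed from runningStages.
            if (runningStages.contains(failedStage)) {
              runningStages -= failedStage
              // Schedule exactly one delayed resubmission when failedStages goes
              // from empty to non-empty; subsequent failures piggyback on it.
              if (failedStages.isEmpty) {
                timer.schedule(new Runnable {
                  def run(): Unit = resubmitFailedStages()
                }, RESUBMIT_TIMEOUT_MS, TimeUnit.MILLISECONDS)
              }
              failedStages += failedStage
              failedStages += mapStage
            }
          }

          private def resubmitFailedStages(): Unit = synchronized {
            val stages = failedStages.toSeq
            failedStages.clear()
            stages.foreach(s => println(s"resubmitting stage ${s.id} (${s.name})"))
          }
        }

    The RESUBMIT_TIMEOUT delay acts as a small batching window, so several fetch
    failures arriving close together produce one resubmission pass rather than one
    event per failure.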

