GitHub user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4055#discussion_r35121095
  
    --- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
    @@ -739,6 +742,88 @@ class DAGSchedulerSuite
         assertDataStructuresEmpty()
       }
     
    +  test("verify not submit next stage while not have registered mapStatus") 
{
    +    val firstRDD = new MyRDD(sc, 3, Nil)
    +    val firstShuffleDep = new ShuffleDependency(firstRDD, null)
    +    val firstShuffleId = firstShuffleDep.shuffleId
    +    val shuffleMapRdd = new MyRDD(sc, 3, List(firstShuffleDep))
    +    val shuffleDep = new ShuffleDependency(shuffleMapRdd, null)
    +    val reduceRdd = new MyRDD(sc, 1, List(shuffleDep))
    +    submit(reduceRdd, Array(0))
    +
    +    // things start out smoothly, stage 0 completes with no issues
    +    complete(taskSets(0), Seq(
    +      (Success, makeMapStatus("hostB", shuffleMapRdd.partitions.size)),
    +      (Success, makeMapStatus("hostB", shuffleMapRdd.partitions.size)),
    +      (Success, makeMapStatus("hostA", shuffleMapRdd.partitions.size))
    +    ))
    +
    +    // then one executor dies, and a task fails in stage 1
    +    runEvent(ExecutorLost("exec-hostA"))
    +    runEvent(CompletionEvent(taskSets(1).tasks(0),
    +      FetchFailed(null, firstShuffleId, 2, 0, "Fetch failed"),
    +      null, null, createFakeTaskInfo(), null))
    +
    +    // so we resubmit stage 0, which completes happily
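    +    // failed stages are resubmitted on a short delay (ResubmitFailedStages), so wait for it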
    +    Thread.sleep(1000)
    +    val stage0Resubmit = taskSets(2)
    +    assert(stage0Resubmit.stageId == 0)
    +    assert(stage0Resubmit.stageAttemptId === 1)
    +    val task = stage0Resubmit.tasks(0)
    +    assert(task.partitionId === 2)
    +    runEvent(CompletionEvent(task, Success,
    +      makeMapStatus("hostC", shuffleMapRdd.partitions.size), null, 
createFakeTaskInfo(), null))
    +
    +    // now here is where things get tricky: we will now have a task set representing
    +    // the second attempt for stage 1, but we *also* have some tasks from the first attempt of
    +    // stage 1 still going
    +    val stage1Resubmit = taskSets(3)
    +    assert(stage1Resubmit.stageId == 1)
    +    assert(stage1Resubmit.stageAttemptId === 1)
    +    assert(stage1Resubmit.tasks.length === 3)
    +
    +    // we'll have some tasks finish from the first attempt, and some finish from the second
    +    // attempt, so that we actually have all stage outputs, though no attempt has completed
    +    // all its tasks
    +    runEvent(CompletionEvent(taskSets(3).tasks(0), Success,
    +      makeMapStatus("hostC", reduceRdd.partitions.size), null, createFakeTaskInfo(), null))
    +    runEvent(CompletionEvent(taskSets(3).tasks(1), Success,
    +      makeMapStatus("hostC", reduceRdd.partitions.size), null, createFakeTaskInfo(), null))
    +    // late task finish from the first attempt
    +    runEvent(CompletionEvent(taskSets(1).tasks(2), Success,
    +      makeMapStatus("hostB", reduceRdd.partitions.size), null, createFakeTaskInfo(), null))
    +
    +    // What should happen now is that we submit stage 2.  However, we might not see an error
    +    // b/c of DAGScheduler's error handling (it tends to swallow errors and just log them).
    +    // But we can check some conditions.
    +    // Note that the really important thing here is not so much that we submit stage 2
    +    // *immediately*, but that we don't end up with some error from these interleaved
    +    // completions.  It would also be OK (though sub-optimal) if stage 2 simply waited until
    +    // the resubmission of stage 1 had all its tasks complete
    +
    +    // check that we have all the map output for stage 0 (it should have been there even
    +    // before the last round of completions from stage 1, but just to double check it
    +    // hasn't been messed up)
    +    (0 until 3).foreach { reduceIdx =>
    +      val arr = mapOutputTracker.getServerStatuses(0, reduceIdx)
    +      assert(arr != null)
    +      assert(arr.nonEmpty)
    --- End diff --
    
    `getServerStatuses` has been removed in master -- I guess both of these should be
    
    ```scala
    val statuses = mapOutputTracker.getMapSizesByExecutorId(0, reduceIdx)
    assert(statuses != null)
    assert(statuses.nonEmpty)
    ```
    
    The new code will now throw an exception if we're missing the map output data, but I feel
    like it's probably still good to leave those asserts in.
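
    And if we ever wanted to assert the inverse, something like this should work too (a rough
    sketch -- `unregisteredShuffleId` is just a placeholder, and I'm assuming the new API
    surfaces missing output as a `MetadataFetchFailedException`, which is what master does
    today):

    ```scala
    import org.apache.spark.shuffle.MetadataFetchFailedException

    // missing map output now shows up as an exception rather than an empty result
    intercept[MetadataFetchFailedException] {
      mapOutputTracker.getMapSizesByExecutorId(unregisteredShuffleId, 0)
    }
    ```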

