Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/5636#discussion_r35719924
--- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -473,6 +473,322 @@ class DAGSchedulerSuite
assertDataStructuresEmpty()
}
+ // Helper function to validate state when creating tests for task failures
+ def checkStageId(stageId: Int, attempt: Int, stageAttempt: TaskSet) {
+ assert(stageAttempt.stageId === stageId)
+ assert(stageAttempt.stageAttemptId == attempt)
+ }
+
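+ // Returns one (Success, MapStatus) completion event per task in the given stage attempt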
+ def makeCompletions(stageAttempt: TaskSet): Seq[(Success.type, MapStatus)] = {
+ stageAttempt.tasks.zipWithIndex.map { case (task, idx) =>
+ (Success, makeMapStatus("host" + ('A' + idx).toChar, stageAttempt.tasks.size))
+ }.toSeq
+ }
+
+ /**
+ * In this test we simulate a job failure where the first stage completes successfully and
+ * the second stage fails due to a fetch failure. Multiple successive fetch failures of a stage
+ * trigger an overall stage abort to avoid endless retries.
+ */
+ test("Multiple consecutive stage failures should lead to task being
aborted.") {
+ // Create a new Listener to confirm that the listenerBus sees the JobEnd message
+ // when we abort the stage. This message will also be consumed by the EventLoggingListener
+ // so this will propagate up to the user.
+ var ended = false
+ var jobResult: JobResult = null
+ class EndListener extends SparkListener {
+ override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = {
+ jobResult = jobEnd.jobResult
+ ended = true
+ }
+ }
+
+ sc.listenerBus.addListener(new EndListener())
+
+ val shuffleMapRdd = new MyRDD(sc, 2, Nil)
+ val shuffleDep = new ShuffleDependency(shuffleMapRdd, null)
+ val shuffleId = shuffleDep.shuffleId
+ val reduceRdd = new MyRDD(sc, 2, List(shuffleDep))
+ submit(reduceRdd, Array(0, 1))
+
+ for (attempt <- 0 until Stage.MAX_STAGE_FAILURES) {
+ // Complete all the tasks for the current attempt of stage 0 successfully
+ val stage0Attempt = taskSets.last
+
+ // Confirm that this is the next attempt for stage 0
+ checkStageId(0, attempt, stage0Attempt)
+
+ // Make each task in stage 0 succeed
+ val completions = makeCompletions(stage0Attempt)
--- End diff --
e.g., here, the `reduceParts` arg to `makeCompletions` should be 2, since
the next stage has 2 partitions. (That matters when this gets run on the
second attempt, because there will only be one task, so
`stageAttempt.tasks.size` is no longer the same.)
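
For concreteness, a minimal sketch of what that could look like (the
`reduceParts` name follows the comment above; the exact signature is
hypothetical and up to the author):

```scala
// Thread the next stage's partition count through explicitly instead of
// reusing stageAttempt.tasks.size, which shrinks on a retry that reruns
// only the failed tasks.
def makeCompletions(stageAttempt: TaskSet, reduceParts: Int): Seq[(Success.type, MapStatus)] = {
  stageAttempt.tasks.zipWithIndex.map { case (task, idx) =>
    (Success, makeMapStatus("host" + ('A' + idx).toChar, reduceParts))
  }.toSeq
}

// Call site in the loop: the next stage always has 2 partitions here.
val completions = makeCompletions(stage0Attempt, reduceParts = 2)
```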