Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/22004#discussion_r207754816
--- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -2369,39 +2369,12 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi
     assert(scheduler.getShuffleDependencies(rddE) === Set(shuffleDepA, shuffleDepC))
   }
-  test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages" +
+  test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages " +
     "still behave correctly on fetch failures") {
-    // Runs a job that always encounters a fetch failure, so should eventually be aborted
--- End diff --
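(For reference, the one-character change in the hunk above adds the trailing space that was missing between the two concatenated literals forming the test name. A minimal sketch of the effect, plain Scala outside the suite:

    // Without the trailing space the two literals run together in the test name:
    val before = "subsequent stages" + "still behave correctly on fetch failures"
    // => "subsequent stagesstill behave correctly on fetch failures"

    // With the space added, as on the + line of the diff, the name reads correctly:
    val after = "subsequent stages " + "still behave correctly on fetch failures"
    // => "subsequent stages still behave correctly on fetch failures"
)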
Yeah, I was also wondering whether I had to implement this as well, but I feel
people need to move to 2.12 with a different mindset, as things have changed.
I'm not sure it is possible either, so I asked @LRytz in the JIRA.
---