Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22004#discussion_r207752563
--- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -2369,39 +2369,12 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi
assert(scheduler.getShuffleDependencies(rddE) === Set(shuffleDepA,
shuffleDepC))
}
- test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages" +
+ test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages " +
"still behave correctly on fetch failures") {
- // Runs a job that always encounters a fetch failure, so should eventually be aborted
--- End diff --
@skonto in answer to your question, here's an example of a method that runs a closure that seems to capture the enclosing test class and fails to serialize. I moved the definition of these methods out of the test method, but that didn't help; moving them to the companion object did. Not sure what is going on underneath there, or whether you might have expected the closure cleaner to handle this case. I'm not worried about it, just pointing out a slightly more complex example. There are more failures in the mllib module, coming soon ...
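To make the capture issue concrete, here's a minimal, self-contained sketch of the pattern described above (all class and method names are hypothetical, not from the actual suite): a closure that calls an instance method captures the non-serializable enclosing class, while the same closure built from a companion-object method does not.

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Hypothetical stand-in for a test suite: the class itself is not
// Serializable, so any closure that captures `this` fails to serialize.
class EnclosingSuite {
  def addOne(x: Int): Int = x + 1
  // The lambda body calls an instance method, so it captures `this`,
  // dragging the whole non-serializable suite into the closure.
  def makeClosure(): Int => Int = y => addOne(y)
}

object EnclosingSuite {
  def addOne(x: Int): Int = x + 1
  // Defined on the companion object: the lambda invokes a module method
  // and captures no enclosing instance, so it serializes cleanly.
  def makeClosure(): Int => Int = y => addOne(y)
}

object ClosureCaptureDemo {
  // Returns true iff `f` survives Java serialization, the same check
  // Spark effectively performs when shipping a closure to executors.
  def serializable(f: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream).writeObject(f)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def main(args: Array[String]): Unit = {
    println(serializable(new EnclosingSuite().makeClosure()))
    println(serializable(EnclosingSuite.makeClosure()))
  }
}
```

Under this sketch, the instance-method closure fails the serialization check while the companion-object one passes, which would match the behavior described above; whether the closure cleaner should strip the `$outer` reference in the instance case is exactly the open question.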
---