Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22004#discussion_r207753917
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -2369,39 +2369,12 @@ class DAGSchedulerSuite extends SparkFunSuite with
LocalSparkContext with TimeLi
assert(scheduler.getShuffleDependencies(rddE) === Set(shuffleDepA,
shuffleDepC))
}
- test("SPARK-17644: After one stage is aborted for too many failed
attempts, subsequent stages" +
+ test("SPARK-17644: After one stage is aborted for too many failed
attempts, subsequent stages " +
"still behave correctly on fetch failures") {
- // Runs a job that always encounters a fetch failure, so should
eventually be aborted
--- End diff --
Yeah, you've said it more correctly: it's really the body of the lambda
capturing references, and in this case the reference just happens to be to an
enclosing class of the test method.
This is the kind of thing I wonder if we *do* have to do, like in the old
closure cleaner. Because LMF closures capture far less to begin with, it's much
less of an issue. I also remember that closure cleaning such things got dicey,
because it's not clear when it's OK to null out some object's field. It may
not be possible to do reasonably.
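A minimal sketch of the capture behavior being discussed (this is illustrative Java, not Spark code; the class and method names are made up for the example): a lambda whose body reads an instance field implicitly reads `this.field`, so the class LambdaMetafactory generates for it holds a reference to the enclosing instance, while a lambda with no free variables captures nothing. We can observe this reflectively by checking the synthetic fields of the generated lambda class.

```java
import java.lang.reflect.Field;
import java.util.function.Supplier;

public class CaptureDemo {
    private final int answer = 42;

    // The body reads the instance field `answer`, i.e. `this.answer`, so
    // the LambdaMetafactory-generated class captures `this` (the enclosing
    // CaptureDemo instance) as a constructor argument.
    Supplier<Integer> capturing() {
        return () -> answer;
    }

    // No free variables: the generated class captures nothing.
    static Supplier<Integer> nonCapturing() {
        return () -> 7;
    }

    // Reflectively check whether the lambda object holds a reference to a
    // CaptureDemo instance among its synthetic fields.
    static boolean capturesOuter(Object lambda) {
        for (Field f : lambda.getClass().getDeclaredFields()) {
            if (f.getType() == CaptureDemo.class) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        CaptureDemo outer = new CaptureDemo();
        System.out.println(capturesOuter(outer.capturing()));   // true
        System.out.println(capturesOuter(nonCapturing()));      // false
    }
}
```

This is also why nulling fields (as the old closure cleaner did) is delicate: the captured reference lives in a synthetic field of a generated class, and whether it is safe to clear depends on what the lambda body actually touches.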
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]