Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/22004#discussion_r207753682
--- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -2369,39 +2369,12 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi
     assert(scheduler.getShuffleDependencies(rddE) === Set(shuffleDepA, shuffleDepC))
   }
-  test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages" +
+  test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages " +
     "still behave correctly on fetch failures") {
-    // Runs a job that always encounters a fetch failure, so should eventually be aborted
--- End diff ---
If you move
```
object FailThisAttempt {
val _fail = new AtomicBoolean(true)
}
```
outside the tests as a top-level object, the tests pass; there is no need to move the functions to the companion object.
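For concreteness, a minimal sketch of the layout I mean (`FailThisAttemptDemo` is a hypothetical stand-in for a test body, not the suite code):
```
import java.util.concurrent.atomic.AtomicBoolean

// Top-level object, outside any test body: lambdas in the tests can
// reference it directly instead of capturing the enclosing suite instance.
object FailThisAttempt {
  val _fail = new AtomicBoolean(true)
}

// Hypothetical stand-in showing the intended usage: fail exactly the first
// attempt, then succeed on every retry.
object FailThisAttemptDemo extends App {
  def attempt(): String =
    if (FailThisAttempt._fail.getAndSet(false)) "fetch failure" else "success"

  println(attempt()) // fetch failure
  println(attempt()) // success
}
```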
Btw, the closure cleaner does not look into the body of a lambda to check whether references to other objects create an issue; that is done only for the old-style closures. According to the design doc, for lambdas we only check the return statements. Also, lambdas don't have outers by definition.
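To make the lambda point concrete, a small sketch (names are hypothetical, and the bytecode shapes described in the comments are my understanding, not quoted from the cleaner):
```
// Hypothetical example contrasting the two closure shapes.
object ClosureShapes {
  def build(n: Int): (Int => Int, Int => Int) = {
    // Old-style closure: an explicit anonymous class. Written inside a
    // class body, this shape carries an $outer chain, which is what the
    // ClosureCleaner walks and inspects for problematic references.
    val oldStyle = new Function1[Int, Int] {
      def apply(x: Int): Int = x + n
    }

    // Scala 2.12 lambda: compiled via LambdaMetafactory. It has no $outer
    // field; `n` is captured directly as an argument, and the cleaner does
    // not descend into the lambda body.
    val asLambda = (x: Int) => x + n

    (oldStyle, asLambda)
  }
}
```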
Regarding LegacyAccumulatorWrapper: there is no closure involved there.
---