Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/22004#discussion_r207877348
--- Diff: core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -2369,39 +2369,12 @@ class DAGSchedulerSuite extends SparkFunSuite with LocalSparkContext with TimeLi
     assert(scheduler.getShuffleDependencies(rddE) === Set(shuffleDepA, shuffleDepC))
   }
-  test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages" +
+  test("SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages " +
     "still behave correctly on fetch failures") {
-    // Runs a job that always encounters a fetch failure, so should eventually be aborted
--- End diff ---
@adriaanm thanks for that comment, it is great to understand what is happening
with the janino thing.
Here I am referring to the object FailThisAttempt, which has to be moved outside
the function in the test case to make serialization work. So it seems that in Scala
2.11 serialization worked without cleaning anything at all.
A similar local example I have is:
```
import java.util.concurrent.atomic.AtomicBoolean

import org.apache.spark.util.ClosureCleaner

test("external reference") {
  def runJobWithTemporaryFetchFailure: Unit = {
    // Object defined inside the method, mirroring the structure of the original test.
    object FailThisAttempt {
      val _fail = new AtomicBoolean(true)
    }
    val retC = new C1()  // C1 is a class defined elsewhere in this local experiment
    ClosureCleaner.clean(() => {
      if (FailThisAttempt._fail.get()) println("dsdsds") else println("dd")
      4
    })
  }
  runJobWithTemporaryFetchFailure
}
```
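For reference, a minimal sketch of the workaround I am describing: FailThisAttempt is moved out of the method so the lambda only references a stable object and no longer drags in the enclosing method scope. The test name and the `private` placement here are just illustrative, not the exact change in the PR:
```
import java.util.concurrent.atomic.AtomicBoolean

import org.apache.spark.util.ClosureCleaner

// Defined outside the method (as a member of the suite), so the closure
// below references it directly instead of capturing method-local state.
private object FailThisAttempt {
  val _fail = new AtomicBoolean(true)
}

test("external reference, object moved out") {
  def runJobWithTemporaryFetchFailure: Unit = {
    ClosureCleaner.clean(() => {
      if (FailThisAttempt._fail.get()) println("dsdsds") else println("dd")
      4
    })
  }
  runJobWithTemporaryFetchFailure
}
```
With the object out of the method, the idea is that serialization of the cleaned closure works on Scala 2.12 as it did on 2.11.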
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]