rednaxelafx opened a new pull request #28463:
URL: https://github.com/apache/spark/pull/28463


   ### What changes were proposed in this pull request?
   
   This PR proposes to enhance Spark's `ClosureCleaner` to support "indylambda" 
style of Scala closures to the same level as the existing implementation for 
the old (inner class) style ones. The goal is to reach feature parity with the 
support of the old style Scala closures, with as close to bug-for-bug 
compatibility as possible.
   
   The changes are intended to be minimal, with further code cleanups planned 
in separate PRs.
   
   Jargon:
   - old, inner class style Scala closures, aka `delambdafy:inline`: default in 
Scala 2.11 and before
   - new, "indylambda" style Scala closures, aka `delambdafy:method`: default 
in Scala 2.12 and later
   
   ### Why are the changes needed?
   
   There have been previous efforts to extend Spark's `ClosureCleaner` to 
support "indylambda" Scala closures, which is necessary for proper Scala 2.12 
support, most notably the work done for 
[SPARK-14540](https://issues.apache.org/jira/browse/SPARK-14540).
   
   But those previous efforts missed one important scenario: a Scala closure 
declared in a Scala REPL that captures the enclosing `this` -- a REPL line 
object. For instance, in a Spark Shell:
   ```scala
   :pa
   class NotSerializableClass(val x: Int)
   val ns = new NotSerializableClass(42)
   val topLevelValue = "someValue"
   val func = (j: Int) => {
     (1 to j).flatMap { x =>
       (1 to x).map { y => y + topLevelValue }
     }
   }
   <Ctrl+D>
   sc.parallelize(0 to 2).map(func).collect
   ```
   In this example, `func` refers to a Scala closure that captures the 
enclosing `this` because it needs to access `topLevelValue`, which is in turn 
implemented as a field on the enclosing REPL line object.
   
   The existing `ClosureCleaner` in Spark supports cleaning this case in Scala 
2.11-, and this PR brings feature parity to Scala 2.12+.
   
   For more background of the new and old ways Scala lowers closures to Java 
bytecode, please see [A note on how NSC (New Scala Compiler) lowers 
lambdas](https://gist.github.com/rednaxelafx/e9ecd09bbd1c448dbddad4f4edf25d48#file-notes-md).
   
   For more background on how Spark's `ClosureCleaner` works and what's needed 
to make it support "indylambda" Scala closures, please refer to [A Note on 
Apache Spark's 
ClosureCleaner](https://gist.github.com/rednaxelafx/e9ecd09bbd1c448dbddad4f4edf25d48#file-spark_closurecleaner_notes-md).
   
   #### tl;dr
   
   The `ClosureCleaner` works like a mark-sweep algorithm on fields:
   - Finding (a chain of) outer objects referenced by the starting closure;
   - Scanning the starting closure and its inner closures and marking the 
fields on the outer objects accessed;
   - Cloning the outer objects, nulling out fields that are not accessed by any 
closure of concern.
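   The clone-and-null step can be sketched with plain JDK reflection. This is a 
hypothetical, simplified illustration, not Spark's actual code (Spark allocates 
the clone without invoking any constructor and walks a whole chain of outer 
objects; the `CloneAndClean`/`LineObject` names below are made up): given an 
outer object and the set of field names the closures actually access, build a 
clone with every unaccessed field nulled out.
   ```java
   import java.lang.reflect.Field;
   import java.util.Set;

   public class CloneAndClean {
       // Sketch of the "sweep" phase: copy only the fields that some closure
       // of concern actually reads; explicitly null out everything else.
       public static Object cloneAndNull(Object outer, Set<String> accessedFields)
               throws Exception {
           // A public no-arg constructor keeps this sketch dependency-free;
           // the real cleaner avoids running constructors altogether.
           Object clone = outer.getClass().getDeclaredConstructor().newInstance();
           for (Field f : outer.getClass().getDeclaredFields()) {
               f.setAccessible(true);
               if (accessedFields.contains(f.getName())) {
                   f.set(clone, f.get(outer));
               } else if (!f.getType().isPrimitive()) {
                   f.set(clone, null);  // no closure needs this field
               }
           }
           return clone;
       }

       // Stand-in for a REPL line object: one field the closure uses, one it doesn't.
       public static class LineObject {
           public String topLevelValue = "someValue";
           public Object ns = new Object();  // not serializable, not accessed
           public LineObject() {}
       }

       public static void main(String[] args) throws Exception {
           LineObject cleaned = (LineObject)
               cloneAndNull(new LineObject(), Set.of("topLevelValue"));
           System.out.println(cleaned.topLevelValue + " / " + cleaned.ns);
           // prints: someValue / null
       }
   }
   ```
   With the unaccessed `ns` field nulled out, the cleaned line object no longer 
drags non-serializable state into the serialized closure.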
   
   ##### Outer Objects
   
   For the old, inner class style Scala closures, the "outer objects" are 
defined as the lexically enclosing closures of the starting closure, plus an 
optional enclosing REPL line object if these closures are defined in a Scala 
REPL. All of them are on a singly-linked `$outer` chain.
   
   For the new, "indylambda" style Scala closures, the capturing implementation 
changed, so closures no longer refer to their enclosing closures via an 
`$outer` chain. However, a closure can still capture its enclosing REPL line 
object, much like the old style closures. The name of the field that captures 
this reference would be `arg$1` (instead of `$outer`).
   
   So what's missing in the `ClosureCleaner` for "indylambda" support is to 
find, and potentially clone and clean, the captured enclosing `this` REPL line 
object. That's what this PR implements.
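   The `arg$1` capture can be observed directly with reflection. Here is a 
small Java illustration (Java 8+ lambdas go through the same 
`LambdaMetafactory` machinery as Scala 2.12 indylambda closures; the exact 
`arg$N` field names are a JDK implementation detail, and the class/method 
names below are made up): a lambda that reads an instance field captures the 
enclosing `this` in a field named `arg$1`.
   ```java
   import java.io.Serializable;
   import java.lang.reflect.Field;

   public class CapturedThis {
       public String topLevelValue = "someValue";

       public interface SerFunc extends Serializable {
           String apply(int i);
       }

       public SerFunc make() {
           // Reads an instance field, so the lambda captures `this` --
           // the analogue of a closure capturing a REPL line object.
           return i -> topLevelValue + i;
       }

       public static void main(String[] args) {
           SerFunc f = new CapturedThis().make();
           // The LambdaMetafactory-generated class stores captured values in
           // fields named arg$1, arg$2, ... (vs the inner-class style `$outer`).
           for (Field fld : f.getClass().getDeclaredFields()) {
               System.out.println(fld.getName() + " : " + fld.getType().getName());
           }
       }
   }
   ```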
   
   ##### Inner Closures
   
   The old, inner class style of Scala closures are compiled into separate 
inner classes, one per lambda body. So in order to discover the implementation 
(bytecode) of the inner closures, one has to jump over multiple classes. The 
name of such a class would contain the marker substring `$anonfun$`.
   
   The new, "indylambda" style Scala closures are compiled into **static 
methods** in the class where the lambdas were declared. So for lexically nested 
closures, their lambda bodies would all be compiled into static methods **in 
the same class**. This makes it much easier to discover the implementation 
(bytecode) of the nested lambda bodies. The name of such a static method would 
contain the marker substring `$anonfun$`.
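   One convenient way to get from a closure object back to its implementation 
method is the `SerializedLambda` that every serializable lambda produces via 
its synthetic `writeReplace` method. The Java sketch below shows the idea 
(`javac`'s method-name marker is `lambda$` rather than Scala's `$anonfun$`, 
and the `ImplMethod`/`sample`/`extract` names are made up for illustration):
   ```java
   import java.io.Serializable;
   import java.lang.invoke.SerializedLambda;
   import java.lang.reflect.Method;

   public class ImplMethod {
       public interface SerFunc extends Serializable {
           int apply(int i);
       }

       public static SerFunc sample() {
           return i -> i + 1;  // javac typically emits a static lambda$sample$0 here
       }

       // Pull the SerializedLambda out of a serializable closure object.
       public static SerializedLambda extract(Object closure) throws Exception {
           Method writeReplace = closure.getClass().getDeclaredMethod("writeReplace");
           writeReplace.setAccessible(true);
           return (SerializedLambda) writeReplace.invoke(closure);
       }

       public static void main(String[] args) throws Exception {
           SerializedLambda sl = extract(sample());
           // Names the class and static method holding the lambda body.
           System.out.println(sl.getImplClass() + "#" + sl.getImplMethodName());
       }
   }
   ```
   Because the lambda bodies of nested closures all live as static methods in 
one class, resolving `getImplClass` plus `getImplMethodName` is enough to find 
the bytecode to scan.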
   
   Discovery of inner closures involves scanning bytecode for certain patterns 
that represent the creation of a closure object for the inner closure.
   - For inner class style: the closure object creation site is like `new 
<InnerClassForTheClosure>(captured args)`
   - For "indylambda" style: the closure object creation site would be compiled 
into an `invokedynamic` instruction, with its "bootstrap method" pointing to 
the same one used by Java 8 for its serializable lambdas, and with the 
bootstrap method arguments pointing to the implementation method.
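   As a crude way to see that an `invokedynamic` lambda site references that 
bootstrap machinery, one can probe the class file's constant pool for the 
`LambdaMetafactory` name. This string probe is only for illustration (the 
`IndyDetect` name is made up, and it assumes the class sits in the default 
package so the resource path resolves); a real scanner walks the bytecode and 
bootstrap-method attributes with a library such as ASM.
   ```java
   import java.io.InputStream;
   import java.nio.charset.StandardCharsets;

   public class IndyDetect {
       public static boolean referencesLambdaMetafactory() throws Exception {
           Runnable r = () -> {};  // forces an invokedynamic site in this class
           r.run();
           try (InputStream in =
                    IndyDetect.class.getResourceAsStream("IndyDetect.class")) {
               byte[] bytes = in.readAllBytes();
               // The constant pool of a class with indy lambdas carries a
               // UTF-8 entry naming the bootstrap class.
               return new String(bytes, StandardCharsets.ISO_8859_1)
                   .contains("java/lang/invoke/LambdaMetafactory");
           }
       }

       public static void main(String[] args) throws Exception {
           System.out.println(referencesLambdaMetafactory());
       }
   }
   ```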
   
   ### Does this PR introduce _any_ user-facing change?
   
   Yes. Before this PR, Spark 2.4 / 3.0 / master on Scala 2.12 would not 
support Scala closures declared in a Scala REPL that capture anything from the 
REPL line objects. After this PR, such a scenario is supported.
   
   ### How was this patch tested?
   
   WIP/TBD

