Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/21495#discussion_r192961905
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala ---
@@ -21,8 +21,22 @@ import scala.collection.mutable
 import scala.tools.nsc.Settings
 import scala.tools.nsc.interpreter._

-class SparkILoopInterpreter(settings: Settings, out: JPrintWriter) extends IMain(settings, out) {
-  self =>
+class SparkILoopInterpreter(settings: Settings, out: JPrintWriter, initializeSpark: () => Unit)
+  extends IMain(settings, out) { self =>
+
+  /**
+   * We override `initializeSynchronous` to initialize Spark *after* `intp` is properly initialized
+   * and *before* the REPL sees any files in the private `loadInitFiles` functions, so that
+   * the Spark context is visible in those files.
+   *
+   * This is a bit of a hack, but there isn't another hook available to us at this point.
+   *
+   * See the discussion in Scala community https://github.com/scala/bug/issues/10913 for detail.
+   */
+  override def initializeSynchronous(): Unit = {
+    super.initializeSynchronous()
+    initializeSpark()
--- End diff ---
@som-snytt It's working, but I'm wondering whether I'm doing it correctly.
With this approach, `$intp` is used without the error check that `ILoop.scala` normally performs:
```scala
if (intp.reporter.hasErrors) {
  echo("Interpreter encountered errors during initialization!")
  null
}
```
Also, `intp.quietBind(NamedParam[IMain]("$intp", intp)(tagOfIMain, classTag[IMain]))` will not have been executed yet when our custom Spark initialization code runs.
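To make the ordering concern concrete, here is a minimal sketch. The classes and method names below (`ToyInterp`, `bindIntp`, `OrderingDemo`) are illustrative stand-ins for the real `IMain`/`ILoop` machinery, not the actual nsc API; they only demonstrate that an `initializeSynchronous` override of this shape runs the Spark hook after the interpreter's own init but before `$intp` would be bound:

```scala
import scala.collection.mutable

// Toy stand-in for IMain: `events` records the order in which steps run.
class ToyInterp(events: mutable.Buffer[String]) {
  def initializeSynchronous(): Unit = events += "interpreter-init"
  // In the real ILoop, $intp is bound via quietBind only after initialization.
  def bindIntp(): Unit = events += "bind-$intp"
}

// Mirrors the override in the diff: run the Spark hook right after super's init.
class ToySparkInterp(events: mutable.Buffer[String], initializeSpark: () => Unit)
    extends ToyInterp(events) {
  override def initializeSynchronous(): Unit = {
    super.initializeSynchronous()
    initializeSpark()
  }
}

object OrderingDemo {
  def run(): Seq[String] = {
    val events = mutable.Buffer.empty[String]
    val intp = new ToySparkInterp(events, () => events += "spark-init")
    intp.initializeSynchronous() // Spark init fires here...
    intp.bindIntp()              // ...before $intp would be bound
    events.toSeq
  }

  def main(args: Array[String]): Unit =
    println(run().mkString(", "))
}
```

Running this prints `interpreter-init, spark-init, bind-$intp`, i.e. exactly the ordering the comment above relies on.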
---