[GitHub] spark pull request #21495: [SPARK-24418][Build] Upgrade Scala to 2.11.12 and...

2018-06-10 Thread som-snytt
Github user som-snytt commented on a diff in the pull request:

https://github.com/apache/spark/pull/21495#discussion_r194294485
  
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala ---
@@ -21,8 +21,22 @@ import scala.collection.mutable
 import scala.tools.nsc.Settings
 import scala.tools.nsc.interpreter._
 
-class SparkILoopInterpreter(settings: Settings, out: JPrintWriter) extends IMain(settings, out) {
-  self =>
+class SparkILoopInterpreter(settings: Settings, out: JPrintWriter, initializeSpark: () => Unit)
+  extends IMain(settings, out) { self =>
--- End diff --

It's definitely two spaces after a period. I've been wanting to make that 
joke, but held off.


---




[GitHub] spark issue #21495: [SPARK-24418][Build] Upgrade Scala to 2.11.12 and 2.12.6

2018-06-07 Thread som-snytt
Github user som-snytt commented on the issue:

https://github.com/apache/spark/pull/21495
  
@dbtsai 2.11 looks similar to 2.12. Do you mean you want the same technique 
on 2.10? I would not expect to find a single hook for all versions.


---




[GitHub] spark pull request #21495: [SPARK-24418][Build] Upgrade Scala to 2.11.12 and...

2018-06-07 Thread som-snytt
Github user som-snytt commented on a diff in the pull request:

https://github.com/apache/spark/pull/21495#discussion_r193938805
  
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala ---
@@ -21,8 +21,22 @@ import scala.collection.mutable
 import scala.tools.nsc.Settings
 import scala.tools.nsc.interpreter._
 
-class SparkILoopInterpreter(settings: Settings, out: JPrintWriter) extends IMain(settings, out) {
-  self =>
+class SparkILoopInterpreter(settings: Settings, out: JPrintWriter, initializeSpark: () => Unit)
+  extends IMain(settings, out) { self =>
+
+  /**
+   * We override `initializeSynchronous` to initialize Spark *after* `intp` is properly initialized
+   * and *before* the REPL sees any files in the private `loadInitFiles` functions, so that
+   * the Spark context is visible in those files.
+   *
+   * This is a bit of a hack, but there isn't another hook available to us at this point.
+   *
+   * See the discussion in Scala community https://github.com/scala/bug/issues/10913 for detail.
+   */
+  override def initializeSynchronous(): Unit = {
+    super.initializeSynchronous()
+    initializeSpark()
--- End diff --

Completion was upgraded since the old days; also, other bugs required 
updating jline. There is interest in upgrading to jline 3.
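
For context, the caller side of this diff looks roughly like the following (a paraphrased sketch, not the exact Spark source): the loop hands its Spark-init thunk to the interpreter it creates, and the override in the diff invokes it at the right moment.

```scala
import scala.tools.nsc.interpreter.ILoop

// Paraphrased sketch of the caller side (not the exact Spark source).
class SparkILoop extends ILoop {
  def initializeSpark(): Unit = {
    // e.g. interpret the lines that create and bind the Spark session
    intp.interpret("""println("spark session created")""")
  }

  // Hand the init thunk to the interpreter defined in the diff above.
  override def createInterpreter(): Unit = {
    intp = new SparkILoopInterpreter(settings, out, () => initializeSpark())
  }
}
```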


---




[GitHub] spark issue #21495: [SPARK-24418][Build] Upgrade Scala to 2.11.12 and 2.12.6

2018-06-05 Thread som-snytt
Github user som-snytt commented on the issue:

https://github.com/apache/spark/pull/21495
  
The Scala REPL change at startup was to read user input while the 
single-threaded compiler was initializing on the main thread.

There is a `SplashReader` that collects that input; its last act is to 
replace itself with the real interactive reader. (I see that `-e` is 
implemented as a reader of the expression string.)

Since you can forgo the illusion of a "snappy start-up", I think your 
`printWelcome` could just block on a latch, waiting for your custom init to 
complete. (Your init code could either just print stuff, or it could stash a 
string for your printWelcome to append.)

Never mind, that doesn't work because printWelcome is on the main thread, 
not the splash thread; since printWelcome is expensive I/O, I ought to have had 
the foresight to move it to the splash thread.

So, your best bet for synchronous startup is to do everything in 
printWelcome: createInterpreter, then your init commands that produce output. 
`IMain` has a compiler that is initialized lazily, so I don't think you have to 
call `intp.initializeSynchronous` explicitly.

But `createInterpreter` will be called again; you'll want to detect that 
and make it a no-op. The method is also called for `:replay`; I don't know 
whether that is useful in the Spark environment.
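
To make that concrete, a minimal sketch of the synchronous-startup idea 
(2.11-era ILoop API; the class name and `initCommands` are illustrative, not 
Spark's actual code):

```scala
import scala.tools.nsc.interpreter.ILoop

// Minimal sketch of synchronous startup: do everything in printWelcome and
// guard createInterpreter against the later call from the normal startup path.
class SyncStartupLoop(initCommands: Seq[String]) extends ILoop {
  private var created = false

  override def printWelcome(): Unit = {
    createInterpreter()                          // do the expensive work here,
    initCommands.foreach(c => intp.interpret(c)) // so init output precedes the prompt
    echo("Welcome (interpreter ready)")
  }

  // Detect the later call (also made for :replay) and make it a no-op.
  override def createInterpreter(): Unit =
    if (!created) {
      super.createInterpreter()
      created = true
    }
}
```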

Unfortunately, there is no option for specifying a custom splash reader 
that might do these things.

As a final irony, I did some janitorial work to deprecate `-Yrepl-sync`, 
which was unused (I think it was put in because the old threading was broken); 
maybe I will revive it for this use case, to skip the splash reader which 
prints the prompt you don't want.




---




[GitHub] spark pull request #21369: [SPARK-22713][CORE] ExternalAppendOnlyMap leaks w...

2018-05-26 Thread som-snytt
Github user som-snytt commented on a diff in the pull request:

https://github.com/apache/spark/pull/21369#discussion_r191064975
  
--- Diff: core/src/main/scala/org/apache/spark/util/collection/ExternalAppendOnlyMap.scala ---
@@ -585,17 +591,25 @@ class ExternalAppendOnlyMap[K, V, C](
       } else {
         logInfo(s"Task ${context.taskAttemptId} force spilling in-memory map to disk and " +
           s"it will release ${org.apache.spark.util.Utils.bytesToString(getUsed())} memory")
-        nextUpstream = spillMemoryIteratorToDisk(upstream)
+        val nextUpstream = spillMemoryIteratorToDisk(upstream)
+        assert(!upstream.hasNext)
         hasSpilled = true
+        upstream = nextUpstream
         true
       }
     }
 
+    private def destroy() : Unit = {
+      freeCurrentMap()
+      upstream = Iterator.empty
--- End diff --

+1
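
For the record, the shape of the pattern the diff applies, as a minimal 
self-contained sketch (illustrative names, not Spark's actual classes): 
spilling swaps the in-memory upstream for a disk-backed iterator, and 
destroy() drops the reference so the map can be reclaimed.

```scala
// Minimal sketch of the pattern in the diff (illustrative, not Spark's code).
class SpillableIterator[T](private var upstream: Iterator[T]) extends Iterator[T] {
  private var hasSpilled = false

  // Replace the in-memory upstream with a disk-backed iterator, at most once.
  def spill(spillToDisk: Iterator[T] => Iterator[T]): Boolean =
    if (hasSpilled) false
    else {
      val nextUpstream = spillToDisk(upstream) // drains upstream to disk
      assert(!upstream.hasNext)
      hasSpilled = true
      upstream = nextUpstream
      true
    }

  // Drop the reference so the in-memory map can be GC'd even while the
  // iterator itself remains reachable (the leak the PR fixes).
  def destroy(): Unit = upstream = Iterator.empty

  override def hasNext: Boolean = upstream.hasNext
  override def next(): T = upstream.next()
}
```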


---




[GitHub] spark issue #17982: [SPARK-20395][BUILD] Update Scala to 2.11.11 and zinc to...

2017-05-20 Thread som-snytt
Github user som-snytt commented on the issue:

https://github.com/apache/spark/pull/17982
  
Thanks for the effort. I'll take a hack at it soon. If it's hopeless, I'll at 
least try to track developments with the new REPL API.


---



[GitHub] spark issue #17982: [SPARK-20395][BUILD] Update Scala to 2.11.11 and zinc to...

2017-05-15 Thread som-snytt
Github user som-snytt commented on the issue:

https://github.com/apache/spark/pull/17982
  
The contract was never well-defined.

`sbt` overrides `createInterpreter` and runs its `initialCommands` before 
returning. So initialCommands runs before the REPL finishes initialization in 
`loopPostInit`: no `$intp` yet, power mode not entered, startup code and `-i` 
files not yet run.

I can't tell just by looking if that suits your needs here, but it's the 
most ancient hook enshrined by sbt.


https://github.com/sbt/sbt/blob/0.13/compile/interface/src/main/scala/xsbt/ConsoleInterface.scala#L27
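
The shape of that hook, as a minimal sketch (illustrative; sbt's actual 
ConsoleInterface differs in detail):

```scala
import scala.tools.nsc.interpreter.ILoop

// Sketch of the sbt-style hook: run initialCommands inside createInterpreter,
// before loopPostInit has bound $intp or processed -i files.
class InitialCommandsLoop(initialCommands: String) extends ILoop {
  override def createInterpreter(): Unit = {
    super.createInterpreter()
    if (initialCommands.nonEmpty)
      intp.interpret(initialCommands)
  }
}
```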

@adriaanm is taking executive action to modularize the REPL and make an API 
for reuse. Possibly you're aware of it, or perhaps he's only just started and 
is in stealth mode. That will address the issue in the future.


---



[GitHub] spark pull request: [ SPARK-1812] Adjust build system and tests to...

2014-10-22 Thread som-snytt
Github user som-snytt commented on the pull request:

https://github.com/apache/spark/pull/2615#issuecomment-60188566
  
I had a version of the SI-7747 fix that moved the scripting logic to a 
separate compiler phase which could be replaced by a user plugin.  (But that 
also moved the repl away from text-based templating to tree transforms, so it 
was broader in scope.  Also, I'm not sure a plugin counts as decoupled from 
compiler details.) The work for that PR was only to satisfy Spark's needs as a 
repl consumer, so why not reopen the issue.
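
For reference, the general shape of a user-replaceable phase as a compiler 
plugin (a generic skeleton, not the actual SI-7747 work):

```scala
import scala.tools.nsc.{Global, Phase}
import scala.tools.nsc.plugins.{Plugin, PluginComponent}

// Generic compiler-plugin skeleton (not the SI-7747 work itself): a phase
// slotted after the parser, where script/repl wrapping transforms could live.
class ScriptWrapPlugin(val global: Global) extends Plugin {
  val name = "scriptwrap"
  val description = "illustrative home for script/repl wrapping logic"
  val components = List[PluginComponent](Component)

  private object Component extends PluginComponent {
    val global: ScriptWrapPlugin.this.global.type = ScriptWrapPlugin.this.global
    val phaseName = "scriptwrap"
    val runsAfter = List("parser")

    def newPhase(prev: Phase): Phase = new StdPhase(prev) {
      def apply(unit: global.CompilationUnit): Unit = {
        // tree transforms on unit.body would go here
      }
    }
  }
}
```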


---



[GitHub] spark pull request: Added support for :cp jar that was broken in...

2014-08-19 Thread som-snytt
Github user som-snytt commented on the pull request:

https://github.com/apache/spark/pull/1929#issuecomment-52670339
  
FWIW, I remember that my fix included overriding the ClassPath (in the 
JavaPlatform) of the REPL compiler.  So it was a bit more nuanced in handling 
the replacement; it was a long time ago now, in Internet or dog years.  But 
having an additive solution is better than nothing, the perfect being the 
enemy of the good and all that.


---



[GitHub] spark pull request: Added support for :cp jar that was broken in...

2014-08-14 Thread som-snytt
Github user som-snytt commented on the pull request:

https://github.com/apache/spark/pull/1929#issuecomment-52230440
  
This is similar to what I tried last year, but I recently saw:

https://github.com/scala/scala/pull/3884

which says the mechanisms are in flux in 2.11.
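
For reference, what eventually landed in the 2.11 series is a @DeveloperApi on 
`IMain` (as I recall it, `addUrlsToClassPath`, which updates both the runtime 
classloader and the compiler classpath); a minimal usage sketch with an 
illustrative jar path:

```scala
import java.io.File
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

// Minimal usage sketch; "extra.jar" is illustrative.
object AddJarExample {
  def main(args: Array[String]): Unit = {
    val settings = new Settings
    settings.usejavacp.value = true
    val intp = new IMain(settings)
    // Adds to both the runtime classloader and the compiler classpath.
    intp.addUrlsToClassPath(new File("extra.jar").toURI.toURL)
    intp.interpret("""println("classes from extra.jar now resolve")""")
  }
}
```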



---