[ https://issues.apache.org/jira/browse/SPARK-25783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16657435#comment-16657435 ]

Marcelo Vanzin commented on SPARK-25783:
----------------------------------------

I think the problem is that the "without hadoop" profiles end up excluding the 
things that depend on jline, so jline doesn't get packaged. The shell then picks 
up the jline that is in the Hadoop distro, which in this case is too old.
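
You can verify the packaging side directly (a quick sanity check, assuming 
SPARK_HOME points at the hadoop-provided distribution):

{code:bash}
# Does the hadoop-provided build bundle jline? If the profile excluded it,
# this prints nothing, and the Hadoop distro's copy wins at runtime.
ls "$SPARK_HOME"/jars | grep -i jline
{code}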

(I don't know which version of jline Hadoop or Hive uses, but the CDH 
install on my cluster has 2.11, which is older than what Spark needs.)
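
To see which jline the Hadoop install actually supplies, something like this 
works (a sketch; {{hadoop classpath --glob}} expands the wildcard entries and 
should be available on Hadoop 2.6+):

{code:bash}
# List any jline jars that 'hadoop classpath' drags in; the file name
# usually carries the version, e.g. jline-2.11.jar.
hadoop classpath --glob | tr ':' '\n' | grep -i jline
{code}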

The repl/ project should probably depend on jline explicitly, so that it's 
always packaged with Spark regardless of the Hadoop-related profiles, even 
though the code doesn't call it directly.
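
Something along these lines in repl/pom.xml might do it (a sketch, not the 
final fix; the version would be pinned centrally rather than here):

{code:xml}
<!-- Sketch: declare jline explicitly so it is packaged even when the
     hadoop-provided profile prunes the transitive dependencies that
     used to pull it in. The version should come from the parent pom's
     dependencyManagement, matching what the Scala compiler expects. -->
<dependency>
  <groupId>jline</groupId>
  <artifactId>jline</artifactId>
</dependency>
{code}

With that in place, a -Phadoop-provided distribution should still end up with 
a jline jar under jars/, independent of what the Hadoop classpath provides.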

> Spark shell fails because of jline incompatibility
> --------------------------------------------------
>
>                 Key: SPARK-25783
>                 URL: https://issues.apache.org/jira/browse/SPARK-25783
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.4.0
>         Environment: spark 2.4.0-rc3 on hadoop 2.6.0 (cdh 5.15.1) with 
> -Phadoop-provided
>            Reporter: koert kuipers
>            Priority: Minor
>
> The error I get when launching spark-shell is:
> {code:bash}
> Spark context Web UI available at http://client:4040
> Spark context available as 'sc' (master = yarn, app id = application_xxx).
> Spark session available as 'spark'.
> Exception in thread "main" java.lang.NoSuchMethodError: jline.console.completer.CandidateListCompletionHandler.setPrintSpaceAfterFullCompletion(Z)V
>       at scala.tools.nsc.interpreter.jline.JLineConsoleReader.initCompletion(JLineReader.scala:139)
>       at scala.tools.nsc.interpreter.jline.InteractiveReader.postInit(JLineReader.scala:54)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$1.apply(SparkILoop.scala:190)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$1.apply(SparkILoop.scala:188)
>       at scala.tools.nsc.interpreter.SplashReader.postInit(InteractiveReader.scala:130)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(SparkILoop.scala:214)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1$1.apply(SparkILoop.scala:199)
>       at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
>       at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
>       at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1.org$apache$spark$repl$SparkILoop$$anonfun$$loopPostInit$1(SparkILoop.scala:199)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:267)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(SparkILoop.scala:247)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1.withSuppressedSettings$1(SparkILoop.scala:235)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1.startup$1(SparkILoop.scala:247)
>       at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:282)
>       at org.apache.spark.repl.SparkILoop.runClosure(SparkILoop.scala:159)
>       at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:182)
>       at org.apache.spark.repl.Main$.doMain(Main.scala:78)
>       at org.apache.spark.repl.Main$.main(Main.scala:58)
>       at org.apache.spark.repl.Main.main(Main.scala)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
>       at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
>       at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
>       at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
>       at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
>       at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
>       at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:935)
>       at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}
> spark 2.4.0-rc3, which I built with:
> {code:bash}
> dev/make-distribution.sh --name provided --tgz -Phadoop-2.6 -Dhadoop.version=2.6.0 -Pyarn -Phadoop-provided
> {code}
> and deployed with the following in spark-env.sh:
> {code:bash}
> export SPARK_DIST_CLASSPATH=$(hadoop classpath)
> {code}
> Hadoop version is 2.6.0 (CDH 5.15.1).


