Kun Deng created ZEPPELIN-4897:
----------------------------------

             Summary: 0.9.0 preview doesn't work with the newly released Spark 3.0
                 Key: ZEPPELIN-4897
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4897
             Project: Zeppelin
          Issue Type: Bug
          Components: spark
    Affects Versions: 0.9.0
         Environment: |SPARK_HOME| /usr/local/Cellar/apache-spark/3.0.0/libexec |Location of Spark distribution|
|master| local[*] |Spark master URI. local \| yarn-client \| yarn-cluster \| Spark master address of standalone mode, ex) spark://master_host:7077|
|spark.app.name| Zeppelin |The name of the Spark application.|
|spark.driver.cores| 1 |Number of cores to use for the driver process, only in cluster mode.|
|spark.driver.memory| 1g |Amount of memory to use for the driver process, i.e. where SparkContext is initialized, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g).|
|spark.executor.cores| 1 |The number of cores to use on each executor.|
|spark.executor.memory| 1g |Executor memory per worker instance. ex) 512m, 32g|
|spark.files| |Comma-separated list of files to be placed in the working directory of each executor. Globs are allowed.|
|spark.jars| |Comma-separated list of jars to include on the driver and executor classpaths. Globs are allowed.|
|spark.jars.packages| |Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths. The coordinates should be groupId:artifactId:version. If spark.jars.ivySettings is given, artifacts will be resolved according to the configuration in the file; otherwise artifacts will be searched for in the local Maven repo, then Maven Central, and finally any additional remote repositories given by the command-line option --repositories.|
|zeppelin.spark.useHiveContext| true |Use HiveContext instead of SQLContext if it is true. Enables Hive for SparkSession.|
|zeppelin.spark.printREPLOutput| true |Print REPL output.|
|zeppelin.spark.maxResult| 1000 |Max number of results to display.|
|zeppelin.spark.enableSupportedVersionCheck| true |Whether to check the supported Spark version. Developer-only setting, not for production use.|
|zeppelin.spark.uiWebUrl| |Override the Spark UI default URL.|
|zeppelin.spark.ui.hidden| false |Whether to hide the Spark UI in the Zeppelin UI.|
|spark.webui.yarn.useProxy| false |Whether to use the YARN proxy URL as the Spark web URL, e.g. http://localhost:8088/proxy/application_1583396598068_0004|
|zeppelin.spark.scala.color| true |Whether to enable colored output of the Spark Scala interpreter.|
|zeppelin.spark.deprecatedMsg.show| true |Whether to show the Spark deprecation message; Spark 2.2 and earlier are deprecated. Zeppelin displays the warning message by default.|
|zeppelin.spark.concurrentSQL| false |Execute multiple SQL statements concurrently if set to true.|
|zeppelin.spark.concurrentSQL.max| 10 |Max number of SQL statements executed concurrently.|
|zeppelin.spark.sql.stacktrace| false |Show the full exception stacktrace for SQL queries if set to true.|
|zeppelin.spark.sql.interpolation| false |Enable ZeppelinContext variable interpolation into Spark SQL.|
|PYSPARK_PYTHON| /Users/kundeng/miniconda3/bin/python |Python binary executable to use for PySpark in both driver and workers (default is python2.7 if available, otherwise python). Property `spark.pyspark.python` takes precedence if it is set.|
|PYSPARK_DRIVER_PYTHON| python |Python binary executable to use for PySpark in the driver only (default is `PYSPARK_PYTHON`). Property `spark.pyspark.driver.python` takes precedence if it is set.|
|zeppelin.pyspark.useIPython| true |Whether to use IPython when it is available.|
|zeppelin.R.knitr| true |Whether to use knitr or not.|
|zeppelin.R.cmd| R |R binary executable path.|
|zeppelin.R.image.width| 100% |Image width of R plotting.|
|zeppelin.R.render.options| out.format = 'html', comment = NA, echo = FALSE, results = 'asis', message = F, warning = F, fig.retina = 2 | |
|zeppelin.kotlin.shortenTypes| true |Show short types instead of full, e.g. List<String> instead of kotlin.collections.List<kotlin.String>|
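
As context for the table above: instead of setting SPARK_HOME and the PySpark binaries per interpreter, the same values can be exported in conf/zeppelin-env.sh. This is only an illustrative sketch; the paths are taken verbatim from the environment table and would need adjusting for another machine.

```shell
# conf/zeppelin-env.sh -- illustrative fragment, paths copied from the
# reporter's environment table above; adjust to your own install.
export SPARK_HOME=/usr/local/Cellar/apache-spark/3.0.0/libexec
export PYSPARK_PYTHON=/Users/kundeng/miniconda3/bin/python
export PYSPARK_DRIVER_PYTHON=python
```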
            Reporter: Kun Deng


MacBook Pro with the official Spark 3.0 installed via Homebrew.

Running Zeppelin 0.9.0-preview1 gives the following error:

org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter
 at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76)
 at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
 at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:577)
 at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
 at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
 at org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:39)
 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter
 at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:114)
 at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
 ... 8 more
Caused by: java.lang.reflect.InvocationTargetException
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
 at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:292)
 at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:223)
 at org.apache.zeppelin.spark.SparkScala212Interpreter.open(SparkScala212Interpreter.scala:90)
 at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:98)
 ... 9 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.storage.StorageUtils$
 at org.apache.spark.storage.BlockManagerMasterEndpoint.<init>(BlockManagerMasterEndpoint.scala:93)
 at org.apache.spark.SparkEnv$.$anonfun$create$9(SparkEnv.scala:370)
 at org.apache.spark.SparkEnv$.registerOrLookupEndpoint$1(SparkEnv.scala:311)
 at org.apache.spark.SparkEnv$.create(SparkEnv.scala:359)
 at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:189)
 at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:267)
 at org.apache.spark.SparkContext.<init>(SparkContext.scala:442)
 at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
 at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
 at scala.Option.getOrElse(Option.scala:189)
 at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
 ... 17 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)