Hello all,

I am trying to install Zeppelin 0.7.1 on my CDH 5.7 cluster. I have been
following the instructions here:

https://zeppelin.apache.org/docs/0.7.1/install/install.html
https://zeppelin.apache.org/docs/0.7.1/install/configuration.html
https://zeppelin.apache.org/docs/0.7.1/interpreter/spark.html

I copied the zeppelin-env.sh.template into zeppelin-env.sh and made the
following changes:
export JAVA_HOME=/usr/java/latest
export MASTER=yarn-client

export ZEPPELIN_LOG_DIR=/var/log/services/zeppelin
export ZEPPELIN_PID_DIR=/services/zeppelin/data
export ZEPPELIN_WAR_TEMPDIR=/services/zeppelin/data/jetty_tmp
export ZEPPELIN_NOTEBOOK_DIR=/services/zeppelin/data/notebooks
export ZEPPELIN_NOTEBOOK_PUBLIC=true

export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
export HADOOP_CONF_DIR=/etc/spark/conf/yarn-conf
export PYSPARK_PYTHON=/usr/lib/python
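
For completeness, a quick sanity check that the paths referenced above actually
exist on the node (these are just the paths from my zeppelin-env.sh; nothing
here is Zeppelin-specific):

```shell
# Sanity-check the paths referenced in zeppelin-env.sh.
# Paths are the ones from my config; adjust for your layout.
for p in /usr/java/latest \
         /opt/cloudera/parcels/CDH/lib/spark \
         /etc/spark/conf/yarn-conf \
         /usr/lib/python; do
  if [ -e "$p" ]; then
    echo "OK:      $p"
  else
    echo "MISSING: $p"
  fi
done
```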

I then started Zeppelin, opened the UI in my browser, and created a Spark note:

%spark
sqlContext.sql("select 1+1").collect().foreach(println)

And I get this error:

org.apache.spark.SparkException: Could not parse Master URL: 'yarn'
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2746)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:533)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_1(SparkInterpreter.java:484)
    at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:382)
    at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:828)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:483)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I specified "yarn-client" as the instructions indicate, so I'm not sure
where the bare "yarn" is coming from. My spark-defaults.conf also sets
spark.master=yarn-client.
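
In case it helps anyone reproduce: running the same query through spark-shell
directly, with the same master, should show whether the problem is in my Spark
install or in Zeppelin's interpreter. A sketch, assuming the CDH parcel layout
from my zeppelin-env.sh above:

```shell
# Sketch: exercise the same yarn-client master outside Zeppelin.
# SPARK_HOME/HADOOP_CONF_DIR are the paths from my zeppelin-env.sh.
SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
export HADOOP_CONF_DIR=/etc/spark/conf/yarn-conf
if [ -x "$SPARK_HOME/bin/spark-shell" ]; then
  echo 'sqlContext.sql("select 1+1").collect().foreach(println)' |
    "$SPARK_HOME/bin/spark-shell" --master yarn-client
else
  echo "spark-shell not found at $SPARK_HOME/bin/spark-shell"
fi
```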

Help would be greatly appreciated.

Thanks,
-- 
*BENJAMIN VOGAN* | Data Platform Team Lead
