Hi,

I followed http://zeppelin.apache.org/docs/latest/interpreter/spark.html
and set SPARK_HOME and HADOOP_CONF_DIR.
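
For reference, here is roughly what I have in conf/zeppelin-env.sh (the
exact paths below are placeholders, not my real ones):

  export SPARK_HOME=/opt/spark-2.0.0-bin-hadoop2.7   # my Spark 2.0 build
  export HADOOP_CONF_DIR=/etc/hadoop/conf            # holds core-site.xml and yarn-site.xml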

My Spark build is 2.0. My Zeppelin is the 0.6.1 binary downloaded from the web.

After starting Zeppelin, I went to the interpreter settings page and changed
the Spark interpreter settings as follows:

master: yarn
deploy-mode: client
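
My understanding is that these settings should be equivalent to launching a
shell directly against the cluster with something like:

  $SPARK_HOME/bin/spark-shell --master yarn --deploy-mode client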

Then, I created a new notebook and executed:

%spark

spark.version

The block never finishes; there is no error either.

In ./logs/zeppelin-interpreter-spark-*.log, I found the following, which I
think is the cause of my problem:

 INFO [2016-10-09 06:28:37,074] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Added JAR file:/opt/zeppelin-0.6.1-bin-all/interpreter/spark/zeppelin-spark_2.11-0.6.1.jar at spark://172.17.0.3:38775/jars/zeppelin-spark_2.11-0.6.1.jar with timestamp 1475994517073
 INFO [2016-10-09 06:28:37,150] ({pool-2-thread-2} Logging.scala[logInfo]:54) - Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1
 INFO [2016-10-09 06:28:38,205] ({pool-2-thread-2} RMProxy.java[createRMProxy]:98) - Connecting to ResourceManager at /0.0.0.0:8032
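
My yarn-site.xml does define a real ResourceManager address, not 0.0.0.0.
One way to confirm what the config actually contains (assuming GNU grep):

  grep -A 2 'yarn.resourcemanager' "$HADOOP_CONF_DIR"/yarn-site.xml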

It looks like Zeppelin is using neither my Spark binary nor my Hadoop
configuration: 0.0.0.0:8032 is YARN's default ResourceManager address, the
one used when no yarn-site.xml is found.
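
Is there a way to confirm what the interpreter JVM actually sees? On Linux I
would expect something like this to work (RemoteInterpreterServer is the
interpreter's main class; <pid> is a placeholder for its process id):

  # find the Spark interpreter process started by Zeppelin
  ps -ef | grep RemoteInterpreterServer
  # dump its environment and look for the two variables
  tr '\0' '\n' < /proc/<pid>/environ | grep -E 'SPARK_HOME|HADOOP_CONF_DIR'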

What did I miss?

-- 
Thanks,
David S.
