>>> It appears that during execution time on the yarn hosts, the native CDH
>>> spark1.5 jars are loaded before the new spark2 jars. I've tried using
>>> spark.yarn.archive to specify the spark2 jars in hdfs as well as using
>>> other spark options, none of which seems to make a difference.
Thanks. I can reach out to Cloudera, although the same commands seem to
work via spark-shell (see below), so the issue seems unique to Zeppelin.
Spark context available as 'sc' (master = yarn, app id =
application_1472496315722_481416).
Spark session available as 'spark'.
Welcome to
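For reference, this is roughly how we point Zeppelin at the Spark 2 jars; the paths and archive name below are hypothetical placeholders, not our actual values:

```shell
# In conf/zeppelin-env.sh (or the Spark interpreter settings),
# point SPARK_HOME at the Spark 2 install instead of the CDH Spark 1.5 one:
export SPARK_HOME=/opt/spark2                      # hypothetical path

# In spark-defaults.conf (or as a Spark interpreter property),
# ship the Spark 2 jars from HDFS so YARN containers use them:
# spark.yarn.archive  hdfs:///user/spark/spark2-archive.zip   # hypothetical HDFS path
```

Even with these set, the YARN containers still appear to pick up the CDH 1.5 jars first.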
Hello Apache Zeppelin team,
For our open-source project we built a web component to visualize time
series data. As I'd like to develop some demos on Zeppelin, I wrote a
Zeppelin interpreter to communicate with it.
Right now, I have to rebuild the web app to integrate this component (add a
line