All,
I'm still facing this issue. Any thoughts on how I can fix this?
NR
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-ExceptionInInitializerError-Unable-to-load-YARN-support-tp20775p21143.html
Sent from the Apache Spark User List mailing list
Sean,
Thanks for your response. My MapReduce and Spark 1.0 (prepackaged in CDH5)
jobs are running fine. It's only the Spark 1.2 jobs that I'm unable to run.
NR
On Dec 19, 2014 5:03 AM, "Sean Owen" wrote:
> You've got Kerberos enabled, and it's complaining that YARN doesn't
> like the Kerberos config.
You've got Kerberos enabled, and it's complaining that YARN doesn't
like the Kerberos config. Have you verified this should be otherwise
working, sans Spark?
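A quick way to check that, independent of Spark, is to kinit and run a plain Hadoop job against the same cluster. The principal, keytab, and examples-jar paths below are placeholders for whatever this CDH install actually uses:

```shell
# Obtain a Kerberos ticket for the job user (principal/keytab are placeholders)
kinit -kt /etc/security/keytabs/user.keytab user@EXAMPLE.COM
klist   # confirm a valid TGT is present

# Exercise HDFS and YARN with no Spark involved
hadoop fs -ls /user
yarn jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10
```

If the pi job completes, YARN and Kerberos are fine on their own and the problem is confined to the Spark 1.2 build.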
On Fri, Dec 19, 2014 at 3:50 AM, maven wrote:
> All,
>
> I just built Spark-1.2 on my enterprise server (which has Hadoop 2.3 with
> YARN). [...]
All,
I just built Spark-1.2 on my enterprise server (which has Hadoop 2.3 with
YARN). Here're the steps I followed for the build:
$ mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package
$ export SPARK_HOME=/path/to/spark/folder
$ export HADOOP_CONF_DIR=/etc/hadoop/conf
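For context, a YARN submission with such a build would typically look like this (Spark 1.2-era syntax; the examples-jar path is a placeholder and varies by build layout):

```shell
# Submit the bundled SparkPi example to YARN (jar path varies by build)
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --num-executors 2 \
  $SPARK_HOME/examples/target/scala-2.10/spark-examples-1.2.0-hadoop2.3.0.jar 10
```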
However, I noticed that when I unset HADOOP_CONF_DIR, I'm able to work in the
local mode without any errors.
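A minimal way to see the difference between the two modes (illustrative commands, not necessarily the exact ones used):

```shell
# Without HADOOP_CONF_DIR, Spark never attempts to load YARN support,
# so local mode starts cleanly
unset HADOOP_CONF_DIR
$SPARK_HOME/bin/spark-shell --master local[2]

# With HADOOP_CONF_DIR set, requesting YARN triggers the YARN/Kerberos
# initialization where the ExceptionInInitializerError
# ("Unable to load YARN support") surfaces
export HADOOP_CONF_DIR=/etc/hadoop/conf
$SPARK_HOME/bin/spark-shell --master yarn-client
```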
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/java-lang-ExceptionInInitializerError-Unable-to-load-YARN-support-tp20560p20561.html
Sent from the Apache Spark User List mailing list