Hi all,
I have set HADOOP_CONF_DIR (or YARN_CONF_DIR) to point to the directory that
contains the (client-side) configuration files for the Hadoop cluster.
The command I use to launch the YARN Client is:

SPARK_JAR=./~/spark-0.9.1/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar \
./bin/spark-class org.apache.spark.deploy.yarn.Client \
  --jar examples/target/scala-2.10/spark-examples_2.10-assembly-0.9.1.jar \
  --class org.apache.spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 3 \
  --master-memory 2g \
  --worker-memory 2g \
  --worker-cores 1

It fails with:

./bin/spark-class: line 152: /usr/lib/jvm/java-7-sun/bin/java: No such file or directory
./bin/spark-class: line 152: exec: /usr/lib/jvm/java-7-sun/bin/java: cannot execute: No such file or directory
How can I make this run correctly?
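For context, the error message suggests that spark-class is resolving the java
binary through JAVA_HOME, which here points at a JDK directory
(/usr/lib/jvm/java-7-sun) that does not exist on this machine. A minimal
sketch of that resolution logic, under the assumption that spark-class prefers
$JAVA_HOME/bin/java and otherwise falls back to java on the PATH (the RUNNER
variable name is illustrative, not taken from the script):

```shell
#!/bin/sh
# Sketch: how the java binary is typically resolved by launcher scripts.
# Assumption: JAVA_HOME is honored first; "java" on PATH is the fallback.
if [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/java" ]; then
  RUNNER="$JAVA_HOME/bin/java"   # usable JDK at JAVA_HOME
else
  RUNNER="java"                  # fall back to java found on PATH
fi
echo "would exec: $RUNNER"
```

If JAVA_HOME is exported but stale, pointing it at an installed JDK (or
unsetting it so the PATH fallback applies) would avoid the "No such file or
directory" failure shown above.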




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/run-spark0-9-1-on-yarn-with-hadoop-CDH4-tp5426.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
