My setup ---
I have a private cluster running on 4 nodes, and I am using Mesos to
manage it. I want to use the spark-submit script to run Spark
applications on the cluster.

This is the command I ran in local mode, which ran successfully ---
 ./bin/spark-submit --master local --class org.apache.spark.examples.SparkPi
/opt/spark-examples-1.0.0-hadoop2.4.0.jar 100

This is the command I ran in cluster mode, which failed to run ---
./bin/spark-submit --master mesos://<ip-addr>:5050 --class
org.apache.spark.examples.SparkPi /opt/spark-examples-1.0.0-hadoop2.4.0.jar
100

(same command, but with the jar stored on HDFS)
./bin/spark-submit --master mesos://<ip-addr>:5050 --class
org.apache.spark.examples.SparkPi
hdfs://<ip-addr>/jars/spark-examples-1.0.0-hadoop2.4.0.jar 100

This is the stderr displayed in the Mesos sandbox ---
sh: /home/<user>/spark-1.0.0/sbin/spark-executor: No such file or directory

My SPARK_EXECUTOR_URI is set to
'hdfs://<ip-addr>:9000/new/spark-1.0.0-hadoop-2.4.0.tgz'. How do I pass
this executor location as a parameter to spark-submit? Does spark-submit
not read the spark-env.sh file? That is what spark-shell uses, and it
runs smoothly.
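
For reference, this is roughly what I have in conf/spark-env.sh, and what
I am guessing the spark-submit equivalent would look like via
conf/spark-defaults.conf (the spark.executor.uri property name is my
assumption from the Mesos docs, so please correct me if that is not the
right way to pass it):

# conf/spark-env.sh -- this is what spark-shell picks up
export SPARK_EXECUTOR_URI=hdfs://<ip-addr>:9000/new/spark-1.0.0-hadoop-2.4.0.tgz

# conf/spark-defaults.conf -- what I assume spark-submit would read instead
spark.executor.uri    hdfs://<ip-addr>:9000/new/spark-1.0.0-hadoop-2.4.0.tgz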

Please tell me if my command is incorrect, or where I am going wrong with the usage.



