Hi,
I am setting up a standalone Spark cluster using the spark-0.9.1-bin-hadoop2
binary. I started the master and a slave (localhost) with start-master.sh and
start-slaves.sh, and I can see both the master and the worker in the web UI.
Now I am running a sample POC Java jar that connects to the master URL, but
the application fails with the log below:
14/04/29 14:15:12 INFO scheduler.DAGScheduler: Submitting Stage 0
(FilteredRDD[2] at filter at SparkPOC.java:16), which has no missing parents
14/04/29 14:15:12 INFO client.AppClient$ClientActor: Executor updated:
app-20140429141512-0000/2 is now FAILED (class java.io.IOException: Cannot
run program "java" (in directory
"/u01/app/spark-0.9.1-bin-hadoop2/work/app-20140429141512-0000/2"): error=2,
No such file or directory)


I have attached the full log. I run the application with java -jar, and the
dependencies are picked up from a relative classpath folder that contains all
the required jars.
spark-logs.txt
<http://apache-spark-user-list.1001560.n3.nabble.com/file/n5060/spark-logs.txt> 
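The "Cannot run program \"java\" ... error=2, No such file or directory" part makes me wonder whether the worker process simply cannot find the java binary when it forks the executor. Do I need to set JAVA_HOME in conf/spark-env.sh on each worker, something like the following? (The JDK path below is just a placeholder for wherever the JDK actually lives on the box.)

```shell
# conf/spark-env.sh on each worker node
# Placeholder path -- substitute the actual JDK install location
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk
export PATH=$JAVA_HOME/bin:$PATH
```

Or is there some other place the standalone worker picks up its java executable from?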
--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-cluster-standalone-setup-tp5060.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
