Hi,

Just a quick update: after trying for a while, I rebooted all three
machines in the cluster, reformatted the NameNode and the ZKFC state, and
then started every daemon in the cluster.
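For reference, the reset sequence was roughly the following (a sketch assuming a standard Hadoop HA setup with the usual sbin start scripts; exact script names can vary by Hadoop version):

```shell
# Re-initialize HDFS metadata (note: this destroys existing HDFS data)
hdfs namenode -format

# Re-initialize the ZKFC state in ZooKeeper for automatic failover
hdfs zkfc -formatZK

# Bring the daemons back up
start-dfs.sh     # NameNode(s), DataNodes, JournalNodes, ZKFC
start-yarn.sh    # ResourceManager, NodeManagers
```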

After all the daemons were up and running, I tried to issue the same
command as before:

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n26713/Sprk-error1.jpg>
 

As you can see, the SparkContext is started, but I still see ERROR entries
in there:
"ERROR YarnClientSchedulerBackend: Yarn application has already exited with
state FAILED!"

Also, if I type exit() at the end and then try to re-issue the same
command to start Spark in yarn-client mode, it does not start at all and
takes me back to the error message posted earlier.
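In case it helps, the launch command was along these lines, and the aggregated YARN application logs should show why the application exited with state FAILED (the application id below is just a placeholder, not the real one):

```shell
# Start the Spark shell against YARN in client mode
spark-shell --master yarn-client

# After the failure, pull the aggregated logs for the failed application
yarn logs -applicationId application_XXXXXXXXXXXXX_XXXX
```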

<http://apache-spark-user-list.1001560.n3.nabble.com/file/n26713/Spark-error2.jpg>
 

I have no idea what is causing this.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Running-Spark-on-Yarn-Client-Cluster-mode-tp26691p26713.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
