The previous message suggests there was a problem with the timestamp of each file. Before, I was copying the jar file to each slave node, so this time I left the jar only on the master node. I reran the applications, but now I get the following INFO messages:
16/02/18 11:22:58 INFO Client: Source and destination file systems are the
same. Not copying file:/opt/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar
16/02/18 11:22:59 INFO Client: Source and destination file systems are the
same. Not copying file:/opt/spark/BenchMark-1.0-SNAPSHOT.jar
16/02/18 11:22:59 INFO Client: Source and destination file systems are the
same. Not copying
file:/tmp/spark-313ee0f0-6a30-4eb7-a3ce-b2a0deeff6f4/__spark_conf__8462363500845960489.zip
And the error:
Diagnostics: java.io.FileNotFoundException: File
file:/opt/spark/BenchMark-1.0-SNAPSHOT.jar does not exist
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1455794579804
final status: FAILED
tracking URL:
http://stremi-14.reims.grid5000.fr:8088/cluster/app/application_1455792361051_0011
user: abrandon
Exception in thread "main" org.apache.spark.SparkException: Application
application_1455792361051_0011 finished with failed status
As far as I know, the master should send the jar file to the slave nodes. How come it cannot find the file?
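For what it's worth, when the application jar is given as a local file: path, the YARN client may decide the source and destination filesystems are the same and skip the upload, after which the containers on the slave nodes look for that same local path and fail, which would match the "Not copying" messages followed by the FileNotFoundException above. One way people usually sidestep this (a sketch only; the HDFS directory and the main class name below are illustrative placeholders, not from my setup) is to put the jar on HDFS and reference it by an hdfs:// URI so every node resolves the same location:

```shell
# Upload the application jar to HDFS so all YARN nodes can fetch it
# (the /user/abrandon/jars directory is just an example path).
hdfs dfs -mkdir -p /user/abrandon/jars
hdfs dfs -put /opt/spark/BenchMark-1.0-SNAPSHOT.jar /user/abrandon/jars/

# Submit referencing the HDFS path instead of a local file: path.
# --class is a placeholder; use your application's actual main class.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class your.main.Class \
  hdfs:///user/abrandon/jars/BenchMark-1.0-SNAPSHOT.jar
```

Alternatively, keeping a copy of the jar at the same path on every node (as I was doing before) also works, since then the local path really does exist everywhere.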
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Error-when-executing-Spark-application-on-YARN-tp26248p26265.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.