Hi,

I need some help running Spark on YARN:

I set up a cluster running HDP 2.0.6 with 6 nodes, and then installed
spark-1.0.1-bin-hadoop2 on each node. I ran the SparkPi example with
the following command:

./bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    --num-executors 5 \
    --driver-memory 4g \
    --executor-memory 2g \
    --executor-cores 1 \
    --jars lib/spark-assembly-1.0.1-hadoop2.2.0.jar \
    lib/spark-examples*.jar \
    10

The job failed with the following error:

INFO yarn.Client: Application report from ASM: 
         application identifier: application_1405545630872_0023
         appId: 23
         clientToAMToken: null
         appDiagnostics: Application application_1405545630872_0023 failed 2 times due to AM Container for appattempt_1405545630872_0023_000002 exited with exitCode: -1000 due to: Resource hdfs://ip-172-31-9-187.us-west-1.compute.internal:8020/user/hdfs/.sparkStaging/application_1405545630872_0023/spark-assembly-1.0.1-hadoop2.2.0.jar changed on src filesystem (expected 1405574940411, was 1405574941940). Failing this attempt. Failing the application.

I searched online for solutions and tried syncing the clocks with NTP,
but that doesn't seem to help.
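One thing I noticed: the two timestamps in the error are epoch milliseconds, and they differ by only about 1.5 seconds, so I'm not sure clock skew explains it. A quick check (GNU date assumed):

```shell
# The "expected" and "was" values from the error above, in epoch milliseconds:
expected=1405574940411
actual=1405574941940

# Only ~1.5 s apart, which looks more like the staged jar being overwritten
# between upload and localization than a clock problem:
echo "difference: $((actual - expected)) ms"   # → difference: 1529 ms

# Readable dates (GNU date; -u for UTC):
date -u -d "@$((expected / 1000))"
date -u -d "@$((actual / 1000))"
```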

Can someone help? Your help is highly appreciated!

Thanks,

Jian



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/jar-changed-on-src-filesystem-tp10011.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
