Hi Jian,

In yarn-cluster mode, spark-submit automatically uploads the assembly jar
to a distributed cache that all executor containers read from, so there is
no need to manually copy the assembly jar to every node (or pass it through
--jars).
It seems there are two versions of the same jar in your HDFS. Can you remove
all old jars from your .sparkStaging directory and try again?
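
For reference, clearing the staging directory might look like the sketch
below. The exact path is an assumption — it uses the default location under
your HDFS home directory, so adjust the user name/path for your cluster:

```shell
# Remove stale Spark staging data from HDFS. The path assumes the default
# .sparkStaging location under the submitting user's HDFS home directory;
# substitute your actual user name or configured staging path.
hadoop fs -rm -r "/user/$USER/.sparkStaging"
```

After that, re-run spark-submit and it will re-upload a fresh copy of the
assembly jar during staging.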

Let me know if that does the job,
Andrew


2014-07-16 23:42 GMT-07:00 cmti95035 <cmti95...@gmail.com>:

> They're all the same version. Actually even without the "--jars" parameter
> it
> got the same error. Looks like it needs to copy the assembly jar for
> running
> the example jar anyway during the staging.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/jar-changed-on-src-filesystem-tp10011p10017.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>