I'm having a really bad dependency conflict right now between the Guava
version in my Spark application on YARN and (I believe) the Guava version
on Hadoop's classpath.

The problem is that my driver has the Guava version my application
expects (15.0), while the Spark executors working on my RDDs appear to
have a much older version (I'm assuming it's the old version on the
Hadoop classpath).
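
In case it helps, this is roughly how one could confirm which jar the
executors actually load Guava from (assuming sc is the SparkContext;
Stopwatch is just an arbitrary Guava class to probe with):

    // ask each executor where its Guava classes come from
    sc.parallelize(1 to 100).map { _ =>
      classOf[com.google.common.base.Stopwatch]
        .getProtectionDomain.getCodeSource.getLocation.toString
    }.distinct().collect().foreach(println)

On my cluster that kind of check points at a Hadoop jar rather than the
Guava 15.0 jar I ship with my application.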

Is there a property like "mapreduce.job.user.classpath.first" that I can
set to make sure my own classpath is established first on the executors?
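
In other words, something along these lines is what I'm after (the
property name below is just my guess at what a Spark-side equivalent
might be called; I don't know whether Spark actually exposes one):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("MyApp")
      // guessed name: a Spark analogue of mapreduce.job.user.classpath.first
      // that would make executors prefer my jars over Hadoop's
      .set("spark.executor.userClassPathFirst", "true")
    val sc = new SparkContext(conf)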
