On Wed, Aug 20, 2014 at 8:54 AM, Matt Narrell <matt.narr...@gmail.com> wrote:
> An “unaccepted” reply to this thread from Dean Chen suggested building Spark
> with a newer version of Hadoop (2.4.1), and this has worked to some extent.
> I’m now able to submit jobs (omitting an explicit
> “yarn.resourcemanager.address” property) and the
> ConfiguredRMFailoverProxyProvider seems to submit them to whichever
> resource manager is currently active.  Thanks Dean!

That may point to the actual issue, I think. Your build of Spark
probably has an older version of the YARN classes included in the jar.
(As an aside, I still think it's a bad idea to include the Hadoop
classes in the Spark assembly jar, but that's a separate discussion.)
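
If you want to double check what ended up in there, something like this
should show whether the bundled YARN classes know about RM failover
(just a sketch; the assembly jar name is a placeholder for whatever your
build actually produced):

   # Placeholder path/name; point this at your build's assembly jar.
   unzip -l lib/spark-assembly-*.jar | \
     grep org/apache/hadoop/yarn/client/ConfiguredRMFailoverProxyProvider
   # If I remember right, that class only showed up around Hadoop 2.4, so
   # no match would mean the bundled YARN predates RM HA support.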

What you can try without resorting to a rebuild is forcing your Hadoop
jars to come before the Spark jar on the classpath, by using
--driver-class-path with spark-submit. Something like:

   spark-submit --driver-class-path $(hadoop classpath)

Note: totally untested, but a quick look at the scripts seems to
indicate that --driver-class-path comes before the assembly in the final
classpath.
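
Spelled out with placeholders, the whole thing would be roughly as
follows (still untested; the app class and jar are placeholders, and I'm
assuming yarn-client mode since that's where the launcher script's
classpath applies):

   # Untested sketch: class and jar are placeholders for your application,
   # and "hadoop classpath" must resolve against the 2.4.1 client install
   # on the submitting machine.
   spark-submit \
     --master yarn-client \
     --driver-class-path "$(hadoop classpath)" \
     --class com.example.YourApp \
     /path/to/your-app.jar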

-- 
Marcelo

