[
https://issues.apache.org/jira/browse/OOZIE-2277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14682330#comment-14682330
]
Thomas Graves commented on OOZIE-2277:
--------------------------------------
Hello, I work on Spark, and --jars should be what you want to use if you
need to ship these jars to the other YARN containers Spark is starting.
The "*.extraClassPath" configs are only for adding things already on the node
to the classpath; they will not ship the extra jars/files to the executors.
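A minimal sketch of the difference (the jar names and paths here are
hypothetical):

    # --jars ships the jar and adds it to driver/executor classpaths:
    spark-submit --jars hdfs:///libs/hive-hcatalog-core.jar ...
    # extraClassPath only prepends a path that must already exist on each
    # node; nothing is shipped:
    spark-submit --conf spark.executor.extraClassPath=/opt/hcat/lib/hive-hcatalog-core.jar ...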
You also do not want to use SPARK_DIST_CLASSPATH; it is for pointing Spark at
the Hadoop distribution if one is not already packaged with Spark.
What error did you get when trying to use the --jars option?
FYI, you can also use the spark.jars configuration value (it's equivalent to
--jars), but the config is only picked up when the --jars option isn't
present.
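As a sketch, the config form would look like this (jar path hypothetical),
either on the command line or as a line in spark-defaults.conf:

    # Equivalent to --jars, but ignored if --jars is also given:
    spark-submit --conf spark.jars=hdfs:///libs/hive-hcatalog-core.jar ...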
> Honor oozie.action.sharelib.for.spark in Spark jobs
> ---------------------------------------------------
>
> Key: OOZIE-2277
> URL: https://issues.apache.org/jira/browse/OOZIE-2277
> Project: Oozie
> Issue Type: Improvement
> Reporter: Ryan Brush
> Assignee: Robert Kanter
> Priority: Minor
> Attachments: OOZIE-2277.001.patch
>
>
> Shared libraries specified by oozie.action.sharelib.for.spark are not visible
> in the Spark job itself. For instance, setting
> oozie.action.sharelib.for.spark to "spark,hcat" will not make the hcat jars
> usable in the Spark job. This is inconsistent with other actions (such as
> Java and MapReduce actions).
> Since the Spark action just calls SparkSubmit, it looks like we would need to
> explicitly pass the jars for the specified sharelibs into the SparkSubmit
> invocation so they are available to the Spark job itself.
> One option: we can just pass the HDFS URLs to that command via the --jars
> parameter. This is actually what I've done to work around this issue; it
> makes for a long SparkSubmit command but works.
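> A minimal sketch of that workaround (the sharelib paths and job class are
> hypothetical):
>
>     # Comma-separated HDFS jar URLs go straight to --jars:
>     spark-submit --master yarn --deploy-mode cluster \
>       --jars hdfs:///user/oozie/share/lib/hcat/hive-hcatalog-core.jar \
>       --class com.example.MyJob my-job.jar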