This structure is not specific to Hadoop; in theory it works in any
JAR file. You can nest JARs inside a JAR and refer to them with
Class-Path entries in META-INF/MANIFEST.MF.
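To make that concrete, here is roughly what such a JAR and its manifest
look like (all file and class names below are hypothetical):

    myapp.jar
      META-INF/MANIFEST.MF
      com/example/MyApp.class      <- your application classes
      lib/dep-a.jar                <- bundled third-party JARs
      lib/dep-b.jar

    # contents of META-INF/MANIFEST.MF
    Main-Class: com.example.MyApp
    Class-Path: lib/dep-a.jar lib/dep-b.jar

Class-Path entries are space-separated relative paths; whether a nested
JAR actually gets picked up depends on the framework or classloader
that reads the manifest.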

It works, but I have found it can cause trouble with programs that
scan the JARs on the classpath to find other classes; those lookups
can fail when a JAR is nested inside another JAR. Still, it's worth
trying.

As alternatives, you can always repackage all the JARs into a single
JAR, or include them individually on the classpath; both approaches
also work (a sketch of each follows below).
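A minimal sketch of those two alternatives, again with hypothetical
JAR and class names:

    # 1) Repackage everything into one JAR. This is a crude sketch;
    #    build tools (e.g. the Maven Shade plugin) do this more robustly
    #    and handle META-INF conflicts for you.
    mkdir exploded
    (cd exploded && unzip -oq ../myapp.jar \
                 && unzip -oq ../dep-a.jar \
                 && unzip -oq ../dep-b.jar)
    jar cf myapp-all.jar -C exploded .

    # 2) Or leave the JARs separate and put each on the classpath.
    #    With Spark, spark-submit accepts a comma-separated --jars list:
    spark-submit --class com.example.MyApp \
      --jars dep-a.jar,dep-b.jar \
      myapp.jar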

On Tue, Sep 9, 2014 at 2:56 AM, Steve Lewis <lordjoe2...@gmail.com> wrote:
>
> In a Hadoop jar there is a directory called lib, and all non-provided
> third-party jars go there and are included in the classpath of the
> code. Do jars for Spark have the same structure? Another way to ask
> the question: if I have code to execute Spark and a jar built for
> Hadoop, can I simply use that jar?
