Michel Lemay created SPARK-14956:
------------------------------------
Summary: Spark dependencies conflicts
Key: SPARK-14956
URL: https://issues.apache.org/jira/browse/SPARK-14956
Project: Spark
Issue Type: Brainstorming
Components: Spark Core
Affects Versions: 1.6.1
Reporter: Michel Lemay
Priority: Minor
Since Spark ships with ages-old dependencies, it is all too often a problem that
we must downgrade one of our own dependencies just to make it "not explode" in
Spark. Worse, these conflicts only show up at runtime, which makes the
problem even harder to catch.
So the usual solution, when the conflicting package is one we depend on directly
(rather than one of our transitive dependencies), is to relocate it with the
maven-shade-plugin, like this:
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  ...
  <configuration>
    <relocations>
      <relocation>
        <pattern>scopt</pattern>
        <shadedPattern>hidden.scopt</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
{code}
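With this, the shade plugin rewrites the class files in our shaded jar so that every reference to scopt.* points to hidden.scopt.* instead, while the scopt copy already on Spark's classpath is left untouched.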
Other times, we must exclude transitive dependencies, like this:
{code}
<dependency>
  <groupId>com.twitter.penguin</groupId>
  <artifactId>korean-text</artifactId>
  <version>4.1.2</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-nop</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}
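When it is not obvious which dependency drags the conflicting artifact in, mvn dependency:tree usually points at the offender. And when excluding is not enough, the downgrade mentioned above can at least be done in one place through dependencyManagement; a minimal sketch, assuming the conflict is with Guava (the version shown is only illustrative):
{code}
<!-- Sketch: pin the whole build to one Guava version so our transitive
     dependencies cannot pull in a newer, incompatible one. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>14.0.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}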
Everything related to Guava, Log4j, Jackson Databind, scopt and even Java 8 falls
into this category.
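For the libraries we call ourselves (Guava, Jackson Databind), the same relocation trick as the scopt example applies; a sketch, reusing the hidden. prefix from above:
{code}
<relocations>
  <!-- Keep our own Guava and Jackson under a private prefix so they never
       collide with the versions on Spark's classpath. -->
  <relocation>
    <pattern>com.google.common</pattern>
    <shadedPattern>hidden.com.google.common</shadedPattern>
  </relocation>
  <relocation>
    <pattern>com.fasterxml.jackson</pattern>
    <shadedPattern>hidden.com.fasterxml.jackson</shadedPattern>
  </relocation>
</relocations>
{code}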
I wonder if it would be possible to use OSGi-style plugins when running
code inside Spark. That would shield us from all these issues.
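Purely as a sketch of the idea: in OSGi, every bundle declares in its manifest which packages it imports and which it keeps private, so Spark's Guava and ours could live side by side. The Felix maven-bundle-plugin can already generate that metadata; Spark itself would of course need to host an OSGi container for the isolation to actually happen (package names below are hypothetical):
{code}
<!-- Hypothetical sketch only: produce OSGi metadata so the job jar states
     which packages it imports and which stay private to the bundle. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Import-Package>org.apache.spark.*, *</Import-Package>
      <Private-Package>com.mycompany.myjob.*</Private-Package>
    </instructions>
  </configuration>
</plugin>
{code}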