[ https://issues.apache.org/jira/browse/SPARK-2420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14076468#comment-14076468 ]

Sean Owen commented on SPARK-2420:
----------------------------------

I'm sure shading just means moving the packages, and the references in the 
byte code, with maven-shade-plugin. The assembly takes very little of the 
total build time. Nothing else that I can see, other than Hadoop, has a Guava 
dependency. But yeah, a teensy fork of a Guava class is going to have to be 
maintained then. It can go in the source tree, so it doesn't necessarily need 
more assembly surgery. Does that change your calculus? I remain slightly 
grossed out by all the options.
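
For reference, a minimal sketch of the kind of relocation this describes, as a 
maven-shade-plugin configuration in the assembly pom; the shaded prefix 
"org.spark.shaded.guava" is a hypothetical placeholder, not a name settled in 
this issue:

    <!-- Hypothetical sketch: relocate Guava's packages (and rewrite the
         byte-code references to them) under a shaded prefix so the bundled
         copy cannot conflict with Hadoop's or a user's Guava. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <relocations>
              <relocation>
                <pattern>com.google.common</pattern>
                <shadedPattern>org.spark.shaded.guava</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>

Any Guava class that has to stay visible at its original name (the "teensy 
fork" above) would then live in the source tree rather than be relocated.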

> Change Spark build to minimize library conflicts
> ------------------------------------------------
>
>                 Key: SPARK-2420
>                 URL: https://issues.apache.org/jira/browse/SPARK-2420
>             Project: Spark
>          Issue Type: Wish
>          Components: Build
>    Affects Versions: 1.0.0
>            Reporter: Xuefu Zhang
>         Attachments: spark_1.0.0.patch
>
>
> During the prototyping of HIVE-7292, many library conflicts showed up because 
> the Spark build contains versions of libraries that are vastly different from 
> those in the current major Hadoop version. It would be nice if we could choose 
> versions that are in line with Hadoop, or shade them in the assembly. Here is 
> the wish list:
> 1. Upgrade the protobuf version from the current 2.4.1 to 2.5.0.
> 2. Shade Spark's Jetty and servlet dependencies in the assembly.
> 3. Guava version difference: Spark uses a higher version. I'm not sure what 
> the best solution for this is.
> The list may grow as HIVE-7292 proceeds.
> For information only, the attachment is a patch that we applied to Spark in 
> order to make it work with Hive. It gives an idea of the scope of changes.
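
As a concrete illustration of wish item 1 above, one way to pin protobuf to 
the Hadoop-aligned version is a dependencyManagement entry; this is a hedged 
sketch, not a patch attached to this issue:

    <!-- Hypothetical sketch: force protobuf-java to 2.5.0, the version named
         in the wish list, via dependencyManagement in the parent pom. -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>com.google.protobuf</groupId>
          <artifactId>protobuf-java</artifactId>
          <version>2.5.0</version>
        </dependency>
      </dependencies>
    </dependencyManagement>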



--
This message was sent by Atlassian JIRA
(v6.2#6252)
