[
https://issues.apache.org/jira/browse/MAHOUT-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260394#comment-14260394
]
ASF GitHub Bot commented on MAHOUT-1636:
----------------------------------------
Github user dlyubimov commented on a diff in the pull request:
https://github.com/apache/mahout/pull/69#discussion_r22326490
--- Diff: spark/src/main/assembly/dependencies.xml ---
@@ -38,9 +38,34 @@
<outputDirectory>/</outputDirectory>
<useTransitiveFiltering>true</useTransitiveFiltering>
<excludes>
+ <!-- MAHOUT-1636 -->
+ <!-- add any projects that are included in the spark environment
+ or are in mrlegacy but not used in spark drivers -->
<exclude>org.apache.hadoop:hadoop-core</exclude>
+ <exclude>org.apache.spark:spark-core_${scala.major}</exclude>
+ <exclude>org.scala-lang:scala-library</exclude>
+ <exclude>jackson-core-asl</exclude>
+ <exclude>jackson-mapper-asl</exclude>
+ <exclude>xstream</exclude>
+ <exclude>lucene-core</exclude>
+ <exclude>lucene-analyzers-common</exclude>
</excludes>
</dependencySet>
--- End diff --
instead of filtering what is _excluded_ (opt-out), I'd rather determine
the minimum opt-in set (in the assembly plugin source file). That's
common practice; excludes are tedious and, most importantly, tell you
nothing about what exactly you end up with.
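A minimal opt-in dependencySet along the lines suggested above might look like the following sketch; the artifact coordinates shown are illustrative examples, not a vetted list of what the Spark drivers actually need:

```xml
<!-- Hypothetical sketch of the opt-in approach: list only the artifacts
     the Spark drivers actually need, instead of excluding everything else.
     The coordinates below are examples, not taken from the Mahout build. -->
<dependencySet>
  <outputDirectory>/</outputDirectory>
  <useTransitiveFiltering>true</useTransitiveFiltering>
  <includes>
    <include>org.apache.mahout:mahout-math</include>
    <include>org.apache.mahout:mahout-math-scala_${scala.major}</include>
    <include>com.google.guava:guava</include>
  </includes>
</dependencySet>
```

With an `<includes>` list the assembly output is exactly the named artifacts (plus their filtered transitive dependencies), so the resulting jar contents are explicit rather than being whatever survives the excludes.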
On Mon, Dec 29, 2014 at 12:03 PM, Pat Ferrel <[email protected]>
wrote:
> In spark/src/main/assembly/dependencies.xml
> <https://github.com/apache/mahout/pull/69#discussion-diff-22326191>:
>
> > <exclude>org.apache.hadoop:hadoop-core</exclude>
> > + <exclude>org.apache.spark:spark-core_${scala.major}</exclude>
> > + <exclude>org.scala-lang:scala-library</exclude>
> > + <exclude>jackson-core-asl</exclude>
> > + <exclude>jackson-mapper-asl</exclude>
> > + <exclude>xstream</exclude>
> > + <exclude>lucene-core</exclude>
> > + <exclude>lucene-analyzers-common</exclude>
> > </excludes>
> > </dependencySet>
>
> This is as many as seem safe. There is a lot inside mrlegacy that could be
> excluded, but it's all in the same artifact, so I'm leaving it in unless
> someone knows how to exclude particular partial packages.
>
> I won't change the code to trim things from the classpath in this commit,
> but I suspect the dependencies.jar may be all that is needed for spark-shell
> and drivers.
>
> @andrewpalumbo <https://github.com/andrewpalumbo> there's little chance
> this will mess up your drivers so I may push this after some more testing
> on my side.
>
> —
> Reply to this email directly or view it on GitHub
> <https://github.com/apache/mahout/pull/69/files#r22326191>.
>
> Class dependencies for the spark module are put in a job.jar, which is very
> inefficient
> ---------------------------------------------------------------------------------------
>
> Key: MAHOUT-1636
> URL: https://issues.apache.org/jira/browse/MAHOUT-1636
> Project: Mahout
> Issue Type: Bug
> Components: spark
> Affects Versions: 1.0-snapshot
> Reporter: Pat Ferrel
> Assignee: Ted Dunning
> Fix For: 1.0-snapshot
>
>
> Using a Maven plugin and an assembly job.xml, a job.jar is created with all
> dependencies, including transitive ones. This job.jar is in
> mahout/spark/target and is included in the classpath when a Spark job is run.
> This allows dependency classes to be found at runtime, but the job.jar
> includes a great deal of things that are not needed and that duplicate
> classes found in the main mrlegacy job.jar. If the job.jar is removed,
> drivers will not find needed classes. A better way of including class
> dependencies needs to be implemented.
> I'm not sure what that better way is, so I am leaving the assembly alone for
> now. Whoever picks up this JIRA will have to remove it after deciding on a
> better method.
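For context, a job.jar like the one described above is typically produced by binding the maven-assembly-plugin to the package phase with a custom descriptor. A minimal sketch follows; the plugin version and descriptor path are assumptions for illustration, not copied from the Mahout build:

```xml
<!-- Hypothetical pom.xml fragment showing how an assembly descriptor
     (e.g. job.xml) is wired into the build to produce a jar bundling
     transitive dependencies. Version and path are assumed, not from Mahout. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.4</version>
  <executions>
    <execution>
      <id>job</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
      <configuration>
        <descriptors>
          <descriptor>src/main/assembly/job.xml</descriptor>
        </descriptors>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Because the descriptor controls which dependencies land in the assembly, this is also where an opt-in includes list would replace the current excludes.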
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)