[ https://issues.apache.org/jira/browse/SPARK-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14011350#comment-14011350 ]
Colin Patrick McCabe commented on SPARK-1518:
---------------------------------------------

bq. Re: versioning one more time, really supporting a bunch of versions may get costly. It's already tricky to manage two builds times YARN-or-not, Hive-or-not, times 4 flavors of Hadoop. I doubt the assemblies are yet problem-free in all cases.

I think in this particular case, we can use reflection to support both Hadoop 1.x and newer releases.

bq. I am not sure Spark should contain a CDH-specific distribution, realizing it's really a proxy for a particular Hadoop combo. Same goes for a MapR profile (which is really for vendors to maintain).

I agree 100%. We should keep vendor-specific stuff out of the Apache release. Vendors can create their own build setups (that's what they get paid to do, after all).

bq. There is no suggested action here; if anything I suggest that the right thing is to add Maven artifacts with classifiers, add a few binary artifacts, subtract a few vendor artifacts, but this is a different action.

If you have ideas for improving the Maven build, it could be worth creating a JIRA. I think you're right that we need to make the build flexible enough that people can target more Hadoop versions without editing the pom. It might be helpful to look at how HBase handles this in its {{pom.xml}} files.

> Spark master doesn't compile against hadoop-common trunk
> --------------------------------------------------------
>
>                 Key: SPARK-1518
>                 URL: https://issues.apache.org/jira/browse/SPARK-1518
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Marcelo Vanzin
>            Assignee: Colin Patrick McCabe
>            Priority: Critical
>
> FSDataOutputStream::sync() has disappeared from Hadoop trunk, but
> FileLogger.scala still calls it.
> I've changed it locally to hsync() so I can compile the code, but haven't
> checked yet whether the two are equivalent. hsync() seems to have been
> around forever, so it hopefully works with all the Hadoop versions Spark
> cares about.
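For illustration, the reflection approach mentioned above might look roughly like the sketch below (Java, since the Hadoop stream API is Java). Note this is not actual Spark code: {{SyncShim}} and {{forceSync}} are hypothetical names, and the only assumption baked in is that newer Hadoop exposes {{hsync()}} while Hadoop 1.x exposes {{sync()}} on the output stream.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.lang.reflect.Method;

// Hypothetical shim: prefer hsync() (newer Hadoop) and fall back to
// sync() (Hadoop 1.x) via reflection, so one build can run against
// either API without a compile-time dependency on a specific version.
public final class SyncShim {
    private SyncShim() {}

    public static void forceSync(OutputStream out) throws IOException {
        try {
            Method m;
            try {
                m = out.getClass().getMethod("hsync"); // newer Hadoop
            } catch (NoSuchMethodException e) {
                m = out.getClass().getMethod("sync");  // Hadoop 1.x
            }
            // The concrete stream class may not be public, so relax
            // access checks before invoking the method reflectively.
            m.setAccessible(true);
            m.invoke(out);
        } catch (ReflectiveOperationException e) {
            throw new IOException(
                "stream supports neither hsync() nor sync()", e);
        }
    }
}
```

The method lookup could also be cached per class if it sits on a hot path, since {{getMethod}} is relatively expensive compared to a direct call.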
-- This message was sent by Atlassian JIRA (v6.2#6252)