[ https://issues.apache.org/jira/browse/SPARK-1518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14012552#comment-14012552 ]

Sean Owen commented on SPARK-1518:
----------------------------------

Heh, I think the essence is: at least one more separate Maven artifact, under a 
different classifier, for Hadoop 2.x builds. If you package that, you get Spark 
and everything it needs to work against a Hadoop 2 cluster. Yeah, I see that 
you're suggesting various ways to push the app to the cluster, where it can 
bind to the right version of things, and that may be the right-est way to think 
about this. I had envisioned running a stand-alone app on a machine that is not 
part of the cluster but is a client of it, which means packaging in the right 
Hadoop client dependencies. Spark already declares how it wants to include 
these various Hadoop client versions -- it's more than just including 
hadoop-client -- so I wanted to leverage that. Let's see whether this actually 
turns out to be a broader request, though.
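As a rough sketch only, here is what depending on such a classifier-differentiated 
artifact might look like from a downstream sbt build. The "hadoop2" classifier and 
the version number are illustrative assumptions, not a published artifact:

    // build.sbt (hypothetical): pull in a Spark core artifact published under a
    // separate "hadoop2" classifier, bundling the Hadoop 2.x client dependencies.
    libraryDependencies +=
      "org.apache.spark" %% "spark-core" % "1.0.0" classifier "hadoop2"

    // The ordinary Hadoop 1.x artifact would stay as it is today:
    // libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0"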

> Spark master doesn't compile against hadoop-common trunk
> --------------------------------------------------------
>
>                 Key: SPARK-1518
>                 URL: https://issues.apache.org/jira/browse/SPARK-1518
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: Marcelo Vanzin
>            Assignee: Colin Patrick McCabe
>            Priority: Critical
>
> FSDataOutputStream::sync() has disappeared from trunk in Hadoop; 
> FileLogger.scala is calling it.
> I've changed it locally to hsync() so I can compile the code, but haven't 
> checked yet whether those are equivalent. hsync() seems to have been there 
> forever, so it hopefully works with all versions Spark cares about.
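For illustration, a hedged sketch of the kind of change described above; the actual 
FileLogger.scala may hold the stream differently, and the names here are assumed 
rather than quoted from the Spark source tree:

    import org.apache.hadoop.fs.FSDataOutputStream

    // Assumed shape: the logger keeps an optional handle on the underlying
    // Hadoop output stream when it is writing to HDFS.
    def flushHadoopStream(hadoopDataStream: Option[FSDataOutputStream]): Unit = {
      // Before: hadoopDataStream.foreach(_.sync())  // sync() removed from hadoop-common trunk
      // After: hsync(), which flushes client buffers and syncs them to the datanodes
      hadoopDataStream.foreach(_.hsync())
    }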


