[ https://issues.apache.org/jira/browse/SPARK-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352317#comment-14352317 ]

Shivaram Venkataraman commented on SPARK-5134:
----------------------------------------------

Yeah, this did change in 1.2, and I think I mentioned it to Patrick when it 
affected a couple of other projects of mine. The main problem is that even if 
you have an explicit Hadoop 1 dependency in your project, SBT picks up the 
highest version required when building the project's assembly jar. Thus, with 
Spark linked against Hadoop 2.2, one needs an exclusion rule to use Hadoop 1. 
It might be good to add this to the docs or to some of the Quick Start example 
documentation we have.
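
For illustration, a minimal sketch of such an exclusion rule in an SBT build 
might look like the following (the coordinates and versions here are 
assumptions for the example, not something pinned down by this issue):

{code}
// build.sbt -- illustrative sketch only; coordinates and versions are assumed.
// Exclude the Hadoop client that spark-core pulls in transitively, then depend
// on the Hadoop 1 client explicitly, so the assembly jar does not end up with
// the higher Hadoop version that Spark itself was linked against.
libraryDependencies ++= Seq(
  ("org.apache.spark" %% "spark-core" % "1.2.0")
    .exclude("org.apache.hadoop", "hadoop-client"),
  "org.apache.hadoop" % "hadoop-client" % "1.0.4"
)
{code}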

> Bump default Hadoop version to 2+
> ---------------------------------
>
>                 Key: SPARK-5134
>                 URL: https://issues.apache.org/jira/browse/SPARK-5134
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.2.0
>            Reporter: Ryan Williams
>            Priority: Minor
>
> [~srowen] and I discussed bumping [the default hadoop version in the parent 
> POM|https://github.com/apache/spark/blob/bb38ebb1abd26b57525d7d29703fd449e40cd6de/pom.xml#L122]
>  from {{1.0.4}} to something more recent.
> There doesn't seem to be a good reason that it was set/kept at {{1.0.4}}.


