[ https://issues.apache.org/jira/browse/SPARK-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14352374#comment-14352374 ]

Shivaram Venkataraman commented on SPARK-5134:
----------------------------------------------

Yeah, if you exclude Spark's Hadoop dependency, things work correctly for 
Hadoop 1. There are some additional issues that come up in 1.2 due to the 
Guava changes, but those are not related to the default Hadoop version change. 
I think the documentation to update would be [1], but I am thinking it would 
be good to mention this in the Quick Start guide [2] as well.

[1] 
https://github.com/apache/spark/blob/55b1b32dc8b9b25deea8e5864b53fe802bb92741/docs/hadoop-third-party-distributions.md#linking-applications-to-the-hadoop-version
[2] 
https://github.com/apache/spark/blob/55b1b32dc8b9b25deea8e5864b53fe802bb92741/docs/quick-start.md#self-contained-applications
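For reference, a minimal sketch of what that exclusion could look like in an sbt build (assuming a Spark 1.2.0 application that needs to link against Hadoop 1; the exact version coordinates here are illustrative and should be adjusted to match the cluster):

```
// build.sbt sketch: exclude Spark's transitive Hadoop dependency and
// pin the Hadoop 1 client explicitly, so the application links against
// the Hadoop version actually deployed on the cluster.
libraryDependencies ++= Seq(
  ("org.apache.spark" %% "spark-core" % "1.2.0")
    .exclude("org.apache.hadoop", "hadoop-client"),
  "org.apache.hadoop" % "hadoop-client" % "1.0.4"
)
```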

> Bump default Hadoop version to 2+
> ---------------------------------
>
>                 Key: SPARK-5134
>                 URL: https://issues.apache.org/jira/browse/SPARK-5134
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.2.0
>            Reporter: Ryan Williams
>            Priority: Minor
>
> [~srowen] and I discussed bumping [the default hadoop version in the parent 
> POM|https://github.com/apache/spark/blob/bb38ebb1abd26b57525d7d29703fd449e40cd6de/pom.xml#L122]
>  from {{1.0.4}} to something more recent.
> There doesn't seem to be a good reason that it was set/kept at {{1.0.4}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
