[ https://issues.apache.org/jira/browse/SPARK-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353047#comment-14353047 ]

Sean Owen commented on SPARK-5134:
----------------------------------

Yep, I confirmed that ...

{code}
[INFO] \- org.apache.spark:spark-core_2.10:jar:1.2.1:compile
...
[INFO]    +- org.apache.hadoop:hadoop-client:jar:2.2.0:compile
[INFO]    |  +- org.apache.hadoop:hadoop-common:jar:2.2.0:compile
[INFO]    |  |  +- commons-cli:commons-cli:jar:1.2:compile
...
{code}

Well, FWIW, although the change was unintentional, I do think it has upsides. 
It would be good to codify it in the build, I suppose, by updating the default 
version number. How about updating it to 2.2.0 to match what has actually 
happened? This would not entail activating the Hadoop build profiles by 
default or anything.
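For concreteness, a minimal sketch of what that change could look like in the parent {{pom.xml}} (assuming the property stays named {{hadoop.version}}, as in the linked POM; the surrounding properties are elided):

{code}
<properties>
  ...
  <!-- Default Hadoop version used when no hadoop-* profile is active.
       Bumped from 1.0.4 to match what the build actually resolves. -->
  <hadoop.version>2.2.0</hadoop.version>
  ...
</properties>
{code}

Builds that need a different version could still override it as before, e.g. {{mvn -Dhadoop.version=1.0.4 ...}} or via the existing Hadoop profiles.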

[~rdub] would you care to do the honors?

> Bump default Hadoop version to 2+
> ---------------------------------
>
>                 Key: SPARK-5134
>                 URL: https://issues.apache.org/jira/browse/SPARK-5134
>             Project: Spark
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.2.0
>            Reporter: Ryan Williams
>            Priority: Minor
>
> [~srowen] and I discussed bumping [the default hadoop version in the parent 
> POM|https://github.com/apache/spark/blob/bb38ebb1abd26b57525d7d29703fd449e40cd6de/pom.xml#L122]
>  from {{1.0.4}} to something more recent.
> There doesn't seem to be a good reason that it was set/kept at {{1.0.4}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
