[ https://issues.apache.org/jira/browse/SPARK-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336961#comment-14336961 ]

Sean Owen commented on SPARK-5978:
----------------------------------

Ah right, the key is 2.0.x. This is really a subset of the discussion at 
http://apache-spark-developers-list.1001551.n3.nabble.com/The-default-CDH4-build-uses-avro-mapred-hadoop1-td10699.html
. If I may, I'm going to update the title to broaden it, since I think 
there is more that doesn't work with Hadoop 2.0.x versions. Although, as you can 
see, my own opinion was that it may not be worth supporting at this point; I 
think it's still open for discussion.

> Spark examples cannot compile with Hadoop 2
> -------------------------------------------
>
>                 Key: SPARK-5978
>                 URL: https://issues.apache.org/jira/browse/SPARK-5978
>             Project: Spark
>          Issue Type: Bug
>          Components: Examples, PySpark
>    Affects Versions: 1.2.0, 1.2.1
>            Reporter: Michael Nazario
>              Labels: hadoop-version
>
> This is a regression from Spark 1.1.1.
> The Spark examples include an Avro converter example for PySpark. 
> While debugging a problem, I discovered that even though you can 
> build Spark 1.2.0 against Hadoop 2, an HBase dependency elsewhere in the 
> examples code depends on Hadoop 1.
> An easy fix would be to separate the examples into Hadoop-specific versions. 
> Another way would be to fix the HBase dependencies so that they don't rely on 
> Hadoop 1-specific code.
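
The second approach above (cleaning up the HBase dependencies) would typically be done with a Maven exclusion. The sketch below is illustrative only; the artifact coordinates and version property are assumptions, not copied from Spark's actual examples pom:

```xml
<!-- Hypothetical sketch: exclude the Hadoop 1 transitive artifact from the
     examples' HBase dependency so a Hadoop 2 build resolves cleanly.
     Coordinates here are illustrative, not taken from Spark's pom. -->
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>${hbase.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With the Hadoop 1 artifact excluded, the build would pick up whichever Hadoop client libraries the active build profile supplies.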



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
