I proposed dropping support for Hadoop 1.x in the Spark 2.0 discussion email,
and I think everyone is in favor of that.

https://issues.apache.org/jira/browse/SPARK-11807

Sean suggested also dropping support for Hadoop 2.2, 2.3, and 2.4. That is
to say, keep only Hadoop 2.6 and greater.
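For context, the supported Hadoop versions show up as build profiles, so a
typical build against one of them looks roughly like this (illustrative only;
the exact profile names and versions vary by Spark release):

  build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package

Dropping a version would mainly mean removing its profile and whatever
compatibility workarounds exist for the older Hadoop APIs.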

What are the community's thoughts on that?
