Mostly just shedding the extra build complexity, and the extra builds
themselves. The primary little annoyance is that it's 2x the number of
flaky build failures to examine. I suppose it would also allow using
2.7+-only features, but outside of YARN, I'm not sure there's anything
compelling.
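To make that concrete: today the test matrix effectively runs everything
twice, once per profile, with invocations roughly like these (profile names
as in the Spark build docs; the Hadoop versions here are just illustrative):

  ./build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.5 -DskipTests clean package
  ./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package

Dropping the 2.6 profile would leave just the second of those.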

Keeping the 2.6 profile is something that probably gains us virtually
nothing now, but isn't too painful either.
I don't think it will make sense to distinguish 2.6 from 2.7 once any Hadoop
3-related support comes into the picture, and maybe that will start soon;
there were some more pings on related JIRAs this week. You could view this as
early setup for that move.


On Thu, Feb 8, 2018 at 12:57 PM Reynold Xin <r...@databricks.com> wrote:

> Does it gain us anything to drop 2.6?
>
> > On Feb 8, 2018, at 10:50 AM, Sean Owen <so...@cloudera.com> wrote:
> >
> > At this point, with Hadoop 3 on deck, I think Hadoop 2.6 is both fairly
> old and, actually, not different from 2.7 with respect to Spark. That is, I
> don't know if we are actually maintaining anything here but a separate
> profile and 2x the number of test builds.
> >
> > The cost is, by the same token, low. However, I'm floating the idea of
> removing the 2.6 profile and just requiring 2.7+ as of Spark 2.4?
>
