At this point, with Hadoop 3 on deck, I think Hadoop 2.6 is both fairly
old and, as far as Spark is concerned, not meaningfully different from 2.7.
That is, I'm not sure we're maintaining anything here beyond a separate
build profile and twice the number of test builds.

The cost is, by the same token, low. Still, I'm floating the idea: should we
remove the 2.6 profile and just require Hadoop 2.7+ as of Spark 2.4?
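To make the duplication concrete, the two profiles roughly mean two separate
build/test invocations today (command lines below are illustrative, based on
the documented Maven profiles rather than the exact CI configuration):

    # Today: each Hadoop profile needs its own build and test run
    ./build/mvn -Phadoop-2.6 -DskipTests clean package
    ./build/mvn -Phadoop-2.7 -DskipTests clean package

    # With 2.6 dropped: a single profile (and half the test builds)
    ./build/mvn -Phadoop-2.7 -DskipTests clean package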
