On 20 Nov 2015, at 14:28, ches...@alpinenow.com wrote:

Assuming we have 1.6 and 1.7 releases, then Spark 2.0 is about 9 months away.

Customers will need to upgrade their Hadoop clusters to Apache Hadoop 2.6 or later to 
leverage the new Spark 2.0 within a year. I think this is possible, as the latest 
releases of CDH 5.x and HDP 2.x are both on Apache Hadoop 2.6.0 already. Companies 
will have enough time to upgrade their clusters.

+1 for me as well

Chester


Now, if you are looking that far ahead, the other big issue is "when to retire 
Java 7 support"?

That's a tough decision for all projects. Hadoop 3.x will be Java 8 only, but 
nobody has committed the patch to the trunk codebase to force a Java 8 build, and 
most of *today's* Hadoop clusters run Java 7. But as you can't even download a 
Java 7 JDK for the desktop from Oracle any more, 2016 is the time to look at the 
language support and decide what the baseline version should be.
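As a rough illustration of what "forcing" a Java baseline can look like at runtime (a minimal sketch only, not the actual Hadoop or Spark patch under discussion, and the class name and required version are hypothetical):

```java
// Minimal sketch: fail fast at startup when the JVM is older than a
// required baseline (e.g. Java 8). Not the real Hadoop/Spark mechanism.
public final class JavaVersionCheck {

    /** Returns the major Java version (7 for "1.7", 8 for "1.8", 9+ for newer JVMs). */
    static int majorJavaVersion() {
        String spec = System.getProperty("java.specification.version"); // e.g. "1.8" or "11"
        String[] parts = spec.split("\\.");
        return parts[0].equals("1") ? Integer.parseInt(parts[1]) : Integer.parseInt(parts[0]);
    }

    public static void main(String[] args) {
        int required = 8; // hypothetical baseline being debated
        int actual = majorJavaVersion();
        if (actual < required) {
            System.err.println("Java " + required + "+ is required, but running on Java " + actual);
            System.exit(1);
        }
        System.out.println("Running on Java " + actual + "; baseline " + required + " satisfied.");
    }
}
```

In practice, build-time enforcement (rather than a runtime check like this) is what decides the minimum JDK a project can be compiled against.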

Commentary from Twitter here - as they point out, it's not just the server farm 
that matters, it's all the apps that talk to it:


http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201503.mbox/%3ccab7mwte+kefcxsr6n46-ztcs19ed7cwc9vobtr1jqewdkye...@mail.gmail.com%3E

-Steve
