They work.
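
Scala 2.10 and 2.11 emit Java 6-level bytecode by default, and older
bytecode runs unchanged on newer JVMs. If you want to double-check a
particular jar yourself, here's a quick sketch (not from anyone in this
thread; you point it at any .class file extracted from the jar) that
prints the class-file major version: 50 = java 6, 51 = java 7, 52 = java 8.

  import java.io.{DataInputStream, FileInputStream}

  object ClassVersion {
    def main(args: Array[String]): Unit = {
      // a class file starts with the magic 0xCAFEBABE, then minor/major
      val in = new DataInputStream(new FileInputStream(args(0)))
      try {
        require(in.readInt() == 0xCAFEBABE, "not a class file")
        val minor = in.readUnsignedShort()
        val major = in.readUnsignedShort()
        println(s"major=$major minor=$minor")
      } finally in.close()
    }
  }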

On Tue, Mar 29, 2016 at 10:01 AM, Koert Kuipers <ko...@tresata.com> wrote:

> if scala prior to 2.10.4 didn't support java 8, does that mean that 3rd
> party scala libraries compiled with a scala version < 2.10.4 might not
> work on java 8?
>
>
> On Mon, Mar 28, 2016 at 7:06 PM, Kostas Sakellis <kos...@cloudera.com>
> wrote:
>
>> Also, +1 on dropping jdk7 in Spark 2.0.
>>
>> Kostas
>>
>> On Mon, Mar 28, 2016 at 2:01 PM, Marcelo Vanzin <van...@cloudera.com>
>> wrote:
>>
>>> Finally got some internal feedback on this, and we're ok with
>>> requiring people to deploy jdk8 for 2.0, so +1 too.
>>>
>>> On Mon, Mar 28, 2016 at 1:15 PM, Luciano Resende <luckbr1...@gmail.com>
>>> wrote:
>>> > +1, I also checked with a few projects inside IBM that consume Spark
>>> > and they seem to be ok with the direction of dropping JDK 7.
>>> >
>>> > On Mon, Mar 28, 2016 at 11:24 AM, Michael Gummelt
>>> > <mgumm...@mesosphere.io> wrote:
>>> >>
>>> >> +1 from Mesosphere
>>> >>
>>> >> On Mon, Mar 28, 2016 at 5:12 AM, Steve Loughran
>>> >> <ste...@hortonworks.com> wrote:
>>> >>>
>>> >>>
>>> >>> > On 25 Mar 2016, at 01:59, Mridul Muralidharan <mri...@gmail.com>
>>> >>> > wrote:
>>> >>> >
>>> >>> > Removing compatibility (with jdk, etc) can be done with a major
>>> >>> > release - given that 7 has been EOLed a while back and is now
>>> >>> > unsupported, we have to decide if we drop support for it in 2.0 or
>>> >>> > 3.0 (2+ years from now).
>>> >>> >
>>> >>> > Given the functionality & performance benefits of going to jdk8,
>>> >>> > the future enhancements relevant in the 2.x timeframe (scala,
>>> >>> > dependencies) which require it, and the simplicity wrt code, test
>>> >>> > & support, it looks like a good checkpoint to drop jdk7 support.
>>> >>> >
>>> >>> > As already mentioned in the thread, existing yarn clusters are
>>> >>> > unaffected if they want to continue running jdk7 and yet use
>>> >>> > spark2 (install jdk8 on all nodes and use it via JAVA_HOME, or
>>> >>> > worst case distribute jdk8 as an archive - suboptimal).
>>> >>>
>>> >>> you wouldn't want to dist it as an archive; it's not just the
>>> >>> binaries, it's the install phase. And you'd better remember to put
>>> >>> the JCE jar in on top of the JDK for kerberos to work.
>>> >>>
>>> >>> setting up environment vars to point to JDK8 in the launched
>>> >>> app/container avoids that. Yes, the ops team do need to install
>>> >>> java, but if you offer them the choice of "installing a centrally
>>> >>> managed Java" and "having my code try and install it", they should
>>> >>> go for the managed option.
>>> >>>
>>> >>> One thing to consider for 2.0 is to make it easier to set up those
>>> >>> env vars for both python and java. And, as the techniques for mixing
>>> >>> JDK versions are clearly not that well known, documenting them.
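>>> >>>
>>> >>> For python, the same per-app env-var plumbing could carry the
>>> >>> interpreter choice; a hedged sketch along the same lines (the
>>> >>> interpreter path is again a placeholder):
>>> >>>
>>> >>>   import org.apache.spark.SparkConf
>>> >>>
>>> >>>   // PYSPARK_PYTHON tells the python workers which interpreter to run
>>> >>>   val conf = new SparkConf()
>>> >>>     .set("spark.yarn.appMasterEnv.PYSPARK_PYTHON", "/opt/python/bin/python")
>>> >>>     .set("spark.executorEnv.PYSPARK_PYTHON", "/opt/python/bin/python")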
>>> >>>
>>> >>> (FWIW I've done code which even uploads its own hadoop-* JAR, but
>>> >>> what gets you is changes in the hadoop-native libs; you do need to
>>> >>> get the PATH var spot on)
>>> >>>
>>> >>>
>>> >>> > I am unsure about mesos (standalone might be an easier upgrade, I
>>> >>> > guess?).
>>> >>> >
>>> >>> >
>>> >>> > Proposal is for the 1.6.x line to continue to be supported with
>>> >>> > critical fixes; newer features will require 2.x and so jdk8.
>>> >>> >
>>> >>> > Regards
>>> >>> > Mridul
>>> >>> >
>>> >>> >
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Michael Gummelt
>>> >> Software Engineer
>>> >> Mesosphere
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Luciano Resende
>>> > http://twitter.com/lresende1975
>>> > http://lresende.blogspot.com/
>>>
>>>
>>>
>>> --
>>> Marcelo
>>>
>>>
>>
>
