Maybe in the future, but not right now, as the Hadoop 2.7 build is broken.

Also, I busted dev/run-tests.py in my changes to support Java 11 in PRBs:
https://github.com/apache/spark/pull/25585

Quick fix is in progress; testing now.

On Mon, Aug 26, 2019 at 10:23 AM Reynold Xin <r...@databricks.com> wrote:

> Would it be possible to have one build that works for both?
>
> On Mon, Aug 26, 2019 at 10:22 AM Dongjoon Hyun <dongjoon.h...@gmail.com>
> wrote:
>
>> Thank you all!
>>
>> Let me add more explanation of the current status.
>>
>>     - If you want to run on JDK8, you need to build on JDK8.
>>     - If you want to run on JDK11, you need to build on JDK11.
>>
>> The other combinations will not work.
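>>
>> For a quick sanity check of which bytecode level a given jar targets
>> (52 = Java 8, 55 = Java 11), something like the following works. This is
>> only a minimal Python sketch, and the jar path in the comment is
>> hypothetical:
>>
>>     import struct
>>     import zipfile
>>
>>     def class_major_version(jar_path: str) -> int:
>>         """Return the class-file major version of the first .class entry."""
>>         with zipfile.ZipFile(jar_path) as jar:
>>             name = next(n for n in jar.namelist() if n.endswith(".class"))
>>             # A class file starts with u4 magic, u2 minor, u2 major (big-endian).
>>             magic, _minor, major = struct.unpack(">IHH", jar.read(name)[:8])
>>             assert magic == 0xCAFEBABE  # class-file magic number
>>             return major
>>
>>     # Hypothetical path; point it at your own build:
>>     # class_major_version("assembly/target/scala-2.12/jars/spark-core_2.12.jar")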
>>
>> Currently, we have two Jenkins jobs: (1) is the one I pointed to, and (2) is
>> the one for the remaining community work.
>>
>>     1) Build and test on JDK11 (spark-master-test-maven-hadoop-3.2-jdk-11)
>>     2) Build on JDK8 and test on JDK11
>> (spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing)
>>
>> To keep JDK11 compatibility, the following was merged today.
>>
>>     [SPARK-28701][TEST-HADOOP3.2][TEST-JAVA11][K8S] adding java11
>> support for pull request builds
>>
>> But we still have a lot to do for Jenkins and the release, and we need your
>> support for JDK11. :)
>>
>> Bests,
>> Dongjoon.
>>
>>
>> On Sun, Aug 25, 2019 at 10:30 PM Takeshi Yamamuro <linguin....@gmail.com>
>> wrote:
>>
>>> Cool, congrats!
>>>
>>> Bests,
>>> Takeshi
>>>
>>> On Mon, Aug 26, 2019 at 1:01 PM Hichame El Khalfi <hich...@elkhalfi.com>
>>> wrote:
>>>
>>>> That's awesome!!!
>>>>
>>>> Thanks to everyone who made this possible :cheers:
>>>>
>>>> Hichame
>>>>
>>>> *From:* cloud0...@gmail.com
>>>> *Sent:* August 25, 2019 10:43 PM
>>>> *To:* lix...@databricks.com
>>>> *Cc:* felixcheun...@hotmail.com; ravishankar.n...@gmail.com;
>>>> dongjoon.h...@gmail.com; dev@spark.apache.org; u...@spark.apache.org
>>>> *Subject:* Re: JDK11 Support in Apache Spark
>>>>
>>>> Great work!
>>>>
>>>> On Sun, Aug 25, 2019 at 6:03 AM Xiao Li <lix...@databricks.com> wrote:
>>>>
>>>>> Thank you for your contributions! This is a great feature for Spark
>>>>> 3.0! We finally achieved it!
>>>>>
>>>>> Xiao
>>>>>
>>>>> On Sat, Aug 24, 2019 at 12:18 PM Felix Cheung <
>>>>> felixcheun...@hotmail.com> wrote:
>>>>>
>>>>>> That’s great!
>>>>>>
>>>>>> ------------------------------
>>>>>> *From:* ☼ R Nair <ravishankar.n...@gmail.com>
>>>>>> *Sent:* Saturday, August 24, 2019 10:57:31 AM
>>>>>> *To:* Dongjoon Hyun <dongjoon.h...@gmail.com>
>>>>>> *Cc:* dev@spark.apache.org <dev@spark.apache.org>; <u...@spark.apache.org>
>>>>>> *Subject:* Re: JDK11 Support in Apache Spark
>>>>>>
>>>>>> Finally!!! Congrats
>>>>>>
>>>>>> On Sat, Aug 24, 2019, 11:11 AM Dongjoon Hyun <dongjoon.h...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi, All.
>>>>>>>
>>>>>>> Thanks to your many, many contributions,
>>>>>>> the Apache Spark master branch passes on JDK11 as of today
>>>>>>> (with the `hadoop-3.2` profile: Apache Hadoop 3.2 and Hive 2.3.6).
>>>>>>>
>>>>>>>
>>>>>>> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2-jdk-11/326/
>>>>>>>     (JDK11 is used for building and testing.)
>>>>>>>
>>>>>>> We had already verified all UTs (including PySpark/SparkR) beforehand.
>>>>>>>
>>>>>>> Please feel free to use JDK11 to build/test/run the `master` branch
>>>>>>> and share your experience, including any issues. It will help the
>>>>>>> Apache Spark 3.0.0 release.
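>>>>>>>
>>>>>>> If it helps anyone getting started, here is a minimal sketch of the
>>>>>>> build-and-test loop, assuming JDK11 is installed at the hypothetical
>>>>>>> path below and that this runs from the Spark source root:
>>>>>>>
>>>>>>>     import os
>>>>>>>     import subprocess
>>>>>>>
>>>>>>>     # Point the Maven wrapper at JDK11; adjust the path for your machine.
>>>>>>>     env = dict(os.environ, JAVA_HOME="/usr/lib/jvm/java-11-openjdk")
>>>>>>>
>>>>>>>     # Build master with the hadoop-3.2 profile, then run the tests.
>>>>>>>     subprocess.run(["./build/mvn", "-Phadoop-3.2", "-DskipTests",
>>>>>>>                     "clean", "package"], env=env, check=True)
>>>>>>>     subprocess.run(["./build/mvn", "-Phadoop-3.2", "test"],
>>>>>>>                    env=env, check=True)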
>>>>>>>
>>>>>>> For follow-ups, please track
>>>>>>> https://issues.apache.org/jira/browse/SPARK-24417 .
>>>>>>> The next step is `how to support JDK8/JDK11 together in a single
>>>>>>> artifact`.
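>>>>>>>
>>>>>>> In the meantime, you can confirm which JDK a running session is
>>>>>>> actually on with a small PySpark sketch. Note it goes through the
>>>>>>> internal py4j gateway (`_jvm`), so treat it as a debugging aid, not
>>>>>>> a public API:
>>>>>>>
>>>>>>>     from pyspark.sql import SparkSession
>>>>>>>
>>>>>>>     spark = SparkSession.builder.master("local[1]").getOrCreate()
>>>>>>>     # Ask the JVM behind the session for its Java version.
>>>>>>>     jv = spark.sparkContext._jvm.java.lang.System.getProperty("java.version")
>>>>>>>     print(jv)  # e.g. "1.8.0_222" on JDK8, "11.0.4" on JDK11
>>>>>>>     spark.stop()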
>>>>>>>
>>>>>>> Bests,
>>>>>>> Dongjoon.
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>> --
>>> ---
>>> Takeshi Yamamuro
>>>
>>

-- 
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu
