Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-05-30 Thread Tamir Sagi
Hey Chesnay,

I'm sending a follow-up email regarding JDK 17 support.

I see the Epic [1] is in progress and frequently updated. I'm curious whether
there is an ETA or any plan for a major release with JDK 17 support that is not
backward compatible.

Thanks,
Tamir


[1] https://issues.apache.org/jira/browse/FLINK-15736

From: Alexey Novakov 
Sent: Friday, April 28, 2023 11:41 AM
To: Chesnay Schepler 
Cc: Thomas Weise ; d...@flink.apache.org 
; Jing Ge ; Tamir Sagi 
; Piotr Nowojski ; Alexis 
Sarda-Espinosa ; user 
Subject: Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)


EXTERNAL EMAIL


If breaking savepoint compatibility eventually becomes an option, I would
recommend trying to upgrade Flink's Scala version even to 2.13.

Best regards,
Alexey

On Fri, Apr 28, 2023 at 10:22 AM Chesnay Schepler 
mailto:ches...@apache.org>> wrote:
We don't know yet. I wanted to run some more experiments to see if I can't get
Scala 2.12.7 working on Java 17.

If that doesn't work, then it would also be an option to bump Scala in the Java
17 builds (breaking savepoint compatibility), and users would then have to use
only the Java APIs.

The alternative to _that_ is doing this when we drop the Scala API.

On 28/04/2023 01:11, Thomas Weise wrote:
Is the intention to bump the Flink major version and only support Java 17+? If 
so, can Scala not be upgraded at the same time?

Thanks,
Thomas


On Thu, Apr 27, 2023 at 4:53 PM Martijn Visser 
mailto:martijnvis...@apache.org>> wrote:
Scala 2.12.7 doesn't compile on Java 17, see
https://issues.apache.org/jira/browse/FLINK-25000.

On Thu, Apr 27, 2023 at 3:11 PM Jing Ge 
mailto:j...@ververica.com>> wrote:

> Thanks Tamir for the information. According to the latest comment of the
> task FLINK-24998, this bug should be gone while using the latest JDK 17. I
> was wondering whether it means that there are no more issues to stop us
> releasing a major Flink version to support Java 17? Did I miss something?
>
> Best regards,
> Jing
>
> On Thu, Apr 27, 2023 at 8:18 AM Tamir Sagi 
> mailto:tamir.s...@niceactimize.com>>
> wrote:
>
>> More details about the JDK bug here
>> https://bugs.openjdk.org/browse/JDK-8277529
>>
>> Related Jira ticket
>> https://issues.apache.org/jira/browse/FLINK-24998
>>
>> --
>> *From:* Jing Ge via user 
>> mailto:user@flink.apache.org>>
>> *Sent:* Monday, April 24, 2023 11:15 PM
>> *To:* Chesnay Schepler mailto:ches...@apache.org>>
>> *Cc:* Piotr Nowojski mailto:pnowoj...@apache.org>>; 
>> Alexis Sarda-Espinosa <
>> sarda.espin...@gmail.com<mailto:sarda.espin...@gmail.com>>; Martijn Visser 
>> mailto:martijnvis...@apache.org>>;
>> d...@flink.apache.org<mailto:d...@flink.apache.org> 
>> mailto:d...@flink.apache.org>>; user 
>> mailto:user@flink.apache.org>>
>> *Subject:* Re: [Discussion] - Release major Flink version to support JDK
>> 17 (LTS)
>>
>>
>> *EXTERNAL EMAIL*
>>
>>
>> Thanks Chesnay for working on this. Would you like to share more info
>> about the JDK bug?
>>
>> Best regards,
>> Jing
>>
>> On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler 
>> mailto:ches...@apache.org>>
>> wrote:
>>
>> As it turns out Kryo isn't a blocker; we ran into a JDK bug.
>>
>> On 31/03/2023 08:57, Chesnay Schepler wrote:
>>
>>
>> https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
>>
>> Kryo themselves state that v5 likely can't read v2 data.
>>
>> However, both versions can be on the classpath without conflict, as v5
>> offers a versioned artifact that includes the version in the package.
>>
>> It probably wouldn't be difficult to migrate a savepoint to Kryo v5,
>> purely from a read/write perspective.
>>
>> The bigger question is how we expose this new Kryo version in the API. If
>> we stick to the versioned jar we need to either duplicate all current
>> Kryo-related APIs or find a better way to integrate other serialization
>> stacks.
>> On 30/03/2023 17:50, Piotr Nowojski wrote:
>>
>> Hey,
>>
>> > 1. The Flink community agrees that we upgrade Kryo to a later version,
>> which means breaking all checkpoint/savepoint compatibility and releasing a
>> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
>> dropped. This is probably the quickest way, but would still mean that we
>> expose Kryo in the Flink APIs, which is the main reason why we haven't been
>> able to upgrade Kryo at all.
>>
>> This sounds pretty bad to me.
>>
>> Has anyone looked into what it would take to provide a smooth migration
>> from Kryo2 -> Kryo5?

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-28 Thread Alexey Novakov via user
If breaking savepoint compatibility eventually becomes an option, I would
recommend trying to upgrade Flink's Scala version even to *2.13*.

Best regards,
Alexey

On Fri, Apr 28, 2023 at 10:22 AM Chesnay Schepler 
wrote:

> We don't know yet. I wanted to run some more experiments to see if I can't
> get Scala 2.12.7 working on Java 17.
>
> If that doesn't work, then it would also be an option to bump Scala in the
> Java 17 builds (breaking savepoint compatibility), and users would then have
> to use only the Java APIs.
>
> The alternative to _that_ is doing this when we drop the Scala API.
>
> On 28/04/2023 01:11, Thomas Weise wrote:
>
> Is the intention to bump the Flink major version and only support Java
> 17+? If so, can Scala not be upgraded at the same time?
>
> Thanks,
> Thomas
>
>
> On Thu, Apr 27, 2023 at 4:53 PM Martijn Visser 
> wrote:
>
>> Scala 2.12.7 doesn't compile on Java 17, see
>> https://issues.apache.org/jira/browse/FLINK-25000.
>>
>> On Thu, Apr 27, 2023 at 3:11 PM Jing Ge  wrote:
>>
>> > Thanks Tamir for the information. According to the latest comment of the
>> > task FLINK-24998, this bug should be gone while using the latest JDK
>> 17. I
>> > was wondering whether it means that there are no more issues to stop us
>> > releasing a major Flink version to support Java 17? Did I miss
>> something?
>> >
>> > Best regards,
>> > Jing
>> >
>> > On Thu, Apr 27, 2023 at 8:18 AM Tamir Sagi > >
>> > wrote:
>> >
>> >> More details about the JDK bug here
>> >> https://bugs.openjdk.org/browse/JDK-8277529
>> >>
>> >> Related Jira ticket
>> >> https://issues.apache.org/jira/browse/FLINK-24998
>> >>
>> >> --
>> >> *From:* Jing Ge via user 
>> >> *Sent:* Monday, April 24, 2023 11:15 PM
>> >> *To:* Chesnay Schepler 
>> >> *Cc:* Piotr Nowojski ; Alexis Sarda-Espinosa <
>> >> sarda.espin...@gmail.com>; Martijn Visser ;
>> >> d...@flink.apache.org ; user <
>> user@flink.apache.org>
>> >> *Subject:* Re: [Discussion] - Release major Flink version to support
>> JDK
>> >> 17 (LTS)
>> >>
>> >>
>> >> *EXTERNAL EMAIL*
>> >>
>> >>
>> >> Thanks Chesnay for working on this. Would you like to share more info
>> >> about the JDK bug?
>> >>
>> >> Best regards,
>> >> Jing
>> >>
>> >> On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler 
>> >> wrote:
>> >>
>> >> As it turns out Kryo isn't a blocker; we ran into a JDK bug.
>> >>
>> >> On 31/03/2023 08:57, Chesnay Schepler wrote:
>> >>
>> >>
>> >>
>> https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
>> >>
>> >> Kryo themselves state that v5 likely can't read v2 data.
>> >>
>> >> However, both versions can be on the classpath without conflict, as v5
>> >> offers a versioned artifact that includes the version in the package.
>> >>
>> >> It probably wouldn't be difficult to migrate a savepoint to Kryo v5,
>> >> purely from a read/write perspective.
>> >>
>> >> The bigger question is how we expose this new Kryo version in the API.
>> If
>> >> we stick to the versioned jar we need to either duplicate all current
>> >> Kryo-related APIs or find a better way to integrate other serialization
>> >> stacks.
>> >> On 30/03/2023 17:50, Piotr Nowojski wrote:
>> >>
>> >> Hey,
>> >>
>> >> > 1. The Flink community agrees that we upgrade Kryo to a later
>> version,
>> >> which means breaking all checkpoint/savepoint compatibility and
>> releasing a
>> >> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API
>> support
>> >> dropped. This is probably the quickest way, but would still mean that
>> we
>> >> expose Kryo in the Flink APIs, which is the main reason why we haven't
>> been
>> >> able to upgrade Kryo at all.
>> >>
>> >> This sounds pretty bad to me.
>> >>
>> >> Has anyone looked into what it would take to provide a smooth migration
>> >> from Kryo2 -> Kryo5?
>> >>
>> >> Best,
>> >> Piotrek
>> >>
>> On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa <

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-28 Thread Chesnay Schepler
We don't know yet. I wanted to run some more experiments to see if I
can't get Scala 2.12.7 working on Java 17.


If that doesn't work, then it would also be an option to bump Scala in
the Java 17 builds (breaking savepoint compatibility), and users would
then have to use only the Java APIs.


The alternative to _that_ is doing this when we drop the Scala API.

On 28/04/2023 01:11, Thomas Weise wrote:
Is the intention to bump the Flink major version and only support Java 
17+? If so, can Scala not be upgraded at the same time?


Thanks,
Thomas


On Thu, Apr 27, 2023 at 4:53 PM Martijn Visser 
 wrote:


Scala 2.12.7 doesn't compile on Java 17, see
https://issues.apache.org/jira/browse/FLINK-25000.

On Thu, Apr 27, 2023 at 3:11 PM Jing Ge  wrote:

> Thanks Tamir for the information. According to the latest
comment of the
> task FLINK-24998, this bug should be gone while using the latest
JDK 17. I
> was wondering whether it means that there are no more issues to
stop us
> releasing a major Flink version to support Java 17? Did I miss
something?
>
> Best regards,
> Jing
>
> On Thu, Apr 27, 2023 at 8:18 AM Tamir Sagi

> wrote:
>
>> More details about the JDK bug here
>> https://bugs.openjdk.org/browse/JDK-8277529
>>
>> Related Jira ticket
>> https://issues.apache.org/jira/browse/FLINK-24998
>>
>> --
>> *From:* Jing Ge via user 
>> *Sent:* Monday, April 24, 2023 11:15 PM
>> *To:* Chesnay Schepler 
>> *Cc:* Piotr Nowojski ; Alexis
Sarda-Espinosa <
>> sarda.espin...@gmail.com>; Martijn Visser
;
    >> d...@flink.apache.org ; user

>> *Subject:* Re: [Discussion] - Release major Flink version to
support JDK
>> 17 (LTS)
>>
>>
>> *EXTERNAL EMAIL*
>>
>>
>> Thanks Chesnay for working on this. Would you like to share
more info
>> about the JDK bug?
>>
>> Best regards,
>> Jing
>>
>> On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler

>> wrote:
>>
>> As it turns out Kryo isn't a blocker; we ran into a JDK bug.
>>
>> On 31/03/2023 08:57, Chesnay Schepler wrote:
>>
>>
>>

https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
>>
>> Kryo themselves state that v5 likely can't read v2 data.
>>
>> However, both versions can be on the classpath without conflict, as v5
>> offers a versioned artifact that includes the version in the package.
>>
>> It probably wouldn't be difficult to migrate a savepoint to
Kryo v5,
>> purely from a read/write perspective.
>>
>> The bigger question is how we expose this new Kryo version in
the API. If
>> we stick to the versioned jar we need to either duplicate all
current
>> Kryo-related APIs or find a better way to integrate other
serialization
>> stacks.
>> On 30/03/2023 17:50, Piotr Nowojski wrote:
>>
>> Hey,
>>
>> > 1. The Flink community agrees that we upgrade Kryo to a later
version,
>> which means breaking all checkpoint/savepoint compatibility and
releasing a
>> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala
API support
>> dropped. This is probably the quickest way, but would still
mean that we
>> expose Kryo in the Flink APIs, which is the main reason why we
haven't been
>> able to upgrade Kryo at all.
>>
>> This sounds pretty bad to me.
>>
>> Has anyone looked into what it would take to provide a smooth
migration
>> from Kryo2 -> Kryo5?
>>
>> Best,
>> Piotrek
>>
>> On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa
>> wrote:
>>
>> Hi Martijn,
>>
>> just to be sure, if all state-related classes use a POJO
serializer, Kryo
>> will never come into play, right? Given FLINK-16686 [1], I
wonder how many
>> users actually have jobs with Kryo and RocksDB, but even if
there aren't
>> many, that still leaves those who don't use RocksDB for
>> checkpoints/savepoints.
>>
>> If Kryo were to stay in the Flink APIs in v1.X, is it
impossible to let
>> users choose between v2/v5 jars by separating them like log4j2
jars?
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-16686
>>
>> Regards,
 

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-27 Thread Thomas Weise
Is the intention to bump the Flink major version and only support Java 17+?
If so, can Scala not be upgraded at the same time?

Thanks,
Thomas


On Thu, Apr 27, 2023 at 4:53 PM Martijn Visser 
wrote:

> Scala 2.12.7 doesn't compile on Java 17, see
> https://issues.apache.org/jira/browse/FLINK-25000.
>
> On Thu, Apr 27, 2023 at 3:11 PM Jing Ge  wrote:
>
> > Thanks Tamir for the information. According to the latest comment of the
> > task FLINK-24998, this bug should be gone while using the latest JDK 17.
> I
> > was wondering whether it means that there are no more issues to stop us
> > releasing a major Flink version to support Java 17? Did I miss something?
> >
> > Best regards,
> > Jing
> >
> > On Thu, Apr 27, 2023 at 8:18 AM Tamir Sagi 
> > wrote:
> >
> >> More details about the JDK bug here
> >> https://bugs.openjdk.org/browse/JDK-8277529
> >>
> >> Related Jira ticket
> >> https://issues.apache.org/jira/browse/FLINK-24998
> >>
> >> --
> >> *From:* Jing Ge via user 
> >> *Sent:* Monday, April 24, 2023 11:15 PM
> >> *To:* Chesnay Schepler 
> >> *Cc:* Piotr Nowojski ; Alexis Sarda-Espinosa <
> >> sarda.espin...@gmail.com>; Martijn Visser ;
> >> d...@flink.apache.org ; user <
> user@flink.apache.org>
> >> *Subject:* Re: [Discussion] - Release major Flink version to support JDK
> >> 17 (LTS)
> >>
> >>
> >> *EXTERNAL EMAIL*
> >>
> >>
> >> Thanks Chesnay for working on this. Would you like to share more info
> >> about the JDK bug?
> >>
> >> Best regards,
> >> Jing
> >>
> >> On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler 
> >> wrote:
> >>
> >> As it turns out Kryo isn't a blocker; we ran into a JDK bug.
> >>
> >> On 31/03/2023 08:57, Chesnay Schepler wrote:
> >>
> >>
> >>
> https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
> >>
> >> Kryo themselves state that v5 likely can't read v2 data.
> >>
> >> However, both versions can be on the classpath without conflict, as v5
> >> offers a versioned artifact that includes the version in the package.
> >>
> >> It probably wouldn't be difficult to migrate a savepoint to Kryo v5,
> >> purely from a read/write perspective.
> >>
> >> The bigger question is how we expose this new Kryo version in the API.
> If
> >> we stick to the versioned jar we need to either duplicate all current
> >> Kryo-related APIs or find a better way to integrate other serialization
> >> stacks.
> >> On 30/03/2023 17:50, Piotr Nowojski wrote:
> >>
> >> Hey,
> >>
> >> > 1. The Flink community agrees that we upgrade Kryo to a later version,
> >> which means breaking all checkpoint/savepoint compatibility and
> releasing a
> >> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API
> support
> >> dropped. This is probably the quickest way, but would still mean that we
> >> expose Kryo in the Flink APIs, which is the main reason why we haven't
> been
> >> able to upgrade Kryo at all.
> >>
> >> This sounds pretty bad to me.
> >>
> >> Has anyone looked into what it would take to provide a smooth migration
> >> from Kryo2 -> Kryo5?
> >>
> >> Best,
> >> Piotrek
> >>
> >> On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa <
> sarda.espin...@gmail.com>
> >> wrote:
> >>
> >> Hi Martijn,
> >>
> >> just to be sure, if all state-related classes use a POJO serializer,
> Kryo
> >> will never come into play, right? Given FLINK-16686 [1], I wonder how
> many
> >> users actually have jobs with Kryo and RocksDB, but even if there aren't
> >> many, that still leaves those who don't use RocksDB for
> >> checkpoints/savepoints.
> >>
> >> If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
> >> users choose between v2/v5 jars by separating them like log4j2 jars?
> >>
> >> [1] https://issues.apache.org/jira/browse/FLINK-16686
> >>
> >> Regards,
> >> Alexis.
> >>
> >> On Thu, Mar 30, 2023 at 14:26, Martijn Visser <
> >> martijnvis...@apache.org> wrote:
> >>
> >> Hi all,
> >>
> >> I also saw a thread on this topic from Clayton Wohl [1] on this topic,
> >> which I'm 

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-27 Thread Martijn Visser
Scala 2.12.7 doesn't compile on Java 17, see
https://issues.apache.org/jira/browse/FLINK-25000.

On Thu, Apr 27, 2023 at 3:11 PM Jing Ge  wrote:

> Thanks Tamir for the information. According to the latest comment of the
> task FLINK-24998, this bug should be gone while using the latest JDK 17. I
> was wondering whether it means that there are no more issues to stop us
> releasing a major Flink version to support Java 17? Did I miss something?
>
> Best regards,
> Jing
>
> On Thu, Apr 27, 2023 at 8:18 AM Tamir Sagi 
> wrote:
>
>> More details about the JDK bug here
>> https://bugs.openjdk.org/browse/JDK-8277529
>>
>> Related Jira ticket
>> https://issues.apache.org/jira/browse/FLINK-24998
>>
>> --
>> *From:* Jing Ge via user 
>> *Sent:* Monday, April 24, 2023 11:15 PM
>> *To:* Chesnay Schepler 
>> *Cc:* Piotr Nowojski ; Alexis Sarda-Espinosa <
>> sarda.espin...@gmail.com>; Martijn Visser ;
>> d...@flink.apache.org ; user 
>> *Subject:* Re: [Discussion] - Release major Flink version to support JDK
>> 17 (LTS)
>>
>>
>> *EXTERNAL EMAIL*
>>
>>
>> Thanks Chesnay for working on this. Would you like to share more info
>> about the JDK bug?
>>
>> Best regards,
>> Jing
>>
>> On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler 
>> wrote:
>>
>> As it turns out Kryo isn't a blocker; we ran into a JDK bug.
>>
>> On 31/03/2023 08:57, Chesnay Schepler wrote:
>>
>>
>> https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
>>
>> Kryo themselves state that v5 likely can't read v2 data.
>>
>> However, both versions can be on the classpath without conflict, as v5
>> offers a versioned artifact that includes the version in the package.
>>
>> It probably wouldn't be difficult to migrate a savepoint to Kryo v5,
>> purely from a read/write perspective.
>>
>> The bigger question is how we expose this new Kryo version in the API. If
>> we stick to the versioned jar we need to either duplicate all current
>> Kryo-related APIs or find a better way to integrate other serialization
>> stacks.
>> On 30/03/2023 17:50, Piotr Nowojski wrote:
>>
>> Hey,
>>
>> > 1. The Flink community agrees that we upgrade Kryo to a later version,
>> which means breaking all checkpoint/savepoint compatibility and releasing a
>> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
>> dropped. This is probably the quickest way, but would still mean that we
>> expose Kryo in the Flink APIs, which is the main reason why we haven't been
>> able to upgrade Kryo at all.
>>
>> This sounds pretty bad to me.
>>
>> Has anyone looked into what it would take to provide a smooth migration
>> from Kryo2 -> Kryo5?
>>
>> Best,
>> Piotrek
>>
>> On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa
>> wrote:
>>
>> Hi Martijn,
>>
>> just to be sure, if all state-related classes use a POJO serializer, Kryo
>> will never come into play, right? Given FLINK-16686 [1], I wonder how many
>> users actually have jobs with Kryo and RocksDB, but even if there aren't
>> many, that still leaves those who don't use RocksDB for
>> checkpoints/savepoints.
>>
>> If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
>> users choose between v2/v5 jars by separating them like log4j2 jars?
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-16686
>>
>> Regards,
>> Alexis.
>>
>> On Thu, Mar 30, 2023 at 14:26, Martijn Visser <
>> martijnvis...@apache.org> wrote:
>>
>> Hi all,
>>
>> I also saw a thread on this topic from Clayton Wohl [1] on this topic,
>> which I'm including in this discussion thread to avoid that it gets lost.
>>
>> From my perspective, there's two main ways to get to Java 17:
>>
>> 1. The Flink community agrees that we upgrade Kryo to a later version,
>> which means breaking all checkpoint/savepoint compatibility and releasing a
>> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
>> dropped. This is probably the quickest way, but would still mean that we
>> expose Kryo in the Flink APIs, which is the main reason why we haven't been
>> able to upgrade Kryo at all.
>> 2. There's a contributor who makes a contribution that bumps Kryo, but
>> either a) automagically reads in all old checkpoints/savepoints in using
>> Kryo v2 and writes them to new snapshots using Kryo v5 (like is mentioned

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-27 Thread Jing Ge via user
Thanks Tamir for the information. According to the latest comment on the
task FLINK-24998, this bug should be gone when using the latest JDK 17. I
was wondering whether that means there are no more issues stopping us from
releasing a major Flink version to support Java 17? Did I miss something?

Best regards,
Jing

On Thu, Apr 27, 2023 at 8:18 AM Tamir Sagi 
wrote:

> More details about the JDK bug here
> https://bugs.openjdk.org/browse/JDK-8277529
>
> Related Jira ticket
> https://issues.apache.org/jira/browse/FLINK-24998
>
> --
> *From:* Jing Ge via user 
> *Sent:* Monday, April 24, 2023 11:15 PM
> *To:* Chesnay Schepler 
> *Cc:* Piotr Nowojski ; Alexis Sarda-Espinosa <
> sarda.espin...@gmail.com>; Martijn Visser ;
> d...@flink.apache.org ; user 
> *Subject:* Re: [Discussion] - Release major Flink version to support JDK
> 17 (LTS)
>
>
> *EXTERNAL EMAIL*
>
>
> Thanks Chesnay for working on this. Would you like to share more info
> about the JDK bug?
>
> Best regards,
> Jing
>
> On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler 
> wrote:
>
> As it turns out Kryo isn't a blocker; we ran into a JDK bug.
>
> On 31/03/2023 08:57, Chesnay Schepler wrote:
>
>
> https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
>
> Kryo themselves state that v5 likely can't read v2 data.
>
> However, both versions can be on the classpath without conflict, as v5
> offers a versioned artifact that includes the version in the package.
>
> It probably wouldn't be difficult to migrate a savepoint to Kryo v5,
> purely from a read/write perspective.
>
> The bigger question is how we expose this new Kryo version in the API. If
> we stick to the versioned jar we need to either duplicate all current
> Kryo-related APIs or find a better way to integrate other serialization
> stacks.
> On 30/03/2023 17:50, Piotr Nowojski wrote:
>
> Hey,
>
> > 1. The Flink community agrees that we upgrade Kryo to a later version,
> which means breaking all checkpoint/savepoint compatibility and releasing a
> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
> dropped. This is probably the quickest way, but would still mean that we
> expose Kryo in the Flink APIs, which is the main reason why we haven't been
> able to upgrade Kryo at all.
>
> This sounds pretty bad to me.
>
> Has anyone looked into what it would take to provide a smooth migration
> from Kryo2 -> Kryo5?
>
> Best,
> Piotrek
>
> On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa
> wrote:
>
> Hi Martijn,
>
> just to be sure, if all state-related classes use a POJO serializer, Kryo
> will never come into play, right? Given FLINK-16686 [1], I wonder how many
> users actually have jobs with Kryo and RocksDB, but even if there aren't
> many, that still leaves those who don't use RocksDB for
> checkpoints/savepoints.
>
> If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
> users choose between v2/v5 jars by separating them like log4j2 jars?
>
> [1] https://issues.apache.org/jira/browse/FLINK-16686
>
> Regards,
> Alexis.
>
> On Thu, Mar 30, 2023 at 14:26, Martijn Visser <
> martijnvis...@apache.org> wrote:
>
> Hi all,
>
> I also saw a thread on this topic from Clayton Wohl [1] on this topic,
> which I'm including in this discussion thread to avoid that it gets lost.
>
> From my perspective, there's two main ways to get to Java 17:
>
> 1. The Flink community agrees that we upgrade Kryo to a later version,
> which means breaking all checkpoint/savepoint compatibility and releasing a
> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
> dropped. This is probably the quickest way, but would still mean that we
> expose Kryo in the Flink APIs, which is the main reason why we haven't been
> able to upgrade Kryo at all.
> 2. There's a contributor who makes a contribution that bumps Kryo, but
> either a) automagically reads in all old checkpoints/savepoints in using
> Kryo v2 and writes them to new snapshots using Kryo v5 (like is mentioned
> in the Kryo migration guide [2][3] or b) provides an offline tool that
> allows users that are interested in migrating their snapshots manually
> before starting from a newer version. That potentially could prevent the
> need to introduce a new Flink major version. In both scenarios, ideally the
> contributor would also help with avoiding the exposure of Kryo so that we
> will be in a better shape in the future.
>
> It would be good to get the opinion of the community for either of these
> two options, or potentially for another one that I haven't mentioned. If it
> appears that there's an overall agreement on the direction, I would propose
> that a FLIP gets created which describes the entire process.

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-27 Thread Tamir Sagi
More details about the JDK bug here
https://bugs.openjdk.org/browse/JDK-8277529

Related Jira ticket
https://issues.apache.org/jira/browse/FLINK-24998


From: Jing Ge via user 
Sent: Monday, April 24, 2023 11:15 PM
To: Chesnay Schepler 
Cc: Piotr Nowojski ; Alexis Sarda-Espinosa 
; Martijn Visser ; 
d...@flink.apache.org ; user 
Subject: Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)


EXTERNAL EMAIL


Thanks Chesnay for working on this. Would you like to share more info about the 
JDK bug?

Best regards,
Jing

On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler 
mailto:ches...@apache.org>> wrote:
As it turns out Kryo isn't a blocker; we ran into a JDK bug.

On 31/03/2023 08:57, Chesnay Schepler wrote:
https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide

Kryo themselves state that v5 likely can't read v2 data.

However, both versions can be on the classpath without conflict, as v5 offers a
versioned artifact that includes the version in the package.
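
As a quick illustration of that coexistence (a sketch only, assuming the
versioned kryo5 artifact keeps its classes under com.esotericsoftware.kryo.kryo5),
both serializers can be instantiated side by side in one JVM:

    public class KryoCoexistenceSketch {
        public static void main(String[] args) {
            // Kryo v2 lives in com.esotericsoftware.kryo ...
            com.esotericsoftware.kryo.Kryo kryo2 = new com.esotericsoftware.kryo.Kryo();
            // ... while the versioned v5 artifact ships its classes in a separate
            // package, so the two versions do not clash on the classpath.
            com.esotericsoftware.kryo.kryo5.Kryo kryo5 = new com.esotericsoftware.kryo.kryo5.Kryo();
            System.out.println(kryo2.getClass().getName() + " / " + kryo5.getClass().getName());
        }
    }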

It probably wouldn't be difficult to migrate a savepoint to Kryo v5, purely 
from a read/write perspective.

The bigger question is how we expose this new Kryo version in the API. If we 
stick to the versioned jar we need to either duplicate all current Kryo-related 
APIs or find a better way to integrate other serialization stacks.

On 30/03/2023 17:50, Piotr Nowojski wrote:
Hey,

> 1. The Flink community agrees that we upgrade Kryo to a later version, which 
> means breaking all checkpoint/savepoint compatibility and releasing a Flink 
> 2.0 with Java 17 support added and Java 8 and Flink Scala API support 
> dropped. This is probably the quickest way, but would still mean that we 
> expose Kryo in the Flink APIs, which is the main reason why we haven't been 
> able to upgrade Kryo at all.

This sounds pretty bad to me.

Has anyone looked into what it would take to provide a smooth migration from 
Kryo2 -> Kryo5?

Best,
Piotrek

On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa
mailto:sarda.espin...@gmail.com>> wrote:
Hi Martijn,

just to be sure, if all state-related classes use a POJO serializer, Kryo will 
never come into play, right? Given FLINK-16686 [1], I wonder how many users 
actually have jobs with Kryo and RocksDB, but even if there aren't many, that 
still leaves those who don't use RocksDB for checkpoints/savepoints.

If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let users 
choose between v2/v5 jars by separating them like log4j2 jars?

[1] https://issues.apache.org/jira/browse/FLINK-16686

Regards,
Alexis.

On Thu, Mar 30, 2023 at 14:26, Martijn Visser
mailto:martijnvis...@apache.org>> wrote:
Hi all,

I also saw a thread on this topic from Clayton Wohl [1] on this topic, which 
I'm including in this discussion thread to avoid that it gets lost.

From my perspective, there's two main ways to get to Java 17:

1. The Flink community agrees that we upgrade Kryo to a later version, which 
means breaking all checkpoint/savepoint compatibility and releasing a Flink 2.0 
with Java 17 support added and Java 8 and Flink Scala API support dropped. This 
is probably the quickest way, but would still mean that we expose Kryo in the 
Flink APIs, which is the main reason why we haven't been able to upgrade Kryo 
at all.
2. There's a contributor who makes a contribution that bumps Kryo, but either 
a) automagically reads in all old checkpoints/savepoints in using Kryo v2 and 
writes them to new snapshots using Kryo v5 (like is mentioned in the Kryo 
migration guide [2][3] or b) provides an offline tool that allows users that 
are interested in migrating their snapshots manually before starting from a 
newer version. That potentially could prevent the need to introduce a new Flink 
major version. In both scenarios, ideally the contributor would also help with 
avoiding the exposure of Kryo so that we will be in a better shape in the 
future.

It would be good to get the opinion of the community for either of these two 
options, or potentially for another one that I haven't mentioned. If it appears 
that there's an overall agreement on the direction, I would propose that a FLIP 
gets created which describes the entire process.

Looking forward to the thoughts of others, including the Users (therefore 
including the User ML).

Best regards,

Martijn

[1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
[2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
[3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5

On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
mailto:tamir.s...@niceactimize.com>> wrote:
I agree, there are several options to mitigate the migration from v2 to v5.
Yet, Oracle's roadmap is to end JDK 11 support in September this year.




From: ConradJam mailto:jam.gz...@gmail.com>>
Sent: Thursday, March 

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-24 Thread Jing Ge via user
Thanks Chesnay for working on this. Would you like to share more info about
the JDK bug?

Best regards,
Jing

On Mon, Apr 24, 2023 at 11:39 AM Chesnay Schepler 
wrote:

> As it turns out Kryo isn't a blocker; we ran into a JDK bug.
>
> On 31/03/2023 08:57, Chesnay Schepler wrote:
>
>
> https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide
>
> Kryo themselves state that v5 likely can't read v2 data.
>
> However, both versions can be on the classpath without conflict, as v5
> offers a versioned artifact that includes the version in the package.
>
> It probably wouldn't be difficult to migrate a savepoint to Kryo v5,
> purely from a read/write perspective.
>
> The bigger question is how we expose this new Kryo version in the API. If
> we stick to the versioned jar we need to either duplicate all current
> Kryo-related APIs or find a better way to integrate other serialization
> stacks.
> On 30/03/2023 17:50, Piotr Nowojski wrote:
>
> Hey,
>
> > 1. The Flink community agrees that we upgrade Kryo to a later version,
> which means breaking all checkpoint/savepoint compatibility and releasing a
> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
> dropped. This is probably the quickest way, but would still mean that we
> expose Kryo in the Flink APIs, which is the main reason why we haven't been
> able to upgrade Kryo at all.
>
> This sounds pretty bad to me.
>
> Has anyone looked into what it would take to provide a smooth migration
> from Kryo2 -> Kryo5?
>
> Best,
> Piotrek
>
> On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa
> wrote:
>
>> Hi Martijn,
>>
>> just to be sure, if all state-related classes use a POJO serializer, Kryo
>> will never come into play, right? Given FLINK-16686 [1], I wonder how many
>> users actually have jobs with Kryo and RocksDB, but even if there aren't
>> many, that still leaves those who don't use RocksDB for
>> checkpoints/savepoints.
>>
>> If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
>> users choose between v2/v5 jars by separating them like log4j2 jars?
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-16686
>>
>> Regards,
>> Alexis.
>>
>> On Thu, Mar 30, 2023 at 14:26, Martijn Visser <
>> martijnvis...@apache.org> wrote:
>>
>>> Hi all,
>>>
>>> I also saw a thread on this topic from Clayton Wohl [1] on this topic,
>>> which I'm including in this discussion thread to avoid that it gets lost.
>>>
>>> From my perspective, there's two main ways to get to Java 17:
>>>
>>> 1. The Flink community agrees that we upgrade Kryo to a later version,
>>> which means breaking all checkpoint/savepoint compatibility and releasing a
>>> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
>>> dropped. This is probably the quickest way, but would still mean that we
>>> expose Kryo in the Flink APIs, which is the main reason why we haven't been
>>> able to upgrade Kryo at all.
>>> 2. There's a contributor who makes a contribution that bumps Kryo, but
>>> either a) automagically reads in all old checkpoints/savepoints in using
>>> Kryo v2 and writes them to new snapshots using Kryo v5 (like is mentioned
>>> in the Kryo migration guide [2][3] or b) provides an offline tool that
>>> allows users that are interested in migrating their snapshots manually
>>> before starting from a newer version. That potentially could prevent the
>>> need to introduce a new Flink major version. In both scenarios, ideally the
>>> contributor would also help with avoiding the exposure of Kryo so that we
>>> will be in a better shape in the future.
>>>
>>> It would be good to get the opinion of the community for either of these
>>> two options, or potentially for another one that I haven't mentioned. If it
>>> appears that there's an overall agreement on the direction, I would propose
>>> that a FLIP gets created which describes the entire process.
>>>
>>> Looking forward to the thoughts of others, including the Users
>>> (therefore including the User ML).
>>>
>>> Best regards,
>>>
>>> Martijn
>>>
>>> [1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
>>> [2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
>>> [3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5
>>>
>>> On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
>>> wrote:
>>>

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-04-24 Thread Chesnay Schepler

As it turns out Kryo isn't a blocker; we ran into a JDK bug.

On 31/03/2023 08:57, Chesnay Schepler wrote:

https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide

Kryo themselves state that v5 likely can't read v2 data.

However, both versions can be on the classpath without conflict, as v5
offers a versioned artifact that includes the version in the package.


It probably wouldn't be difficult to migrate a savepoint to Kryo v5, 
purely from a read/write perspective.


The bigger question is how we expose this new Kryo version in the API. 
If we stick to the versioned jar we need to either duplicate all 
current Kryo-related APIs or find a better way to integrate other 
serialization stacks.
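
For context, a rough sketch of that exposure (SensorReading and its serializer
below are made-up examples): Flink's ExecutionConfig accepts Kryo v2 serializer
classes directly in its method signatures, so a versioned v5 jar would call for
a parallel, duplicated set of registration methods.

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.Serializer;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;

    public class KryoExposureSketch {
        // Made-up user type.
        public static class SensorReading {
            public String id;
            public double value;
        }

        // A custom serializer written against the Kryo v2 classes that the
        // current Flink API exposes.
        public static class SensorReadingSerializer extends Serializer<SensorReading> {
            @Override
            public void write(Kryo kryo, Output output, SensorReading r) {
                output.writeString(r.id);
                output.writeDouble(r.value);
            }

            @Override
            public SensorReading read(Kryo kryo, Input input, Class<SensorReading> type) {
                SensorReading r = new SensorReading();
                r.id = input.readString();
                r.value = input.readDouble();
                return r;
            }
        }

        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // The parameter type is com.esotericsoftware.kryo.Serializer (v2), so
            // bumping Kryo either breaks this method or forces a duplicated,
            // v5-flavoured variant of it.
            env.getConfig().registerTypeWithKryoSerializer(
                    SensorReading.class, SensorReadingSerializer.class);
        }
    }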


On 30/03/2023 17:50, Piotr Nowojski wrote:

Hey,

> 1. The Flink community agrees that we upgrade Kryo to a later 
version, which means breaking all checkpoint/savepoint compatibility 
and releasing a Flink 2.0 with Java 17 support added and Java 8 and 
Flink Scala API support dropped. This is probably the quickest way, 
but would still mean that we expose Kryo in the Flink APIs, which is 
the main reason why we haven't been able to upgrade Kryo at all.


This sounds pretty bad to me.

Has anyone looked into what it would take to provide a smooth 
migration from Kryo2 -> Kryo5?


Best,
Piotrek

On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa wrote:


Hi Martijn,

just to be sure, if all state-related classes use a POJO
serializer, Kryo will never come into play, right? Given
FLINK-16686 [1], I wonder how many users actually have jobs with
Kryo and RocksDB, but even if there aren't many, that still
leaves those who don't use RocksDB for checkpoints/savepoints.

If Kryo were to stay in the Flink APIs in v1.X, is it impossible
to let users choose between v2/v5 jars by separating them like
log4j2 jars?

[1] https://issues.apache.org/jira/browse/FLINK-16686

Regards,
Alexis.

On Thu, Mar 30, 2023 at 14:26, Martijn Visser wrote:

Hi all,

I also saw a thread on this topic from Clayton Wohl [1] on
this topic, which I'm including in this discussion thread to
avoid that it gets lost.

From my perspective, there's two main ways to get to Java 17:

1. The Flink community agrees that we upgrade Kryo to a later
version, which means breaking all checkpoint/savepoint
compatibility and releasing a Flink 2.0 with Java 17 support
added and Java 8 and Flink Scala API support dropped. This is
probably the quickest way, but would still mean that we
expose Kryo in the Flink APIs, which is the main reason why
we haven't been able to upgrade Kryo at all.
2. There's a contributor who makes a contribution that bumps
Kryo, but either a) automagically reads in all old
checkpoints/savepoints in using Kryo v2 and writes them to
new snapshots using Kryo v5 (like is mentioned in the Kryo
migration guide [2][3] or b) provides an offline tool that
allows users that are interested in migrating their snapshots
manually before starting from a newer version. That
potentially could prevent the need to introduce a new Flink
major version. In both scenarios, ideally the contributor
would also help with avoiding the exposure of Kryo so that we
will be in a better shape in the future.

It would be good to get the opinion of the community for
either of these two options, or potentially for another one
that I haven't mentioned. If it appears that there's an
overall agreement on the direction, I would propose that a
FLIP gets created which describes the entire process.

Looking forward to the thoughts of others, including the
Users (therefore including the User ML).

Best regards,

Martijn

[1]
https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
[2]
https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
[3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5

On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi
 wrote:

I agree, there are several options to mitigate the
migration from v2 to v5.
Yet, Oracle's roadmap is to end JDK 11 support in September
this year.




From: ConradJam 
Sent: Thursday, March 16, 2023 4:36 AM
To: d...@flink.apache.org 
Subject: Re: [Discussion] - Release major Flink version
    to support JDK 17 (LTS)

EXTERNAL EMAIL



Thanks for starting this discussion.


I have been tracking this problem for a long time, until I saw a
conversation in an issue a few days ago and learned that the Kryo version
problem will affect the JDK17 compilation of snapshots [1] FLINK-24998.

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-03-31 Thread Chesnay Schepler

https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5#migration-guide

Kryo themselves state that v5 likely can't read v2 data.

However, both versions can be on the classpath without conflict, as v5
offers a versioned artifact that includes the version in the package.


It probably wouldn't be difficult to migrate a savepoint to Kryo v5, 
purely from a read/write perspective.
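
To make the read/write point concrete, a minimal per-value sketch (not Flink
code; it assumes the versioned v5 classes live under
com.esotericsoftware.kryo.kryo5 and it ignores Flink's snapshot layout and
serializer snapshots):

    import com.esotericsoftware.kryo.io.Input;          // Kryo v2 I/O
    import com.esotericsoftware.kryo.kryo5.io.Output;   // Kryo v5 I/O

    public class KryoValueMigrationSketch {
        // Re-encode one serialized value from Kryo v2 format to Kryo v5 format.
        // A real savepoint migration would have to apply this to every
        // Kryo-serialized state entry while keeping the snapshot metadata intact.
        public static <T> byte[] migrate(byte[] v2Bytes, Class<T> type) {
            com.esotericsoftware.kryo.Kryo kryo2 = new com.esotericsoftware.kryo.Kryo();
            com.esotericsoftware.kryo.kryo5.Kryo kryo5 = new com.esotericsoftware.kryo.kryo5.Kryo();
            kryo5.setRegistrationRequired(false); // v5 requires registration by default

            T value = kryo2.readObject(new Input(v2Bytes), type);

            Output out = new Output(4096, -1);    // let the buffer grow as needed
            kryo5.writeObject(out, value);
            return out.toBytes();
        }
    }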


The bigger question is how we expose this new Kryo version in the API. 
If we stick to the versioned jar we need to either duplicate all current 
Kryo-related APIs or find a better way to integrate other serialization 
stacks.


On 30/03/2023 17:50, Piotr Nowojski wrote:

Hey,

> 1. The Flink community agrees that we upgrade Kryo to a later 
version, which means breaking all checkpoint/savepoint compatibility 
and releasing a Flink 2.0 with Java 17 support added and Java 8 and 
Flink Scala API support dropped. This is probably the quickest way, 
but would still mean that we expose Kryo in the Flink APIs, which is 
the main reason why we haven't been able to upgrade Kryo at all.


This sounds pretty bad to me.

Has anyone looked into what it would take to provide a smooth 
migration from Kryo2 -> Kryo5?


Best,
Piotrek

On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa wrote:


Hi Martijn,

just to be sure, if all state-related classes use a POJO
serializer, Kryo will never come into play, right? Given
FLINK-16686 [1], I wonder how many users actually have jobs with
Kryo and RocksDB, but even if there aren't many, that still leaves
those who don't use RocksDB for checkpoints/savepoints.

If Kryo were to stay in the Flink APIs in v1.X, is it impossible
to let users choose between v2/v5 jars by separating them like
log4j2 jars?

[1] https://issues.apache.org/jira/browse/FLINK-16686

Regards,
Alexis.

On Thu, Mar 30, 2023 at 14:26, Martijn Visser wrote:

Hi all,

I also saw a thread on this topic from Clayton Wohl [1] on
this topic, which I'm including in this discussion thread to
avoid that it gets lost.

From my perspective, there's two main ways to get to Java 17:

1. The Flink community agrees that we upgrade Kryo to a later
version, which means breaking all checkpoint/savepoint
compatibility and releasing a Flink 2.0 with Java 17 support
added and Java 8 and Flink Scala API support dropped. This is
probably the quickest way, but would still mean that we expose
Kryo in the Flink APIs, which is the main reason why we
haven't been able to upgrade Kryo at all.
2. There's a contributor who makes a contribution that bumps
Kryo, but either a) automagically reads in all old
checkpoints/savepoints in using Kryo v2 and writes them to new
snapshots using Kryo v5 (like is mentioned in the Kryo
migration guide [2][3] or b) provides an offline tool that
allows users that are interested in migrating their snapshots
manually before starting from a newer version. That
potentially could prevent the need to introduce a new Flink
major version. In both scenarios, ideally the contributor
would also help with avoiding the exposure of Kryo so that we
will be in a better shape in the future.

It would be good to get the opinion of the community for
either of these two options, or potentially for another one
that I haven't mentioned. If it appears that there's an
overall agreement on the direction, I would propose that a
FLIP gets created which describes the entire process.

Looking forward to the thoughts of others, including the Users
(therefore including the User ML).

Best regards,

Martijn

[1]
https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
[2]
https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
[3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5

On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi
 wrote:

I agree, there are several options to mitigate the
migration from v2 to v5.
Yet, Oracle's roadmap is to end JDK 11 support in September
this year.




From: ConradJam 
Sent: Thursday, March 16, 2023 4:36 AM
To: d...@flink.apache.org 
Subject: Re: [Discussion] - Release major Flink version to
    support JDK 17 (LTS)

EXTERNAL EMAIL



Thanks for starting this discussion.


I have been tracking this problem for a long time, until I saw a
conversation in an issue a few days ago and learned that the Kryo version
problem will affect the JDK17 compilation of snapshots [1] FLINK-24998.

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-03-30 Thread Piotr Nowojski
Hey,

> 1. The Flink community agrees that we upgrade Kryo to a later version,
which means breaking all checkpoint/savepoint compatibility and releasing a
Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
dropped. This is probably the quickest way, but would still mean that we
expose Kryo in the Flink APIs, which is the main reason why we haven't been
able to upgrade Kryo at all.

This sounds pretty bad to me.

Has anyone looked into what it would take to provide a smooth migration
from Kryo2 -> Kryo5?

Best,
Piotrek

On Thu, Mar 30, 2023 at 16:54, Alexis Sarda-Espinosa wrote:

> Hi Martijn,
>
> just to be sure, if all state-related classes use a POJO serializer, Kryo
> will never come into play, right? Given FLINK-16686 [1], I wonder how many
> users actually have jobs with Kryo and RocksDB, but even if there aren't
> many, that still leaves those who don't use RocksDB for
> checkpoints/savepoints.
>
> If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
> users choose between v2/v5 jars by separating them like log4j2 jars?
>
> [1] https://issues.apache.org/jira/browse/FLINK-16686
>
> Regards,
> Alexis.
>
> On Thu, Mar 30, 2023 at 14:26, Martijn Visser <
> martijnvis...@apache.org> wrote:
>
>> Hi all,
>>
>> I also saw a thread on this topic from Clayton Wohl [1] on this topic,
>> which I'm including in this discussion thread to avoid that it gets lost.
>>
>> From my perspective, there's two main ways to get to Java 17:
>>
>> 1. The Flink community agrees that we upgrade Kryo to a later version,
>> which means breaking all checkpoint/savepoint compatibility and releasing a
>> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
>> dropped. This is probably the quickest way, but would still mean that we
>> expose Kryo in the Flink APIs, which is the main reason why we haven't been
>> able to upgrade Kryo at all.
>> 2. There's a contributor who makes a contribution that bumps Kryo, but
>> either a) automagically reads in all old checkpoints/savepoints in using
>> Kryo v2 and writes them to new snapshots using Kryo v5 (like is mentioned
>> in the Kryo migration guide [2][3] or b) provides an offline tool that
>> allows users that are interested in migrating their snapshots manually
>> before starting from a newer version. That potentially could prevent the
>> need to introduce a new Flink major version. In both scenarios, ideally the
>> contributor would also help with avoiding the exposure of Kryo so that we
>> will be in a better shape in the future.
>>
>> It would be good to get the opinion of the community for either of these
>> two options, or potentially for another one that I haven't mentioned. If it
>> appears that there's an overall agreement on the direction, I would propose
>> that a FLIP gets created which describes the entire process.
>>
>> Looking forward to the thoughts of others, including the Users (therefore
>> including the User ML).
>>
>> Best regards,
>>
>> Martijn
>>
>> [1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
>> [2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
>> [3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5
>>
>> On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
>> wrote:
>>
>>> I agree, there are several options to mitigate the migration from v2 to
>>> v5.
>>> Yet, Oracle's roadmap is to end JDK 11 support in September this year.
>>>
>>>
>>>
>>> 
>>> From: ConradJam 
>>> Sent: Thursday, March 16, 2023 4:36 AM
>>> To: d...@flink.apache.org 
>>> Subject: Re: [Discussion] - Release major Flink version to support JDK
>>> 17 (LTS)
>>>
>>> EXTERNAL EMAIL
>>>
>>>
>>>
>>> Thanks for starting this discussion.
>>>
>>>
>>> I have been tracking this problem for a long time, until I saw a
>>> conversation in an issue a few days ago and learned that the Kryo version
>>> problem will affect the JDK17 compilation of snapshots [1] FLINK-24998 ,
>>>
>>> As @cherry said it ruined our whole effort towards JDK17
>>>
>>> I am in favor of providing an external tool to migrate from the old Kryo
>>> checkpoint format to the new Kryo checkpoint format in one pass (maybe this
>>> tool could start in Flink 2.0?). Does this tool currently have any plans or
>>> ideas worth discussing?
>>>
>>>
>>> I think it should not be difficult to be compatible with JDK11 and JDK17.

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-03-30 Thread Alexis Sarda-Espinosa
Hi Martijn,

just to be sure, if all state-related classes use a POJO serializer, Kryo
will never come into play, right? Given FLINK-16686 [1], I wonder how many
users actually have jobs with Kryo and RocksDB, but even if there aren't
many, that still leaves those who don't use RocksDB for
checkpoints/savepoints.
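
For reference, a minimal sketch of how a job can be pinned to the POJO
serializer so that the generic Kryo fallback never kicks in (Flink 1.x
DataStream API; SensorReading is a made-up example type):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class NoKryoFallbackSketch {
        // POJO rules: public class, public no-arg constructor, and public
        // fields (or standard getters/setters), so Flink picks its PojoSerializer.
        public static class SensorReading {
            public String sensorId;
            public long timestamp;
            public double value;
            public SensorReading() {}
        }

        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Fail fast whenever a type would be handled by the generic (Kryo)
            // serializer, so Kryo never comes into play at runtime.
            env.getConfig().disableGenericTypes();
        }
    }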

If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
users choose between v2/v5 jars by separating them like log4j2 jars?

[1] https://issues.apache.org/jira/browse/FLINK-16686

Regards,
Alexis.

On Thu, Mar 30, 2023 at 14:26, Martijn Visser <
martijnvis...@apache.org> wrote:

> Hi all,
>
> I also saw a thread on this topic from Clayton Wohl [1] on this topic,
> which I'm including in this discussion thread to avoid that it gets lost.
>
> From my perspective, there's two main ways to get to Java 17:
>
> 1. The Flink community agrees that we upgrade Kryo to a later version,
> which means breaking all checkpoint/savepoint compatibility and releasing a
> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
> dropped. This is probably the quickest way, but would still mean that we
> expose Kryo in the Flink APIs, which is the main reason why we haven't been
> able to upgrade Kryo at all.
> 2. There's a contributor who makes a contribution that bumps Kryo, but
> either a) automagically reads in all old checkpoints/savepoints in using
> Kryo v2 and writes them to new snapshots using Kryo v5 (like is mentioned
> in the Kryo migration guide [2][3] or b) provides an offline tool that
> allows users that are interested in migrating their snapshots manually
> before starting from a newer version. That potentially could prevent the
> need to introduce a new Flink major version. In both scenarios, ideally the
> contributor would also help with avoiding the exposure of Kryo so that we
> will be in a better shape in the future.
>
> It would be good to get the opinion of the community for either of these
> two options, or potentially for another one that I haven't mentioned. If it
> appears that there's an overall agreement on the direction, I would propose
> that a FLIP gets created which describes the entire process.
>
> Looking forward to the thoughts of others, including the Users (therefore
> including the User ML).
>
> Best regards,
>
> Martijn
>
> [1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
> [2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
> [3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5
>
> On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
> wrote:
>
>> I agree, there are several options to mitigate the migration from v2 to
>> v5.
>> Yet, Oracle's roadmap is to end JDK 11 support in September this year.
>>
>>
>>
>> ________________
>> From: ConradJam 
>> Sent: Thursday, March 16, 2023 4:36 AM
>> To: d...@flink.apache.org 
>> Subject: Re: [Discussion] - Release major Flink version to support JDK 17
>> (LTS)
>>
>> EXTERNAL EMAIL
>>
>>
>>
>> Thanks for starting this discussion.
>>
>>
>> I have been tracking this problem for a long time, until I saw a
>> conversation in an issue a few days ago and learned that the Kryo version
>> problem will affect the JDK17 compilation of snapshots [1] FLINK-24998 ,
>>
>> As @cherry said it ruined our whole effort towards JDK17
>>
>> I am in favor of providing an external tool to migrate from the old Kryo
>> checkpoint format to the new Kryo checkpoint format in one pass (maybe this
>> tool could start in Flink 2.0?). Does this tool currently have any plans or
>> ideas worth discussing?
>>
>>
>> I think it should not be difficult to be compatible with JDK11 and JDK17.
>> We should indeed abandon JDK8 in 2.0.0. It is also mentioned in the doc
>> that it is marked as Deprecated [2]
>>
>>
>> Here I add that we need to pay attention to the version of Scala and the
>> version of JDK17
>>
>>
>> [1] FLINK-24998  SIGSEGV in Kryo / C2 CompilerThread on Java 17
>> https://issues.apache.org/jira/browse/FLINK-24998
>>
>> [2] FLINK-30501 Update Flink build instruction to deprecate Java 8 instead
>> of requiring Java 11  https://issues.apache.org/jira/browse/FLINK-30501
>>
>> Tamir Sagi wrote on Thu, Mar 16, 2023 at 00:54:
>>
>> > Hey dev community,
>> >
>> > I'm writing this email to kick off a discussion following this epic:
>> > FLINK-15736<https://issues.apache.org/jira/browse/FLINK-15736>.
>> >
>> > We are moving towards JDK 17 (LTS), the only blocker now is Flink which
>> > currently remains on JDK 11 (LTS).

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-03-30 Thread Martijn Visser
Hi all,

I also saw a thread on this topic from Clayton Wohl [1], which I'm
including in this discussion thread to avoid it getting lost.

From my perspective, there's two main ways to get to Java 17:

1. The Flink community agrees that we upgrade Kryo to a later version,
which means breaking all checkpoint/savepoint compatibility and releasing a
Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
dropped. This is probably the quickest way, but would still mean that we
expose Kryo in the Flink APIs, which is the main reason why we haven't been
able to upgrade Kryo at all.
2. There's a contributor who makes a contribution that bumps Kryo, but
either a) automagically reads in all old checkpoints/savepoints using
Kryo v2 and writes them to new snapshots using Kryo v5 (like is mentioned
in the Kryo migration guide [2][3]), or b) provides an offline tool that
allows users that are interested in migrating their snapshots manually
before starting from a newer version. That potentially could prevent the
need to introduce a new Flink major version. In both scenarios, ideally the
contributor would also help with avoiding the exposure of Kryo so that we
will be in a better shape in the future.

It would be good to get the opinion of the community for either of these
two options, or potentially for another one that I haven't mentioned. If it
appears that there's an overall agreement on the direction, I would propose
that a FLIP gets created which describes the entire process.

Looking forward to the thoughts of others, including the Users (therefore
including the User ML).

Best regards,

Martijn

[1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
[2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
[3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5

On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
wrote:

> I agree, there are several options to mitigate the migration from v2 to v5.
> Yet, Oracle's roadmap is to end JDK 11 support in September this year.
>
>
>
> 
> From: ConradJam 
> Sent: Thursday, March 16, 2023 4:36 AM
> To: d...@flink.apache.org 
> Subject: Re: [Discussion] - Release major Flink version to support JDK 17
> (LTS)
>
> EXTERNAL EMAIL
>
>
>
> Thanks for starting this discussion.
>
>
> I have been tracking this problem for a long time, until I saw a
> conversation in an issue a few days ago and learned that the Kryo version
> problem will affect the JDK17 compilation of snapshots [1] FLINK-24998 ,
>
> As @cherry said it ruined our whole effort towards JDK17
>
> I am in favor of providing an external tool to migrate from the old Kryo
> checkpoint format to the new Kryo checkpoint format in one pass (maybe this
> tool could start in Flink 2.0?). Does this tool currently have any plans or
> ideas worth discussing?
>
>
> I think it should not be difficult to be compatible with JDK11 and JDK17.
> We should indeed abandon JDK8 in 2.0.0. It is also mentioned in the doc
> that it is marked as Deprecated [2]
>
>
> Here I add that we need to pay attention to the version of Scala and the
> version of JDK17
>
>
> [1] FLINK-24998  SIGSEGV in Kryo / C2 CompilerThread on Java 17
> https://issues.apache.org/jira/browse/FLINK-24998
>
> [2] FLINK-30501 Update Flink build instruction to deprecate Java 8 instead
> of requiring Java 11  https://issues.apache.org/jira/browse/FLINK-30501
>
> Tamir Sagi wrote on Thu, Mar 16, 2023 at 00:54:
>
> > Hey dev community,
> >
> > I'm writing this email to kick off a discussion following this epic:
> > FLINK-15736<https://issues.apache.org/jira/browse/FLINK-15736>.
> >
> > We are moving towards JDK 17 (LTS), the only blocker now is Flink which
> > currently remains on JDK 11 (LTS). Flink does not support JDK 17 yet, with
> > no timeline; the reason, based on the aforementioned ticket, is the
> > following tickets:
> >
> >   1.  FLINK-24998 - SIGSEGV in Kryo / C2 CompilerThread on Java 17<
> > https://issues.apache.org/jira/browse/FLINK-24998>.
> >   2.  FLINK-3154 - Update Kryo version from 2.24.0 to latest Kryo LTS
> > version<https://issues.apache.org/jira/browse/FLINK-3154>
> >
> > My question is whether it is possible to release a major version (Flink
> > 2.0.0) using the latest Kryo version for those who don't need to restore
> > old savepoints/checkpoints in newer format.
> >
> >   1.  Leverage JDK 17 features within JVM
> >   2.  Moving from the old format to the newer one will be handled only
> > once - a mitigation can be achieved by a conversion tool or external
> > serializers, both can be provided later on.
> >
> > I'd like to emphasize that the next JDK LTS (21) will be re