flink 1.17 connector adapt

2023-03-30 Thread tian tian
Hi, Flink 1.17 has been released, but the Elasticsearch and RabbitMQ
connectors have not yet been adapted to it. Are there plans to adapt them?


Re: Unexpected behaviour when configuring both `withWatermarkAlignment` & `withIdleness`

2023-03-30 Thread Reem Razak via user
This sounds very much like our issue, thank you! We will follow along with
the bug.

Much appreciated,
- Reem

On Thu, Mar 30, 2023 at 9:20 AM Martijn Visser 
wrote:

> Hi Reem,
>
> My thinking is that this might be related to recently reported
> https://issues.apache.org/jira/browse/FLINK-31632.
>
> Best regards,
>
> Martijn
>
> On Wed, Mar 29, 2023 at 7:07 PM Reem Razak via user 
> wrote:
>
>> Hey Martijn,
>>
>> The version is 1.16.0
>>
>> On Wed, Mar 29, 2023 at 5:43 PM Martijn Visser 
>> wrote:
>>
>>> Hi Reem,
>>>
>>> What's the Flink version where you're encountering this issue?
>>>
>>> Best regards,
>>>
>>> Martijn
>>>
>>> On Wed, Mar 29, 2023 at 5:18 PM Reem Razak via user <
>>> user@flink.apache.org> wrote:
>>>
 Hey there!

 We are seeing a second Flink pipeline encountering similar issues when
 configuring both `withWatermarkAlignment` and `withIdleness`. The
 unexpected behaviour gets triggered after a Kafka cluster failover. Any
 thoughts on there being an incompatibility between the two?

 Thanks!

 On Wed, Nov 9, 2022 at 6:42 PM Reem Razak 
 wrote:

> Hi there,
>
> We are integrating the watermark alignment feature into a pipeline
> with a Kafka source during a "backfill" - i.e. playing from an earlier Kafka
> offset. While testing, we noticed some unexpected behaviour in the
> watermark advancement which was resolved by removing `withIdleness` from
> our watermark strategy.
>
>
> val watermarkStrategy = WatermarkStrategy
>   .forBoundedOutOfOrderness(Duration.ofMinutes(1))
>   .withTimestampAssigner(new TimestampedEventTimestampAssigner[Event])
>   .withWatermarkAlignment("alignment-group-1", Duration.ofMinutes(1))
>   .withIdleness(Duration.ofMinutes(5))
>
> I have attached a couple of screenshots of the watermarkAlignmentDrift
> metric. As you can see, the behaviour seems normal until a sudden drop in
> the value to ~ Long.MIN_VALUE, which causes the pipeline to stop emitting
> records completely from the source. Furthermore, the logs originating from
> https://github.com/dawidwys/flink/blob/888162083fbb5f62f809e5270ad968db58cc9c5b/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinator.java#L174
> also indicate that the `maxAllowedWatermark` switches to ~ Long.MIN_VALUE.
>
> We found that modifying the `updateInterval` passed into the alignment
> parameters seemed to correlate with how long the pipeline would operate
> before stopping - a larger interval of 20 minutes would encounter the issue
> later than an interval of 1 second.
>
> We are wondering if a bug exists when using both `withIdleness` and
> `withWatermarkAlignment`. Might it be related to
> https://issues.apache.org/jira/browse/FLINK-28975, or is there
> possibly a race condition in the watermark emission? We do not necessarily
> need to have both configured at the same time, but we were also surprised
> by the behaviour of the application. Has anyone run into a similar issue or
> have further insight?
>
> Much Appreciated,
> - Reem
>
>
>
>
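
For readers who hit the same symptom, here is a minimal sketch of the
workaround described in this thread, i.e. the same strategy with
`withIdleness` removed so the two mechanisms cannot interact. The Event
type and the inline timestamp assigner below are stand-ins for the
user-defined classes in the snippet above:

  import java.time.Duration
  import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

  // Stand-in for the user-defined event type from the thread.
  case class Event(ts: Long)

  // Same strategy as in the thread, minus `withIdleness`, so watermark
  // alignment and the idleness mechanism can never interact.
  val alignedOnly: WatermarkStrategy[Event] = WatermarkStrategy
    .forBoundedOutOfOrderness[Event](Duration.ofMinutes(1))
    .withTimestampAssigner(new SerializableTimestampAssigner[Event] {
      override def extractTimestamp(e: Event, recordTs: Long): Long = e.ts
    })
    .withWatermarkAlignment("alignment-group-1", Duration.ofMinutes(1))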


Re: KafkaSource consumer group

2023-03-30 Thread Tzu-Li (Gordon) Tai
Sorry, I meant to say "Hi Ben" :-)

On Thu, Mar 30, 2023 at 9:52 AM Tzu-Li (Gordon) Tai 
wrote:

> Hi Robert,
>
> This is a design choice. Flink's KafkaSource doesn't rely on consumer
> groups for assigning partitions / rebalancing / offset tracking. It
> manually assigns whatever partitions are in the specified topic across its
> consumer instances, and rebalances only when the Flink job / KafkaSource is
> rescaled.
>
> Is there a specific reason that you need two Flink jobs for this? I
> believe the Flink-way of doing this would be to have one job read the
> topic, and then you'd do a stream split if you want to have two different
> branches of processing business logic.
>
> Thanks,
> Gordon
>
> On Thu, Mar 30, 2023 at 9:34 AM Roberts, Ben (Senior Developer) via user <
> user@flink.apache.org> wrote:
>
>> Hi,
>>
>>
>>
>> Is there a way to run multiple flink jobs with the same Kafka group.id
>> and have them join the same consumer group?
>>
>>
>>
>> It seems that setting the group.id using
>> KafkaSource.builder().set_group_id() does not have the effect of creating
>> an actual consumer group in Kafka.
>>
>>
>>
>> Running the same flink job with the same group.id, consuming from the
>> same topic, will result in both flink jobs receiving the same messages from
>> the topic, rather than only one of the jobs receiving the messages (as
>> would be expected for consumers in a consumer group normally with Kafka).
>>
>>
>>
>> Is this a design choice, and is there a way to configure it so messages
>> can be split across two jobs using the same “group.id”?
>>
>>
>>
>> Thanks in advance,
>>
>> Ben
>>
>>
>>
>


Re: KafkaSource consumer group

2023-03-30 Thread Tzu-Li (Gordon) Tai
Hi Robert,

This is a design choice. Flink's KafkaSource doesn't rely on consumer
groups for assigning partitions / rebalancing / offset tracking. It
manually assigns whatever partitions are in the specified topic across its
consumer instances, and rebalances only when the Flink job / KafkaSource is
rescaled.

Is there a specific reason that you need two Flink jobs for this? I believe
the Flink-way of doing this would be to have one job read the topic, and
then you'd do a stream split if you want to have two different branches of
processing business logic.

Thanks,
Gordon

On Thu, Mar 30, 2023 at 9:34 AM Roberts, Ben (Senior Developer) via user <
user@flink.apache.org> wrote:

> Hi,
>
>
>
> Is there a way to run multiple flink jobs with the same Kafka group.id
> and have them join the same consumer group?
>
>
>
> It seems that setting the group.id using
> KafkaSource.builder().set_group_id() does not have the effect of creating
> an actual consumer group in Kafka.
>
>
>
> Running the same flink job with the same group.id, consuming from the
> same topic, will result in both flink jobs receiving the same messages from
> the topic, rather than only one of the jobs receiving the messages (as
> would be expected for consumers in a consumer group normally with Kafka).
>
>
>
> Is this a design choice, and is there a way to configure it so messages
> can be split across two jobs using the same “group.id”?
>
>
>
> Thanks in advance,
>
> Ben
>
>
>
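
A hedged sketch of the stream-split approach Gordon describes, assuming a
single DataStream already read from the topic; the predicates and branch
logic are illustrative placeholders, not from this thread:

  import org.apache.flink.streaming.api.scala._

  // `events` stands for the one stream read from the Kafka topic
  // (e.g. env.fromSource(kafkaSource, ...)). Each branch below replaces
  // one of the two separate jobs; the filter predicates are made up.
  def splitIntoBranches(events: DataStream[String]): Unit = {
    events.filter(e => e.startsWith("order")).map(e => s"branch-1: $e").print()
    events.filter(e => !e.startsWith("order")).map(e => s"branch-2: $e").print()
  }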


KafkaSource consumer group

2023-03-30 Thread Roberts, Ben (Senior Developer) via user
Hi,

Is there a way to run multiple flink jobs with the same Kafka group.id and have 
them join the same consumer group?

It seems that setting the group.id using KafkaSource.builder().set_group_id() 
does not have the effect of creating an actual consumer group in Kafka.

Running the same flink job with the same group.id, consuming from the same 
topic, will result in both flink jobs receiving the same messages from the 
topic, rather than only one of the jobs receiving the messages (as would be 
expected for consumers in a consumer group normally with Kafka).

Is this a design choice, and is there a way to configure it so messages can be 
split across two jobs using the same “group.id”?

Thanks in advance,
Ben

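For reference, a minimal sketch of wiring a group id into a KafkaSource
(Scala over the Java builder; the broker address and topic are placeholder
assumptions). The id is used for committing offsets back to Kafka, while
partition assignment stays with Flink, which is why two jobs sharing the
id still each read everything:

  import org.apache.flink.api.common.eventtime.WatermarkStrategy
  import org.apache.flink.api.common.serialization.SimpleStringSchema
  import org.apache.flink.connector.kafka.source.KafkaSource
  import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
  import org.apache.flink.streaming.api.scala._

  val env = StreamExecutionEnvironment.getExecutionEnvironment

  val source = KafkaSource.builder[String]()
    .setBootstrapServers("broker:9092")   // placeholder address
    .setTopics("my-topic")                // placeholder topic
    .setGroupId("shared-group")           // used for offset commits, not assignment
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build()

  env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source").print()
  env.execute("group-id-demo")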


Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-03-30 Thread Piotr Nowojski
Hey,

> 1. The Flink community agrees that we upgrade Kryo to a later version,
which means breaking all checkpoint/savepoint compatibility and releasing a
Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
dropped. This is probably the quickest way, but would still mean that we
expose Kryo in the Flink APIs, which is the main reason why we haven't been
able to upgrade Kryo at all.

This sounds pretty bad to me.

Has anyone looked into what it would take to provide a smooth migration
from Kryo2 -> Kryo5?

Best,
Piotrek

On Thu, Mar 30, 2023 at 4:54 PM Alexis Sarda-Espinosa 
wrote:

> Hi Martijn,
>
> just to be sure, if all state-related classes use a POJO serializer, Kryo
> will never come into play, right? Given FLINK-16686 [1], I wonder how many
> users actually have jobs with Kryo and RocksDB, but even if there aren't
> many, that still leaves those who don't use RocksDB for
> checkpoints/savepoints.
>
> If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
> users choose between v2/v5 jars by separating them like log4j2 jars?
>
> [1] https://issues.apache.org/jira/browse/FLINK-16686
>
> Regards,
> Alexis.
>
> On Thu, Mar 30, 2023 at 2:26 PM Martijn Visser <
> martijnvis...@apache.org> wrote:
>
>> Hi all,
>>
>> I also saw a thread on this topic from Clayton Wohl [1], which I'm
>> including in this discussion thread so it doesn't get lost.
>>
>> From my perspective, there are two main ways to get to Java 17:
>>
>> 1. The Flink community agrees that we upgrade Kryo to a later version,
>> which means breaking all checkpoint/savepoint compatibility and releasing a
>> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
>> dropped. This is probably the quickest way, but would still mean that we
>> expose Kryo in the Flink APIs, which is the main reason why we haven't been
>> able to upgrade Kryo at all.
>> 2. A contributor makes a contribution that bumps Kryo and either a)
>> automagically reads in all old checkpoints/savepoints using Kryo v2 and
>> writes them to new snapshots using Kryo v5 (as mentioned in the Kryo
>> migration guide [2][3]), or b) provides an offline tool that allows
>> interested users to migrate their snapshots manually before starting from
>> a newer version. That could potentially prevent the need to introduce a
>> new Flink major version. In both scenarios, ideally the contributor would
>> also help with avoiding the exposure of Kryo so that we will be in better
>> shape in the future.
>>
>> It would be good to get the opinion of the community for either of these
>> two options, or potentially for another one that I haven't mentioned. If it
>> appears that there's an overall agreement on the direction, I would propose
>> that a FLIP gets created which describes the entire process.
>>
>> Looking forward to the thoughts of others, including the Users (therefore
>> including the User ML).
>>
>> Best regards,
>>
>> Martijn
>>
>> [1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
>> [2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
>> [3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5
>>
>> On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
>> wrote:
>>
>>> I agree, there are several options to mitigate the migration from v2 to
>>> v5. Yet Oracle's roadmap is to end JDK 11 support in September this year.
>>>
>>>
>>>
>>> 
>>> From: ConradJam 
>>> Sent: Thursday, March 16, 2023 4:36 AM
>>> To: d...@flink.apache.org 
>>> Subject: Re: [Discussion] - Release major Flink version to support JDK
>>> 17 (LTS)
>>>
>>> EXTERNAL EMAIL
>>>
>>>
>>>
>>> Thanks for starting this discussion.
>>>
>>>
>>> I have been tracking this problem for a long time, until I saw a
>>> conversation in an issue a few days ago and learned that the Kryo
>>> version problem affects snapshot handling on JDK 17 [1] FLINK-24998.
>>>
>>> As @cherry said, it ruined our whole effort towards JDK 17.
>>>
>>> I am in favor of providing an external tool that migrates checkpoints
>>> from the old Kryo version to the new one in a single pass (maybe this
>>> tool could land in Flink 2.0?). Are there currently any plans or ideas
>>> for such a tool worth discussing?
>>>
>>>
>>> I think it should not be difficult to be compatible with both JDK 11
>>> and JDK 17. We should indeed abandon JDK 8 in 2.0.0; the docs already
>>> mark it as deprecated [2].
>>>
>>>
>>> I would add that we also need to pay attention to the interplay between
>>> the Scala version and JDK 17.
>>>
>>>
>>> [1] FLINK-24998  SIGSEGV in Kryo / C2 CompilerThread on Java 17
>>> https://issues.apache.org/jira/browse/FLINK-24998
>>>
>>> [2] FLINK-30501 Update Flink build instruction to deprecate Java 8 instead
>>> of requiring Java 11  https://issues.apache.org/jira/browse/FLINK-30501
>>>
>>> On Thu, Mar 16, 2023 at 12:54 AM Tamir Sagi  wrote:
>>>
>>> > Hey dev community,
>>> >
>>> > I'm writing 
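
As a rough illustration of the "read with v2, rewrite with v5" idea from
the Kryo migration guide [3], here is a hedged sketch that assumes both
Kryo versions coexist on the classpath, with Kryo 5 pulled in via its
versioned "kryo5" artifact (package com.esotericsoftware.kryo.kryo5, as
described in that guide). It migrates a single serialized object, not a
Flink savepoint:

  import java.io.{FileInputStream, FileOutputStream}
  import com.esotericsoftware.kryo.Kryo                   // Kryo v2
  import com.esotericsoftware.kryo.io.{Input => V2Input}
  import com.esotericsoftware.kryo.kryo5.{Kryo => Kryo5}  // Kryo v5, versioned jar
  import com.esotericsoftware.kryo.kryo5.io.{Output => V5Output}

  // Read one object with the old serializer, write it with the new one.
  def migrateObject(inPath: String, outPath: String): Unit = {
    val kryo2 = new Kryo()
    val kryo5 = new Kryo5()
    kryo5.setRegistrationRequired(false) // v5 requires registration by default

    val in = new V2Input(new FileInputStream(inPath))
    val obj = try kryo2.readClassAndObject(in) finally in.close()

    val out = new V5Output(new FileOutputStream(outPath))
    try kryo5.writeClassAndObject(out, obj) finally out.close()
  }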

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-03-30 Thread Alexis Sarda-Espinosa
Hi Martijn,

just to be sure, if all state-related classes use a POJO serializer, Kryo
will never come into play, right? Given FLINK-16686 [1], I wonder how many
users actually have jobs with Kryo and RocksDB, but even if there aren't
many, that still leaves those who don't use RocksDB for
checkpoints/savepoints.

If Kryo were to stay in the Flink APIs in v1.X, is it impossible to let
users choose between v2/v5 jars by separating them like log4j2 jars?

[1] https://issues.apache.org/jira/browse/FLINK-16686

Regards,
Alexis.

On Thu, Mar 30, 2023 at 2:26 PM Martijn Visser <
martijnvis...@apache.org> wrote:

> Hi all,
>
> I also saw a thread on this topic from Clayton Wohl [1], which I'm
> including in this discussion thread so it doesn't get lost.
>
> From my perspective, there are two main ways to get to Java 17:
>
> 1. The Flink community agrees that we upgrade Kryo to a later version,
> which means breaking all checkpoint/savepoint compatibility and releasing a
> Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
> dropped. This is probably the quickest way, but would still mean that we
> expose Kryo in the Flink APIs, which is the main reason why we haven't been
> able to upgrade Kryo at all.
> 2. A contributor makes a contribution that bumps Kryo and either a)
> automagically reads in all old checkpoints/savepoints using Kryo v2 and
> writes them to new snapshots using Kryo v5 (as mentioned in the Kryo
> migration guide [2][3]), or b) provides an offline tool that allows
> interested users to migrate their snapshots manually before starting from
> a newer version. That could potentially prevent the need to introduce a
> new Flink major version. In both scenarios, ideally the contributor would
> also help with avoiding the exposure of Kryo so that we will be in better
> shape in the future.
>
> It would be good to get the opinion of the community for either of these
> two options, or potentially for another one that I haven't mentioned. If it
> appears that there's an overall agreement on the direction, I would propose
> that a FLIP gets created which describes the entire process.
>
> Looking forward to the thoughts of others, including the Users (therefore
> including the User ML).
>
> Best regards,
>
> Martijn
>
> [1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
> [2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
> [3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5
>
> On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
> wrote:
>
>> I agree, there are several options to mitigate the migration from v2 to
>> v5. Yet Oracle's roadmap is to end JDK 11 support in September this year.
>>
>>
>>
>> 
>> From: ConradJam 
>> Sent: Thursday, March 16, 2023 4:36 AM
>> To: d...@flink.apache.org 
>> Subject: Re: [Discussion] - Release major Flink version to support JDK 17
>> (LTS)
>>
>> EXTERNAL EMAIL
>>
>>
>>
>> Thanks for starting this discussion.
>>
>>
>> I have been tracking this problem for a long time, until I saw a
>> conversation in an issue a few days ago and learned that the Kryo
>> version problem affects snapshot handling on JDK 17 [1] FLINK-24998.
>>
>> As @cherry said, it ruined our whole effort towards JDK 17.
>>
>> I am in favor of providing an external tool that migrates checkpoints
>> from the old Kryo version to the new one in a single pass (maybe this
>> tool could land in Flink 2.0?). Are there currently any plans or ideas
>> for such a tool worth discussing?
>>
>>
>> I think it should not be difficult to be compatible with both JDK 11
>> and JDK 17. We should indeed abandon JDK 8 in 2.0.0; the docs already
>> mark it as deprecated [2].
>>
>>
>> Here I would add that we also need to pay attention to the interplay
>> between the Scala version and JDK 17.
>>
>>
>> [1] FLINK-24998  SIGSEGV in Kryo / C2 CompilerThread on Java 17
>> https://issues.apache.org/jira/browse/FLINK-24998
>>
>> [2] FLINK-30501 Update Flink build instruction to deprecate Java 8 instead
>> of requiring Java 11  https://issues.apache.org/jira/browse/FLINK-30501
>>
>> On Thu, Mar 16, 2023 at 12:54 AM Tamir Sagi  wrote:
>>
>> > Hey dev community,
>> >
>> > I'm writing this email to kick off a discussion following this epic:
>> > FLINK-15736.
>> >
>> > We are moving towards JDK 17 (LTS); the only blocker now is Flink,
>> > which currently remains on JDK 11 (LTS). Flink does not support JDK 17
>> > yet, with no timeline. The reason, based on the aforementioned ticket,
>> > is the following tickets:
>> >
>> >   1.  FLINK-24998 - SIGSEGV in Kryo / C2 CompilerThread on Java 17<
>> > https://issues.apache.org/jira/browse/FLINK-24998>.
>> >   2.  FLINK-3154 - Update Kryo version from 2.24.0 to latest Kryo LTS
>> > version
>> >
>> > My question is whether it is possible to release a major version 
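
On the POJO question raised above, a minimal sketch of how a job can make
sure Kryo never comes into play: with the generic-type fallback disabled,
any type that would need Kryo fails fast when the job graph is built
(standard ExecutionConfig API; the types here are illustrative):

  import org.apache.flink.streaming.api.scala._

  val env = StreamExecutionEnvironment.getExecutionEnvironment

  // Any type that would fall back to Kryo now throws an
  // UnsupportedOperationException instead of silently using Kryo.
  env.getConfig.disableGenericTypes()

  // Case classes go through Flink's Scala case-class serializers;
  // Java-style POJOs go through the POJO serializer. Neither uses Kryo.
  case class SensorReading(id: String, value: Double)

  env.fromElements(SensorReading("a", 1.0), SensorReading("b", 2.0))
    .keyBy(_.id)
    .print()

  env.execute("pojo-only-check")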

Re: [Discussion] - Release major Flink version to support JDK 17 (LTS)

2023-03-30 Thread Martijn Visser
Hi all,

I also saw a thread on this topic from Clayton Wohl [1], which I'm
including in this discussion thread so it doesn't get lost.

From my perspective, there are two main ways to get to Java 17:

1. The Flink community agrees that we upgrade Kryo to a later version,
which means breaking all checkpoint/savepoint compatibility and releasing a
Flink 2.0 with Java 17 support added and Java 8 and Flink Scala API support
dropped. This is probably the quickest way, but would still mean that we
expose Kryo in the Flink APIs, which is the main reason why we haven't been
able to upgrade Kryo at all.
2. A contributor makes a contribution that bumps Kryo and either a)
automagically reads in all old checkpoints/savepoints using Kryo v2 and
writes them to new snapshots using Kryo v5 (as mentioned in the Kryo
migration guide [2][3]), or b) provides an offline tool that allows
interested users to migrate their snapshots manually before starting from
a newer version. That could potentially prevent the need to introduce a
new Flink major version. In both scenarios, ideally the contributor would
also help with avoiding the exposure of Kryo so that we will be in better
shape in the future.

It would be good to get the opinion of the community for either of these
two options, or potentially for another one that I haven't mentioned. If it
appears that there's an overall agreement on the direction, I would propose
that a FLIP gets created which describes the entire process.

Looking forward to the thoughts of others, including the Users (therefore
including the User ML).

Best regards,

Martijn

[1]  https://lists.apache.org/thread/qcw8wy9dv8szxx9bh49nz7jnth22p1v2
[2] https://lists.apache.org/thread/gv49jfkhmbshxdvzzozh017ntkst3sgq
[3] https://github.com/EsotericSoftware/kryo/wiki/Migration-to-v5

On Sun, Mar 19, 2023 at 8:16 AM Tamir Sagi 
wrote:

> I agree, there are several options to mitigate the migration from v2 to v5.
> Yet Oracle's roadmap is to end JDK 11 support in September this year.
>
>
>
> 
> From: ConradJam 
> Sent: Thursday, March 16, 2023 4:36 AM
> To: d...@flink.apache.org 
> Subject: Re: [Discussion] - Release major Flink version to support JDK 17
> (LTS)
>
> EXTERNAL EMAIL
>
>
>
> Thanks for starting this discussion.
>
>
> I have been tracking this problem for a long time, until I saw a
> conversation in an issue a few days ago and learned that the Kryo
> version problem affects snapshot handling on JDK 17 [1] FLINK-24998.
>
> As @cherry said, it ruined our whole effort towards JDK 17.
>
> I am in favor of providing an external tool that migrates checkpoints
> from the old Kryo version to the new one in a single pass (maybe this
> tool could land in Flink 2.0?). Are there currently any plans or ideas
> for such a tool worth discussing?
>
>
> I think it should not be difficult to be compatible with both JDK 11
> and JDK 17. We should indeed abandon JDK 8 in 2.0.0; the docs already
> mark it as deprecated [2].
>
>
> Here I would add that we also need to pay attention to the interplay
> between the Scala version and JDK 17.
>
>
> [1] FLINK-24998  SIGSEGV in Kryo / C2 CompilerThread on Java 17
> https://issues.apache.org/jira/browse/FLINK-24998
>
> [2] FLINK-30501 Update Flink build instruction to deprecate Java 8 instead
> of requiring Java 11  https://issues.apache.org/jira/browse/FLINK-30501
>
> On Thu, Mar 16, 2023 at 12:54 AM Tamir Sagi  wrote:
>
> > Hey dev community,
> >
> > I'm writing this email to kick off a discussion following this epic:
> > FLINK-15736.
> >
> > We are moving towards JDK 17 (LTS); the only blocker now is Flink,
> > which currently remains on JDK 11 (LTS). Flink does not support JDK 17
> > yet, with no timeline. The reason, based on the aforementioned ticket,
> > is the following tickets:
> >
> >   1.  FLINK-24998 - SIGSEGV in Kryo / C2 CompilerThread on Java 17<
> > https://issues.apache.org/jira/browse/FLINK-24998>.
> >   2.  FLINK-3154 - Update Kryo version from 2.24.0 to latest Kryo LTS
> > version
> >
> > My question is whether it is possible to release a major version (Flink
> > 2.0.0) using the latest Kryo version, for those who don't need to restore
> > old savepoints/checkpoints in the newer format.
> >
> >   1.  Leverage JDK 17 features within JVM
> >   2.  Moving from the old format to the newer one will be handled only
> > once - mitigation can be achieved by a conversion tool or external
> > serializers, both of which can be provided later on.
> >
> > I'd like to emphasize that the next JDK LTS (21) will be released this
> > September. Furthermore, Flink already supports JDK 12-15, which is very
> > close to JDK 17 (LTS), released in September 2021. JDK 11 will become
> > legacy soon, as more frameworks are moving towards JDK 17 and are less
> > likely to support JDK 11 in the near future. (For example, Spring 

[ANNOUNCE] TAC supporting Berlin Buzzwords

2023-03-30 Thread Martijn Visser
Hi everyone,

I'm forwarding the following information from the ASF Travel Assistance
Committee (TAC):

---

Hi All,

The ASF Travel Assistance Committee is supporting taking up to six (6)
people to attend Berlin Buzzwords [1] in June this year.

This includes Conference passes, and travel & accommodation as needed.

Please see our website at https://tac.apache.org for more information and
how to apply.

Applications close on 15th April.

Good luck to those that apply.

Gavin McDonald (VP TAC)

---

Best regards,

Martijn

[1] https://2023.berlinbuzzwords.de/


Re: [ANNOUNCE] Flink Table Store Joins Apache Incubator as Apache Paimon(incubating)

2023-03-30 Thread Jane Chan
Congratulations!

Best regards,
Jane

On Thu, Mar 30, 2023 at 1:38 PM Jiadong Lu  wrote:

> Congratulations !!!
>
> Best,
> Jiadong Lu
>
> On 2023/3/27 17:23, Yu Li wrote:
> > Dear Flinkers,
> >
> > As you may have noticed, we are pleased to announce that Flink Table Store
> > has joined the Apache Incubator as a separate project called Apache
> > Paimon (incubating) [1] [2] [3]. The new project still aims at building a
> > streaming data lake platform for high-speed data ingestion, change data
> > tracking and efficient real-time analytics, with the vision of supporting
> > a larger ecosystem and establishing a vibrant and neutral open source
> > community.
> >
> > We would like to thank everyone for their great support and efforts for
> > the Flink Table Store project, and warmly welcome everyone to join the
> > development and activities of the new project. Apache Flink will continue
> > to be one of the first-class citizens supported by Paimon, and we believe
> > that the Flink and Paimon communities will maintain close cooperation.
> >
> > Dear Flinkers,
> >
> > As you may have noticed, we are pleased to announce that Flink Table Store
> > has officially joined the Apache Incubator as an independent project
> > [1] [2] [3]. The new project is named Apache Paimon (incubating) and
> > remains committed to building a next-generation streaming lakehouse
> > platform for high-speed data ingestion, streaming data subscription and
> > efficient real-time analytics. The new project will also support a richer
> > ecosystem and build a vibrant and neutral open source community.
> >
> > Here we would like to thank everyone for the strong support of and
> > investment in the Flink Table Store project, and we warmly welcome
> > everyone to join the development and community activities of the new
> > project. Apache Flink will continue to be one of the main compute engines
> > supported by Paimon, and we also believe the Flink and Paimon communities
> > will maintain close cooperation.
> >
> > Best Regards,
> >
> > Yu (on behalf of the Apache Flink PMC and Apache Paimon PPMC)
> >
> > Best regards,
> >
> > Yu Li (on behalf of the Apache Flink PMC and Apache Paimon PPMC)
> >
> > [1] https://paimon.apache.org/
> >
> > [2] https://github.com/apache/incubator-paimon
> >
> > [3] https://cwiki.apache.org/confluence/display/INCUBATOR/PaimonProposal
>


Re: Unexpected behaviour when configuring both `withWatermarkAlignment` & `withIdleness`

2023-03-30 Thread Martijn Visser
Hi Reem,

My thinking is that this might be related to recently reported
https://issues.apache.org/jira/browse/FLINK-31632.

Best regards,

Martijn

On Wed, Mar 29, 2023 at 7:07 PM Reem Razak via user 
wrote:

> Hey Martijn,
>
> The version is 1.16.0
>
> On Wed, Mar 29, 2023 at 5:43 PM Martijn Visser 
> wrote:
>
>> Hi Reem,
>>
>> What's the Flink version where you're encountering this issue?
>>
>> Best regards,
>>
>> Martijn
>>
>> On Wed, Mar 29, 2023 at 5:18 PM Reem Razak via user <
>> user@flink.apache.org> wrote:
>>
>>> Hey there!
>>>
>>> We are seeing a second Flink pipeline encountering similar issues when
>>> configuring both `withWatermarkAlignment` and `withIdleness`. The
>>> unexpected behaviour gets triggered after a Kafka cluster failover. Any
>>> thoughts on there being an incompatibility between the two?
>>>
>>> Thanks!
>>>
>>> On Wed, Nov 9, 2022 at 6:42 PM Reem Razak 
>>> wrote:
>>>
 Hi there,

 We are integrating the watermark alignment feature into a pipeline with
 a Kafka source during a "backfill" - i.e. playing from an earlier Kafka
 offset. While testing, we noticed some unexpected behaviour in the
 watermark advancement which was resolved by removing `withIdleness` from
 our watermark strategy.


 val watermarkStrategy = WatermarkStrategy
   .forBoundedOutOfOrderness(Duration.ofMinutes(1))
   .withTimestampAssigner(new TimestampedEventTimestampAssigner[Event])
   .withWatermarkAlignment("alignment-group-1", Duration.ofMinutes(1))
   .withIdleness(Duration.ofMinutes(5))

 I have attached a couple of screenshots of the watermarkAlignmentDrift
 metric. As you can see, the behaviour seems normal until a sudden drop in
 the value to ~ Long.MIN_VALUE, which causes the pipeline to stop emitting
 records completely from the source. Furthermore, the logs originating from
 https://github.com/dawidwys/flink/blob/888162083fbb5f62f809e5270ad968db58cc9c5b/flink-runtime/src/main/java/org/apache/flink/runtime/source/coordinator/SourceCoordinator.java#L174
 also indicate that the `maxAllowedWatermark` switches to ~ Long.MIN_VALUE.

 We found that modifying the `updateInterval` passed into the alignment
 parameters seemed to correlate with how long the pipeline would operate
 before stopping - a larger interval of 20 minutes would encounter the issue
 later than an interval of 1 second.

 We are wondering if a bug exists when using both `withIdleness` and
 `withWatermarkAlignment`. Might it be related to
 https://issues.apache.org/jira/browse/FLINK-28975, or is there
 possibly a race condition in the watermark emission? We do not necessarily
 need to have both configured at the same time, but we were also surprised
 by the behaviour of the application. Has anyone run into a similar issue or
 have further insight?

 Much Appreciated,
 - Reem