Re: Request for unsubscribing

2024-06-16 Thread Hang Ruan
Hi,

I think you have unsubscribed from the user mailing list.
For the dev mailing list, please send an email to dev-unsubscr...@flink.apache.org if
you want to unsubscribe from dev@flink.apache.org.

You can refer to [1] for more details.

Best,
Hang

[1] https://flink.apache.org/community.html#mailing-lists

Harshodai kolluru  wrote on Sun, Jun 16, 2024 at 04:04:

> Hey Admin, I am getting a bunch of emails from Apache Flink, please remove
> my subscription.
>
> Thanks!
>


Re: [VOTE] FLIP-464: Merge "flink run" and "flink run-application"

2024-06-16 Thread Hang Ruan
Thanks for the FLIP.

+1 (non-binding)

Best,
Hang

Venkatakrishnan Sowrirajan  wrote on Mon, Jun 17, 2024 at 02:00:

> +1. Thanks for driving this proposal, Ferenc!
>
> Regards
> Venkata krishnan
>
>
> On Thu, Jun 13, 2024 at 10:54 AM Jeyhun Karimov 
> wrote:
>
> > Thanks for driving this.
> > +1 (non-binding)
> >
> > Regards,
> > Jeyhun
> >
> > On Thu, Jun 13, 2024 at 5:23 PM Gabor Somogyi  >
> > wrote:
> >
> > > +1 (binding)
> > >
> > > G
> > >
> > >
> > > On Wed, Jun 12, 2024 at 5:23 PM Ferenc Csaky
>  > >
> > > wrote:
> > >
> > > > Hello devs,
> > > >
> > > > I would like to start a vote on FLIP-464 [1]. The FLIP proposes to
> > > > merge the "flink run-application" functionality back into "flink
> > > > run", so that the latter will be capable of deploying jobs in all
> > > > deployment modes. More details are in the FLIP. Discussion thread [2].
> > > >
> > > > The vote will be open for at least 72 hours (until 2024 March 23
> 14:03
> > > > UTC) unless there
> > > > are any objections or insufficient votes.
> > > >
> > > > Thanks,
> > > > Ferenc
> > > >
> > > > [1]
> > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=311626179
> > > > [2]
> > > > https://lists.apache.org/thread/xh58xs0y58kqjmfvd4yor79rv6dlcg5g
> > >
> >
>


Re: [VOTE] Release 1.19.1, release candidate #1

2024-06-11 Thread Hang Ruan
+1(non-binding)

- Verified signatures
- Verified hashsums
- Checked Github release tag
- Verified that source archives contain no binary files
- Reviewed the flink-web PR
- Checked that the jar is built with JDK 1.8

Best,
Hang
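As a side note for readers of this archive, the signature and checksum checks listed above can be sketched in shell. The file below is a local stand-in rather than the actual RC artifact, and the gpg commands are shown only as comments since they need the real files and KEYS from dist.apache.org:

```shell
# Checksum verification as in the checklist above, demonstrated on a local
# stand-in file (a real check runs against the RC artifact and the
# published .sha512 file from dist.apache.org):
echo "stand-in for the source archive" > flink-1.19.1-src.tgz
sha512sum flink-1.19.1-src.tgz > flink-1.19.1-src.tgz.sha512
sha512sum -c flink-1.19.1-src.tgz.sha512   # prints "flink-1.19.1-src.tgz: OK"

# Signature verification against the real RC would additionally be:
#   gpg --import KEYS
#   gpg --verify flink-1.19.1-src.tgz.asc flink-1.19.1-src.tgz
```

The same pattern applies to any of the release-candidate votes in this digest.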

gongzhongqiang  wrote on Tue, Jun 11, 2024 at 15:53:

> +1(non-binding)
>
> - Verified signatures and sha512
> - Checked that the Github release tag exists
> - Verified that source archives contain no binary files
> - Built the source with jdk8 on Ubuntu 22.04 successfully
> - Reviewed the flink-web PR
>
> Best,
> Zhongqiang Gong
>
> Hong Liang  wrote on Thu, Jun 6, 2024 at 23:39:
>
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the flink v1.19.1,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint B78A5EA1 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.19.1-rc1" [5],
> > * website pull request listing the new release and adding announcement
> blog
> > post [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Hong
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354399
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.19.1-rc1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1736/
> > [5] https://github.com/apache/flink/releases/tag/release-1.19.1-rc1
> > [6] https://github.com/apache/flink-web/pull/745
> >
>


Re: [ANNOUNCE] New Apache Flink PMC Member - Fan Rui

2024-06-05 Thread Hang Ruan
Congratulations, Rui!

Best,
Hang

Samrat Deb  wrote on Thu, Jun 6, 2024 at 10:28:

> Congratulations Rui
>
> Bests,
> Samrat
>
> On Thu, 6 Jun 2024 at 7:45 AM, Yuxin Tan  wrote:
>
> > Congratulations, Rui!
> >
> > Best,
> > Yuxin
> >
> >
> > Xuannan Su  wrote on Thu, Jun 6, 2024 at 09:58:
> >
> > > Congratulations!
> > >
> > > Best regards,
> > > Xuannan
> > >
> > > On Thu, Jun 6, 2024 at 9:53 AM Hangxiang Yu 
> wrote:
> > > >
> > > > Congratulations, Rui !
> > > >
> > > > On Thu, Jun 6, 2024 at 9:18 AM Lincoln Lee 
> > > wrote:
> > > >
> > > > > Congratulations, Rui!
> > > > >
> > > > > Best,
> > > > > Lincoln Lee
> > > > >
> > > > >
> > > > > > Lijie Wang  wrote on Thu, Jun 6, 2024 at 09:11:
> > > > >
> > > > > > Congratulations, Rui!
> > > > > >
> > > > > > Best,
> > > > > > Lijie
> > > > > >
> > > > > > Rodrigo Meneses  wrote on Wed, Jun 5, 2024 at 21:35:
> > > > > >
> > > > > > > All the best
> > > > > > >
> > > > > > > On Wed, Jun 5, 2024 at 5:56 AM xiangyu feng <
> > xiangyu...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Congratulations, Rui!
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Xiangyu Feng
> > > > > > > >
> > > > > > > > Feng Jin  wrote on Wed, Jun 5, 2024 at 20:42:
> > > > > > > >
> > > > > > > > > Congratulations, Rui!
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Feng Jin
> > > > > > > > >
> > > > > > > > > On Wed, Jun 5, 2024 at 8:23 PM Yanfei Lei <
> > fredia...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Congratulations, Rui!
> > > > > > > > > >
> > > > > > > > > > Best,
> > > > > > > > > > Yanfei
> > > > > > > > > >
> > > > > > > > > > Luke Chen  wrote on Wed, Jun 5, 2024 at 20:08:
> > > > > > > > > > >
> > > > > > > > > > > Congrats, Rui!
> > > > > > > > > > >
> > > > > > > > > > > Luke
> > > > > > > > > > >
> > > > > > > > > > > On Wed, Jun 5, 2024 at 8:02 PM Jiabao Sun <
> > > > > jiabao...@apache.org>
> > > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Congrats, Rui. Well-deserved!
> > > > > > > > > > > >
> > > > > > > > > > > > Best,
> > > > > > > > > > > > Jiabao
> > > > > > > > > > > >
> > > > > > > > > > > > Zhanghao Chen  wrote on
> > > > > > > > > > > > Wed, Jun 5, 2024 at 19:29:
> > > > > > > > > > > >
> > > > > > > > > > > > > Congrats, Rui!
> > > > > > > > > > > > >
> > > > > > > > > > > > > Best,
> > > > > > > > > > > > > Zhanghao Chen
> > > > > > > > > > > > > 
> > > > > > > > > > > > > From: Piotr Nowojski 
> > > > > > > > > > > > > Sent: Wednesday, June 5, 2024 18:01
> > > > > > > > > > > > > To: dev ; rui fan <
> > > > > > 1996fan...@gmail.com>
> > > > > > > > > > > > > Subject: [ANNOUNCE] New Apache Flink PMC Member -
> Fan
> > > Rui
> > > > > > > > > > > > >
> > > > > > > > > > > > > Hi everyone,
> > > > > > > > > > > > >
> > > > > > > > > > > > > On behalf of the PMC, I'm very happy to announce
> > > another
> > > > > new
> > > > > > > > Apache
> > > > > > > > > > Flink
> > > > > > > > > > > > > PMC Member - Fan Rui.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Rui has been active in the community since August
> > 2019.
> > > > > > During
> > > > > > > > this
> > > > > > > > > > time
> > > > > > > > > > > > he
> > > > > > > > > > > > > has contributed a lot of new features. Among
> others:
> > > > > > > > > > > > >   - Decoupling Autoscaler from Kubernetes Operator,
> > and
> > > > > > > > supporting
> > > > > > > > > > > > > Standalone Autoscaler
> > > > > > > > > > > > >   - Improvements to checkpointing, flamegraphs,
> > restart
> > > > > > > > strategies,
> > > > > > > > > > > > > watermark alignment, network shuffles
> > > > > > > > > > > > >   - Optimizing the memory and CPU usage of large
> > > operators,
> > > > > > > > greatly
> > > > > > > > > > > > > reducing the risk and probability of TaskManager
> OOM
> > > > > > > > > > > > >
> > > > > > > > > > > > > He reviewed a significant amount of PRs and has
> been
> > > active
> > > > > > > both
> > > > > > > > on
> > > > > > > > > > the
> > > > > > > > > > > > > mailing lists and in Jira helping to both maintain
> > and
> > > grow
> > > > > > > > Apache
> > > > > > > > > > > > Flink's
> > > > > > > > > > > > > community. He is also our current Flink 1.20
> release
> > > > > manager.
> > > > > > > > > > > > >
> > > > > > > > > > > > > In the last 12 months, Rui has been the most active
> > > > > > contributor
> > > > > > > > in
> > > > > > > > > > the
> > > > > > > > > > > > > Flink Kubernetes Operator project, while being the
> > 2nd
> > > most
> > > > > > > > active
> > > > > > > > > > Flink
> > > > > > > > > > > > > contributor at the same time.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Please join me in welcoming and congratulating Fan
> > Rui!
> > > > > > > > > > > > >
> > > > > > > > > > > > > Best,
> > > > > > > > > > > > > Piotrek (on behalf of the Flink PMC)
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Best,
> > > > 

Re: [ANNOUNCE] New Apache Flink PMC Member - Weijie Guo

2024-06-04 Thread Hang Ruan
Congratulations Weijie!

Best,
Hang

Yanfei Lei  wrote on Tue, Jun 4, 2024 at 16:24:

> Congratulations!
>
> Best,
> Yanfei
>
> Leonard Xu  wrote on Tue, Jun 4, 2024 at 16:20:
> >
> > Congratulations!
> >
> > Best,
> > Leonard
> >
> > > On Jun 4, 2024, at 4:02 PM, Yangze Guo  wrote:
> > >
> > > Congratulations!
> > >
> > > Best,
> > > Yangze Guo
> > >
> > > On Tue, Jun 4, 2024 at 4:00 PM Weihua Hu 
> wrote:
> > >>
> > >> Congratulations, Weijie!
> > >>
> > >> Best,
> > >> Weihua
> > >>
> > >>
> > >> On Tue, Jun 4, 2024 at 3:03 PM Yuxin Tan 
> wrote:
> > >>
> > >>> Congratulations, Weijie!
> > >>>
> > >>> Best,
> > >>> Yuxin
> > >>>
> > >>>
> >  Yuepeng Pan  wrote on Tue, Jun 4, 2024 at 14:57:
> > >>>
> >  Congratulations !
> > 
> > 
> >  Best,
> >  Yuepeng Pan
> > 
> >  At 2024-06-04 14:45:45, "Xintong Song" 
> wrote:
> > > Hi everyone,
> > >
> > > On behalf of the PMC, I'm very happy to announce that Weijie Guo
> has
> >  joined
> > > the Flink PMC!
> > >
> > > Weijie has been an active member of the Apache Flink community for
> many
> > > years. He has made significant contributions in many components,
> > >>> including
> > > runtime, shuffle, sdk, connectors, etc. He has driven /
> participated in
> > > many FLIPs, authored and reviewed hundreds of PRs, been
> consistently
> >  active
> > > on mailing lists, and also helped with release management of 1.20
> and
> > > several other bugfix releases.
> > >
> > > Congratulations and welcome Weijie!
> > >
> > > Best,
> > >
> > > Xintong (on behalf of the Flink PMC)
> > 
> > >>>
> >
>


Re: [DISCUSS] Flink CDC 3.1.1 Release

2024-05-30 Thread Hang Ruan
Hi, Xiqian.

+1 for releasing 3.1.1. Thanks for the discussion.

Best,
Hang

gongzhongqiang  wrote on Thu, May 30, 2024 at 09:07:

> +1
> Thanks Xiqian.
>
> Best,
> Zhongqiang Gong
>
> Xiqian YU  wrote on Tue, May 28, 2024 at 19:44:
>
> > Hi devs,
> >
> > I would like to make a proposal about creating a new Flink CDC 3.1 patch
> > release (3.1.1). It’s been a week since the last CDC version 3.1.0 got
> > released [1], and since then, 7 tickets have been closed, 4 of them are
> of
> > high priority.
> >
> > Currently, there are 5 items open: 1 of them is a blocker, which stops
> > users from restoring from existing checkpoints after upgrading [2].
> > There’s a PR ready that will be merged soon. The other 4 have approved
> > PRs that will also be merged soon [3][4][5][6]. I propose that a patch
> > version be released after all pending tickets are closed.
> >
> > Please reply if there are any unresolved blocking issues you’d like to
> > include in this release.
> >
> > Regards,
> > Xiqian
> >
> > [1]
> >
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> > [2] https://issues.apache.org/jira/browse/FLINK-35464
> > [3] https://issues.apache.org/jira/browse/FLINK-35149
> > [4] https://issues.apache.org/jira/browse/FLINK-35323
> > [5] https://issues.apache.org/jira/browse/FLINK-35430
> > [6] https://issues.apache.org/jira/browse/FLINK-35447
> >
> >
>


Re: [DISCUSS] Merge "flink run" and "flink run-application" in Flink 2.0

2024-05-30 Thread Hang Ruan
Hi, Ferenc.

+1 for this proposal. This FLIP will help to make the CLI clearer for users.

I think we should add an example to the FLIP showing how to use
application mode with the new CLI.
Besides that, we need to add some new tests for this change instead of
relying only on the existing tests.

Best,
Hang

Mate Czagany  wrote on Wed, May 29, 2024 at 19:57:

> Hi Ferenc,
>
> Thanks for the FLIP, +1 from me for the proposal. I think these changes
> would be a great solution to all the confusion that comes from these two
> action parameters.
>
> Best regards,
> Mate
>
> Ferenc Csaky  wrote on Tue, May 28, 2024
> at 16:13:
>
> > Thank you Xintong for your input.
> >
> > I prepared a FLIP for this change [1], looking forward for any
> > other opinions.
> >
> > Thanks,
> > Ferenc
> >
> > [1]
> >
> https://docs.google.com/document/d/1EX74rFp9bMKdfoGkz1ASOM6Ibw32rRxIadX72zs2zoY/edit?usp=sharing
> >
> >
> >
> > On Friday, 17 May 2024 at 07:04, Xintong Song 
> > wrote:
> >
> > >
> > >
> > > AFAIK, the main purpose of having `run-application` was to make sure
> > > the user is aware that application mode is used, which executes the
> main
> > > method of the user program in JM rather than in client. This was
> > important
> > > at the time application mode was first introduced, but maybe not that
> > > important anymore, given that per-job mode is deprecated and likely
> > removed
> > > in 2.0. Therefore, +1 for the proposal.
> > >
> > > Best,
> > >
> > > Xintong
> > >
> > >
> > >
> > > On Thu, May 16, 2024 at 11:35 PM Ferenc Csaky
> ferenc.cs...@pm.me.invalid
> > >
> > > wrote:
> > >
> > > > Hello devs,
> > > >
> > > > I have seen quite a few cases where customers were confused about run
> > > > and run-application in the Flink CLI, and I was wondering about the
> > > > necessity of deploying Application Mode (AM) jobs with a different
> > > > command than Session and Per-Job mode jobs.
> > > >
> > > > I can see a point that YarnDeploymentTarget [1] and
> > > > KubernetesDeploymentTarget
> > > > [2] are part of their own maven modules and not known in
> flink-clients,
> > > > so the
> > > > deployment mode validations are happening during cluster deployment
> in
> > > > their specific
> > > > ClusterDescriptor implementation [3]. Although these are
> implementation
> > > > details that
> > > > IMO should not define user-facing APIs.
> > > >
> > > > The command line setup is the same for both run and run-application,
> > > > so I think there is a quite simple way to achieve a unified flink run
> > > > experience, but I might have missed something, so I would appreciate
> > > > any input on this topic.
> > > >
> > > > Based on my assumptions I think it would be possible to deprecate the
> > run-
> > > > application in Flink 1.20 and remove it completely in Flink 2.0. I
> > > > already put together a
> > > > PoC [4], and I was able to deploy AM jobs like this:
> > > >
> > > > flink run --target kubernetes-application ...
> > > >
> > > > If others also agree with this, I would be happy to open a FLIP.
> WDYT?
> > > >
> > > > Thanks,
> > > > Ferenc
> > > >
> > > > [1]
> > > >
> >
> https://github.com/apache/flink/blob/master/flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnDeploymentTarget.java
> > > > [2]
> > > >
> >
> https://github.com/apache/flink/blob/master/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/configuration/KubernetesDeploymentTarget.java
> > > > [3]
> > > >
> >
> https://github.com/apache/flink/blob/48e5a39c9558083afa7589d2d8b054b625f61ee9/flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesClusterDescriptor.java#L206
> > > > [4]
> > > >
> >
> https://github.com/ferenc-csaky/flink/commit/40b3e1b998c7a4273eaaff71d9162c9f1ee039c0
> >
>
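To make the user experience proposed in the thread above concrete, here is a small sketch; the target names are Flink's existing `--target` values, while the option and jar placeholders are illustrative assumptions, not taken from the PoC:

```shell
# Under the proposal, "flink run" alone covers every deployment mode and
# the mode is selected only by --target; session, YARN application and
# Kubernetes application mode all share one command shape:
for target in remote yarn-application kubernetes-application; do
  echo "flink run --target $target <options> <job-jar>"
done
```

This mirrors the PoC invocation `flink run --target kubernetes-application ...` quoted in the thread.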


Re: [DISCUSS] Add Flink CDC Channel to Apache Flink Slack Workspace

2024-05-28 Thread Hang Ruan
Hi, zhongqiang.

Thanks for the proposal. +1 for it.

Best,
Hang

Leonard Xu  wrote on Tue, May 28, 2024 at 11:58:

>
> Thanks Zhongqiang for the proposal. We need the channel, and I should
> have created it already but have not yet. +1 from my side.
>
> Best,
> Leonard
>
> > On May 28, 2024, at 11:54 AM, gongzhongqiang  wrote:
> >
> > Hi devs,
> >
> > I would like to propose adding a dedicated Flink CDC channel to the
> Apache
> > Flink Slack workspace.
> >
> > Creating a channel focused on Flink CDC will help community members
> easily
> > find previous discussions
> > and target new discussions and questions to the correct place. Flink CDC
> is
> > a sufficiently distinct component
> > within the Apache Flink ecosystem, and having a dedicated channel will
> make
> > it viable and useful for
> > those specifically working with or interested in this technology.
> >
> > Looking forward to your feedback and support on this proposal.
> >
> >
> > Best,
> > Zhongqiang Gong
>
>


Re: [VOTE] Release flink-connector-opensearch v1.2.0, release candidate #1

2024-05-27 Thread Hang Ruan
+1 (non-binding)

- verified signatures
- verified hashsums
- built from source code with JDK 1.8 succeeded
- checked release notes
- reviewed the web PR
- checked that the jar is built with JDK 1.8

Best,
Hang

Leonard Xu  wrote on Wed, May 22, 2024 at 21:07:

> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - built from source code with JDK 1.8 succeeded
> - checked Github release tag
> - checked release notes
> - reviewed the web PR
>
> Best,
> Leonard
>
> > On May 16, 2024, at 6:58 AM, Andrey Redko  wrote:
> >
> > +1 (non-binding), thanks Sergey!
> >
> > On Wed, May 15, 2024, 5:56 p.m. Sergey Nuyanzin 
> wrote:
> >
> >> Hi everyone,
> >> Please review and vote on release candidate #1 for
> >> flink-connector-opensearch v1.2.0, as follows:
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific comments)
> >>
> >>
> >> The complete staging area is available for your review, which includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release to be deployed to dist.apache.org
> >> [2],
> >> which are signed with the key with fingerprint
> >> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag v1.2.0-rc1 [5],
> >> * website pull request listing the new release [6].
> >> * CI build of the tag [7].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by majority
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Note that this release is for Opensearch v1.x
> >>
> >> Thanks,
> >> Release Manager
> >>
> >> [1] https://issues.apache.org/jira/projects/FLINK/versions/12353812
> >> [2]
> >>
> >>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-1.2.0-rc1
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1734
> >> [5]
> >>
> >>
> https://github.com/apache/flink-connector-opensearch/releases/tag/v1.2.0-rc1
> >> [6] https://github.com/apache/flink-web/pull/740
> >> [7]
> >>
> >>
> https://github.com/apache/flink-connector-opensearch/actions/runs/9102334125
> >>
>
>


Re: [VOTE] Release flink-connector-opensearch v2.0.0, release candidate #1

2024-05-27 Thread Hang Ruan
+1 (non-binding)

- verified signatures
- verified hashsums
- built from source code with JDK 11 succeed
- checked release notes
- reviewed the web PR

Best,
Hang

Leonard Xu  wrote on Wed, May 22, 2024 at 21:02:

>
> > +1 (binding)
> >
> > - verified signatures
> > - verified hashsums
> > - built from source code with JDK 1.8 succeeded
> > - checked Github release tag
> > - checked release notes
> > - reviewed the web PR
>
> To supply more information about building from source code with JDK 1.8:
>
> > - built from source code with JDK 1.8 succeeded
> This is correct, as we don’t activate the opensearch2 profile by default.
>
> - built from source code with JDK 1.8 and -Popensearch2 failed
> - built from source code with JDK 11 and -Popensearch2 succeeded
>
> Best,
> Leonard
>
>
> >
> >
> >> On May 16, 2024, at 6:58 AM, Andrey Redko  wrote:
> >>
> >> +1 (non-binding), thanks Sergey!
> >>
> >> On Wed, May 15, 2024, 6:00 p.m. Sergey Nuyanzin 
> wrote:
> >>
> >>> Hi everyone,
> >>> Please review and vote on release candidate #1 for
> >>> flink-connector-opensearch v2.0.0, as follows:
> >>> [ ] +1, Approve the release
> >>> [ ] -1, Do not approve the release (please provide specific comments)
> >>>
> >>>
> >>> The complete staging area is available for your review, which includes:
> >>> * JIRA release notes [1],
> >>> * the official Apache source release to be deployed to dist.apache.org
> >>> [2],
> >>> which are signed with the key with fingerprint
> >>> F7529FAE24811A5C0DF3CA741596BBF0726835D8 [3],
> >>> * all artifacts to be deployed to the Maven Central Repository [4],
> >>> * source code tag v2.0.0-rc1 [5],
> >>> * website pull request listing the new release [6].
> >>> * CI build of the tag [7].
> >>>
> >>> The vote will be open for at least 72 hours. It is adopted by majority
> >>> approval, with at least 3 PMC affirmative votes.
> >>>
> >>> Note that this release is for Opensearch v2.x
> >>>
> >>> Thanks,
> >>> Release Manager
> >>>
> >>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12354674
> >>> [2]
> >>>
> >>>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-opensearch-2.0.0-rc1
> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>> [4]
> >>>
> https://repository.apache.org/content/repositories/orgapacheflink-1735/
> >>> [5]
> >>>
> >>>
> https://github.com/apache/flink-connector-opensearch/releases/tag/v2.0.0-rc1
> >>> [6] https://github.com/apache/flink-web/pull/741
> >>> [7]
> >>>
> >>>
> https://github.com/apache/flink-connector-opensearch/actions/runs/9102980808
> >>>
> >
>
>


Re: [VOTE] FLIP-457: Improve Table/SQL Configuration for Flink 2.0

2024-05-27 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

gongzhongqiang  wrote on Mon, May 27, 2024 at 14:16:

> +1 (non-binding)
>
> Best,
> Zhongqiang Gong
>
> Jane Chan  wrote on Fri, May 24, 2024 at 09:52:
>
> > Hi all,
> >
> > I'd like to start a vote on FLIP-457[1] after reaching a consensus
> through
> > the discussion thread[2].
> >
> > The vote will be open for at least 72 hours unless there is an objection
> or
> > insufficient votes.
> >
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=307136992
> > [2] https://lists.apache.org/thread/1sthbv6q00sq52pp04n2p26d70w4fqj1
> >
> > Best,
> > Jane
> >
>


Re: [DISCUSS] Flink CDC Upgrade Debezium version to 2.x

2024-05-24 Thread Hang Ruan
Hi, zhongqiang.

Thanks for this discussion.

IMO, I agree with updating the references in the Flink CDC docs from Debezium
1.9 to 2.0, as the Debezium docs for 1.9 have been taken down.

Flink sources in Flink CDC still need to support JDK 1.8, so upgrading
Debezium to version 2.x will be a hard job.
Have you tried using Debezium 2.0 together with JDK 1.8? Do any errors occur?

Best,
Hang
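One empirical way to approach the JDK question above is to inspect the class-file major version of the jars involved (major 52 = JDK 8, major 55 = JDK 11; Debezium 2.x targets JDK 11, so its classes cannot be loaded on a 1.8 runtime). A self-contained sketch follows; the hand-built 8-byte header stands in for a class you would extract from a real Debezium jar with `unzip -p`:

```shell
# Java class files start with the magic CAFEBABE, then a 2-byte minor and
# a 2-byte major version. Build a minimal header with major version 55
# (octal \067) and read the major version back with od/awk:
printf '\312\376\272\276\000\000\000\067' > Sample.class
MAJOR=$(od -An -j6 -N2 -t u1 Sample.class | awk '{print $1 * 256 + $2}')
echo "$MAJOR"   # prints 55, i.e. JDK 11 bytecode
```

Running the same od/awk pipeline on classes pulled from the actual dependency jars would show directly which runtime they require.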

Leonard Xu  wrote on Thu, May 23, 2024 at 14:06:

> Thanks zhongqiang for bringing this discussion.
>
> I also noticed you sent a mail to Debezium’s dev mailing list; it
> would help us a lot if they could maintain an LTS version for their 1.x
> series.
>
> I can accept the proposal to reference DBZ 2.0’s docs as a temporary
> solution in the current situation.
>
> As for upgrading the Debezium version and bumping the JDK version as
> well, we have to consider that Flink’s default JDK version is still 1.8.
> It’s a hard decision to make at this moment, but I agree we need to bump
> the DBZ and JDK versions eventually.
>
>
> Best,
> Leonard
>
>
> > On May 23, 2024, at 1:24 PM, gongzhongqiang  wrote:
> >
> > Hi all,
> >
> > I would like to start a discussion about upgrading Debezium to version
> 2.x.
> >
> > Background:
> > Currently, the Debezium community no longer maintains versions prior to
> > 2.0,
> > and the website has taken down the documentation for versions before 2.0.
> > However, Flink CDC depends on Debezium version 1.9, and the documentation
> > references links to that version.
> >
> >
> > Problem:
> > - References to Debezium's documentation links report errors [1]
> > - The Debezium community will no longer maintain versions prior to 2.0.
> > Flink CDC
> > synchronizes bug fixes from Debezium 2.0 by overwriting classes, but the
> > classes differ significantly between 2.x and 1.9.
> >
> >
> > Compatibility and Deprecation:
> > - Debezium uses JDK 11 starting from version 2.0 [2]
> >
> >
> > Plan:
> > - Migrate references in Flink CDC documentation from Debezium 1.9 to 2.0
> > - Upgrade Debezium to version 2.x
> >
> > [1]
> >
> https://github.com/apache/flink-cdc/actions/runs/9192497396/job/25281283926#step:4:1148
> > [2] https://debezium.io/releases/2.0/
> >
> > Best,
> > Zhongqiang Gong
>
>


Re: [VOTE] Release flink-connector-cassandra v3.2.0, release candidate #1

2024-05-21 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven and jdk8
- Verified web PR
- Checked that the jar is built with jdk8

Best,
Hang

Muhammet Orazov  wrote on Wed, May 22, 2024 at 04:15:

> Hey all,
>
> Could we please get some more votes to proceed with the release?
>
> Thanks and best,
> Muhammet
>
> On 2024-04-22 13:04, Danny Cranmer wrote:
> > Hi everyone,
> >
> > Please review and vote on release candidate #1 for
> > flink-connector-cassandra v3.2.0, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > This release supports Flink 1.18 and 1.19.
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > which are signed with the key with fingerprint 125FD8DB [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.2.0-rc1 [5],
> > * website pull request listing the new release [6].
> > * CI build of the tag [7].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Danny
> >
> > [1]
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353148
> > [2]
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-cassandra-3.2.0-rc1
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1722
> > [5]
> >
> https://github.com/apache/flink-connector-cassandra/releases/tag/v3.2.0-rc1
> > [6] https://github.com/apache/flink-web/pull/737
> > [7]
> >
> https://github.com/apache/flink-connector-cassandra/actions/runs/8784310241
>


Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Thread Hang Ruan
Congratulations!

Thanks for the great work.

Best,
Hang

Qingsheng Ren  wrote on Fri, May 17, 2024 at 17:33:

> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
>
> Please check out the release blog post for an overview of the release:
>
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
> Regards,
> Qingsheng Ren
>


Re: [VOTE] FLIP-453: Promote Unified Sink API V2 to Public and Deprecate SinkFunction

2024-05-17 Thread Hang Ruan
+1(non-binding)

Best,
Hang

Yuepeng Pan  wrote on Fri, May 17, 2024 at 16:15:

> +1(non-binding)
>
>
> Best,
> Yuepeng Pan
>
>
> At 2024-05-15 21:09:04, "Jing Ge"  wrote:
> >+1(binding) Thanks Martijn!
> >
> >Best regards,
> >Jing
> >
> >On Wed, May 15, 2024 at 7:00 PM Muhammet Orazov
> > wrote:
> >
> >> Thanks Martijn driving this! +1 (non-binding)
> >>
> >> Best,
> >> Muhammet
> >>
> >> On 2024-05-14 06:43, Martijn Visser wrote:
> >> > Hi everyone,
> >> >
> >> > With no more discussions being open in the thread [1] I would like to
> >> > start
> >> > a vote on FLIP-453: Promote Unified Sink API V2 to Public and
> Deprecate
> >> > SinkFunction [2]
> >> >
> >> > The vote will be open for at least 72 hours unless there is an
> >> > objection or
> >> > insufficient votes.
> >> >
> >> > Best regards,
> >> >
> >> > Martijn
> >> >
> >> > [1] https://lists.apache.org/thread/hod6bg421bzwhbfv60lwsck7r81dvo59
> >> > [2]
> >> >
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-453%3A+Promote+Unified+Sink+API+V2+to+Public+and+Deprecate+SinkFunction
> >>
>


Re: [VOTE] Apache Flink CDC Release 3.1.0, release candidate #3

2024-05-12 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Built the source with Maven and jdk8
- Verified web PR
- Checked that the jar is built with jdk8
- Checked synchronizing schemas and data from MySQL to StarRocks following
the quickstart

Best,
Hang

Qingsheng Ren  wrote on Sat, May 11, 2024 at 10:10:

> Hi everyone,
>
> Please review and vote on the release candidate #3 for the version 3.1.0 of
> Apache Flink CDC, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> **Release Overview**
>
> As an overview, the release consists of the following:
> a) Flink CDC source release to be deployed to dist.apache.org
> b) Maven artifacts to be deployed to the Maven Central Repository
>
> **Staging Areas to Review**
>
> The staging areas containing the above mentioned artifacts are as follows,
> for your review:
> * All artifacts for a) can be found in the corresponding dev repository at
> dist.apache.org [1], which are signed with the key with fingerprint
> A1BD477F79D036D2C30CA7DBCA8AEEC2F6EB040B [2]
> * All artifacts for b) can be found at the Apache Nexus Repository [3]
>
> Other links for your review:
> * JIRA release notes [4]
> * Source code tag "release-3.1.0-rc3" with commit hash
> 5452f30b704942d0ede64ff3d4c8699d39c63863 [5]
> * PR for release announcement blog post of Flink CDC 3.1.0 in flink-web [6]
>
> **Vote Duration**
>
> The voting time will run for at least 72 hours, adopted by majority
> approval with at least 3 PMC affirmative votes.
>
> Thanks,
> Qingsheng Ren
>
> [1] https://dist.apache.org/repos/dist/dev/flink/flink-cdc-3.1.0-rc3/
> [2] https://dist.apache.org/repos/dist/release/flink/KEYS
> [3] https://repository.apache.org/content/repositories/orgapacheflink-1733
> [4]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> [5] https://github.com/apache/flink-cdc/releases/tag/release-3.1.0-rc3
> [6] https://github.com/apache/flink-web/pull/739
>


Re: flink-connector-kafka weekly CI job failing

2024-05-10 Thread Hang Ruan
Hi, all.

I see there is already an issue [1] about this problem.
We could copy the new class `TypeSerializerConditions` into the Kafka
connector as was done in issue [2], which fixed the failure for 1.18-SNAPSHOT.

I would like to help with it.

Best,
Hang

[1] https://issues.apache.org/jira/browse/FLINK-35109
[2] https://issues.apache.org/jira/browse/FLINK-32455
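For reference, the flink-core test-jar and provided-scope setup discussed in this thread would look roughly like the following Maven fragment. This is a hedged sketch: the version property and placement are assumptions, not the connector's actual pom:

```xml
<!-- compile-time Flink classes, provided by the Flink runtime -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-core</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
<!-- test utilities such as the serializer upgrade test base -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-core</artifactId>
  <version>${flink.version}</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```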

Hang Ruan  wrote on Sat, May 11, 2024 at 09:44:

> Hi, all.
>
> The class `TypeSerializerMatchers` has been deleted in Flink version
> 1.20-SNAPSHOT.
> If we need to compile the Kafka connector against both 1.19 and 1.20, I
> think we have to copy `TypeSerializerMatchers` into the Kafka connector,
> but that is not a good idea.
> Besides this, I find that the flink-core test jar does not contain
> classes like `TypeSerializer`, so we have to add flink-core with the
> provided scope.
>
> I am not sure what is the best way to fix this.
>
> Best,
> Hang
>
> Danny Cranmer  于2024年5月11日周六 04:30写道:
>
>> Hello,
>>
>> Is there a reason we cannot fix the code rather than disabling the test?
>> If
>> we skip the tests this will likely be missed and cause delays for 1.20
>> support down the road.
>>
>> Thanks,
>> Danny
>>
>> On Wed, 8 May 2024, 23:35 Robert Young,  wrote:
>>
>> > Hi,
>> >
>> > I noticed the flink-connector-kafka weekly CI job is failing:
>> >
>> > https://github.com/apache/flink-connector-kafka/actions/runs/8954222477
>> >
>> > Looks like flink-connector-kafka main has a compile error against Flink
>> > 1.20-SNAPSHOT. I tried locally and got a different compile failure:
>> >
>> > KafkaSerializerUpgradeTest.java:[23,45] cannot find symbol
>> > [ERROR]   symbol:   class TypeSerializerMatchers
>> > [ERROR]   location: package org.apache.flink.api.common.typeutils
>> >
>> > Should 1.20-SNAPSHOT be removed from the weekly tests for now?
>> >
>> > Thanks
>> > Rob
>> >
>>
>


Re: flink-connector-kafka weekly CI job failing

2024-05-10 Thread Hang Ruan
Hi, all.

The class `TypeSerializerMatchers` has been deleted in Flink version
1.20-SNAPSHOT.
If we need to compile the Kafka connector against both 1.19 and 1.20, I think
we have to copy `TypeSerializerMatchers` into the Kafka connector, but that is
not a good idea.
Besides this, I find that the flink-core test jar does not contain classes
like `TypeSerializer`, so we have to add flink-core with the provided scope.

I am not sure what is the best way to fix this.

Best,
Hang

Danny Cranmer  于2024年5月11日周六 04:30写道:

> Hello,
>
> Is there a reason we cannot fix the code rather than disabling the test? If
> we skip the tests this will likely be missed and cause delays for 1.20
> support down the road.
>
> Thanks,
> Danny
>
> On Wed, 8 May 2024, 23:35 Robert Young,  wrote:
>
> > Hi,
> >
> > I noticed the flink-connector-kafka weekly CI job is failing:
> >
> > https://github.com/apache/flink-connector-kafka/actions/runs/8954222477
> >
> > Looks like flink-connector-kafka main has a compile error against Flink
> > 1.20-SNAPSHOT. I tried locally and got a different compile failure:
> >
> > KafkaSerializerUpgradeTest.java:[23,45] cannot find symbol
> > [ERROR]   symbol:   class TypeSerializerMatchers
> > [ERROR]   location: package org.apache.flink.api.common.typeutils
> >
> > Should 1.20-SNAPSHOT be removed from the weekly tests for now?
> >
> > Thanks
> > Rob
> >
>


Re: [DISCUSS] Flink CDC 3.2 Release Planning

2024-05-09 Thread Hang Ruan
Thanks Qingsheng for driving.

I would like to provide some help for this version too. +1.

Best,
Hang

Hongshun Wang  于2024年5月9日周四 14:16写道:

> Thanks Qingsheng for driving,
> +1 from my side.
>
> Best,
> Hongshun
>
> On Wed, May 8, 2024 at 11:41 PM Leonard Xu  wrote:
>
> > +1 for the proposal code freeze date and RM candidate.
> >
> > Best,
> > Leonard
> >
> > > 2024年5月8日 下午10:27,gongzhongqiang  写道:
> > >
> > > Hi Qingsheng
> > >
> > > Thank you for driving the release.
> > > Agree with the goal and I'm willing to help.
> > >
> > > Best,
> > > Zhongqiang Gong
> > >
> > > Qingsheng Ren  于2024年5月8日周三 14:22写道:
> > >
> > >> Hi devs,
> > >>
> > >> As we are in the midst of the release voting process for Flink CDC
> > 3.1.0, I
> > >> think it's a good time to kick off the upcoming Flink CDC 3.2 release
> > >> cycle.
> > >>
> > >> In this release cycle I would like to focus on the stability of Flink
> > CDC,
> > >> especially for the newly introduced YAML-based data integration
> > >> framework. To ensure we can iterate and improve swiftly, I propose to
> > make
> > >> 3.2 a relatively short release cycle, targeting a feature freeze by
> May
> > 24,
> > >> 2024.
> > >>
> > >> For developers that are interested in participating and contributing
> new
> > >> features in this release cycle, please feel free to list your planning
> > >> features in the wiki page [1].
> > >>
> > >> I'm happy to volunteer as a release manager and of course open to work
> > >> together with someone on this.
> > >>
> > >> What do you think?
> > >>
> > >> Best,
> > >> Qingsheng
> > >>
> > >> [1]
> > >>
> https://cwiki.apache.org/confluence/display/FLINK/Flink+CDC+3.2+Release
> > >>
> >
> >
>


Re: [VOTE] Release flink-connector-kafka v3.2.0, release candidate #1

2024-04-28 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk8
- Verified web PR
- Check that the jar is built by jdk8
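The checklist above can be sketched as shell commands. This is a rough sketch: the file names are placeholders for the actual RC artifacts, and the signature and manifest checks are shown as comments since they need the real signed files.

```shell
# Checksum validation, demonstrated end-to-end on a stand-in artifact
# (for a real RC, download the .tgz and .sha512 from dist.apache.org):
echo "release artifact contents" > artifact-src.tgz
sha512sum artifact-src.tgz > artifact-src.tgz.sha512
sha512sum -c artifact-src.tgz.sha512

# Signature verification against the release KEYS file:
#   gpg --import KEYS
#   gpg --verify artifact-src.tgz.asc artifact-src.tgz

# Check that no compiled binaries slipped into the source archive:
tar -czf artifact.tgz artifact-src.tgz   # stand-in archive
tar -tzf artifact.tgz | grep -E '\.(jar|class)$' || echo "no binaries found"

# Check which JDK built a jar (manifest entry written by the Maven archiver):
#   unzip -p some.jar META-INF/MANIFEST.MF | grep -i 'Build-Jdk'
```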

Best,
Hang

Ahmed Hamdy  于2024年4月24日周三 17:21写道:

> Thanks Danny,
> +1 (non-binding)
>
> - Verified Checksums and hashes
> - Verified Signatures
> - Reviewed web PR
> - github tag exists
> - Build source
>
>
> Best Regards
> Ahmed Hamdy
>
>
> On Tue, 23 Apr 2024 at 03:47, Muhammet Orazov
> 
> wrote:
>
> > Thanks Danny, +1 (non-binding)
> >
> > - Checked 512 hash
> > - Checked gpg signature
> > - Reviewed pr
> > - Built the source with JDK 11 & 8
> >
> > Best,
> > Muhammet
> >
> > On 2024-04-22 13:55, Danny Cranmer wrote:
> > > Hi everyone,
> > >
> > > Please review and vote on release candidate #1 for
> > > flink-connector-kafka
> > > v3.2.0, as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > This release supports Flink 1.18 and 1.19.
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org
> > > [2],
> > > which are signed with the key with fingerprint 125FD8DB [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v3.2.0-rc1 [5],
> > > * website pull request listing the new release [6].
> > > * CI build of the tag [7].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Danny
> > >
> > > [1]
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354209
> > > [2]
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.2.0-rc1
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > > https://repository.apache.org/content/repositories/orgapacheflink-1723
> > > [5]
> > >
> https://github.com/apache/flink-connector-kafka/releases/tag/v3.2.0-rc1
> > > [6] https://github.com/apache/flink-web/pull/738
> > > [7] https://github.com/apache/flink-connector-kafka
> >
>


Re: [VOTE] Release flink-connector-gcp-pubsub v3.1.0, release candidate #1

2024-04-21 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk8
- Verified web PR
- Check that the jar is built by jdk8

Best,
Hang

Ahmed Hamdy  于2024年4月18日周四 20:01写道:

> Hi Danny,
> +1 (non-binding)
>
> -  verified hashes and checksums
> - verified signature
> - verified source contains no binaries
> - tag exists in github
> - reviewed web PR
>
> Best Regards
> Ahmed Hamdy
>
>
> On Thu, 18 Apr 2024 at 11:32, Danny Cranmer 
> wrote:
>
> > Hi everyone,
> >
> > Please review and vote on release candidate #1 for
> > flink-connector-gcp-pubsub v3.1.0, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > This release supports Flink 1.18 and 1.19.
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > which are signed with the key with fingerprint 125FD8DB [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.1.0-rc1 [5],
> > * website pull request listing the new release [6].
> > * CI build of the tag [7].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Danny
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353813
> > [2]
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-gcp-pubsub-3.1.0-rc1
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1720
> > [5]
> >
> >
> https://github.com/apache/flink-connector-gcp-pubsub/releases/tag/v3.1.0-rc1
> > [6] https://github.com/apache/flink-web/pull/736/files
> > [7]
> >
> >
> https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/8735952883
> >
>


Re: [VOTE] Release flink-connector-jdbc v3.2.0, release candidate #2

2024-04-21 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk8
- Verified web PR
- Check that the jar is built by jdk8

Best,
Hang

Ahmed Hamdy  于2024年4月18日周四 21:37写道:

> +1 (non-binding)
>
> - Verified Checksums and hashes
> - Verified Signatures
> - No binaries in source
> - Build source
> - Github tag exists
> - Reviewed Web PR
>
>
> Best Regards
> Ahmed Hamdy
>
>
> On Thu, 18 Apr 2024 at 11:22, Danny Cranmer 
> wrote:
>
> > Sorry for typos:
> >
> > > Please review and vote on the release candidate #1 for the version
> 3.2.0,
> > as follows:
> > Should be "release candidate #2"
> >
> > > * source code tag v3.2.0-rc1 [5],
> > Should be "source code tag v3.2.0-rc2"
> >
> > Thanks,
> > Danny
> >
> > On Thu, Apr 18, 2024 at 11:19 AM Danny Cranmer 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > Please review and vote on the release candidate #1 for the version
> 3.2.0,
> > > as follows:
> > > [ ] +1, Approve the release
> > > [ ] -1, Do not approve the release (please provide specific comments)
> > >
> > > This release supports Flink 1.18 and 1.19.
> > >
> > > The complete staging area is available for your review, which includes:
> > > * JIRA release notes [1],
> > > * the official Apache source release to be deployed to dist.apache.org
> > > [2], which are signed with the key with fingerprint 125FD8DB [3],
> > > * all artifacts to be deployed to the Maven Central Repository [4],
> > > * source code tag v3.2.0-rc1 [5],
> > > * website pull request listing the new release [6].
> > > * CI run of tag [7].
> > >
> > > The vote will be open for at least 72 hours. It is adopted by majority
> > > approval, with at least 3 PMC affirmative votes.
> > >
> > > Thanks,
> > > Danny
> > >
> > > [1]
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353143
> > > [2]
> > >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.2.0-rc2
> > > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > > [4]
> > >
> https://repository.apache.org/content/repositories/orgapacheflink-1718/
> > > [5]
> > https://github.com/apache/flink-connector-jdbc/releases/tag/v3.2.0-rc2
> > > [6] https://github.com/apache/flink-web/pull/734
> > > [7]
> > https://github.com/apache/flink-connector-jdbc/actions/runs/8736019099
> > >
> >
>


Re: [VOTE] Release flink-connector-mongodb v1.2.0, release candidate #2

2024-04-21 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk8
- Verified web PR
- Check that the jar is built by jdk8

Best,
Hang

Ahmed Hamdy  于2024年4月18日周四 21:40写道:

> +1 (non-binding)
>
> -  verified hashes and checksums
> - verified signature
> - verified source contains no binaries
> - tag exists in github
> - reviewed web PR
>
>
> Best Regards
> Ahmed Hamdy
>
>
> On Thu, 18 Apr 2024 at 11:21, Danny Cranmer 
> wrote:
>
> > Hi everyone,
> >
> > Please review and vote on the release candidate #2 for v1.2.0, as
> follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > This release supports Flink 1.18 and 1.19.
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > which are signed with the key with fingerprint 125FD8DB [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v1.2.0-rc2 [5],
> > * website pull request listing the new release [6].
> > * CI build of tag [7].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Danny
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354192
> > [2]
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.2.0-rc2
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> > https://repository.apache.org/content/repositories/orgapacheflink-1719/
> > [5]
> >
> https://github.com/apache/flink-connector-mongodb/releases/tag/v1.2.0-rc2
> > [6] https://github.com/apache/flink-web/pull/735
> > [7]
> >
> https://github.com/apache/flink-connector-mongodb/actions/runs/8735987710
> >
>


Re: [VOTE] Release flink-connector-aws v4.3.0, release candidate #2

2024-04-21 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk8
- Verified web PR
- Check that the jar is built by jdk8

Best,
Hang

Danny Cranmer  于2024年4月19日周五 18:08写道:

> Hi everyone,
>
> Please review and vote on release candidate #2 for flink-connector-aws
> v4.3.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This version supports Flink 1.18 and 1.19.
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint 125FD8DB [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v4.3.0-rc2 [5],
> * website pull request listing the new release [6].
> * CI build of the tag [7].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353793
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-aws-4.3.0-rc2
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1721/
> [5] https://github.com/apache/flink-connector-aws/releases/tag/v4.3.0-rc2
> [6] https://github.com/apache/flink-web/pull/733
> [7] https://github.com/apache/flink-connector-aws/actions/runs/8751694197
>


Re: [ANNOUNCE] New Apache Flink Committer - Zakelly Lan

2024-04-15 Thread Hang Ruan
Congratulations Zakelly!

Best,
Hang

Yuxin Tan  于2024年4月16日周二 11:04写道:

> Congratulations, Zakelly!
>
> Best,
> Yuxin
>
>
> Xuannan Su  于2024年4月16日周二 10:30写道:
>
> > Congratulations Zakelly!
> >
> > Best regards,
> > Xuannan
> >
> > On Mon, Apr 15, 2024 at 4:31 PM Jing Ge 
> > wrote:
> > >
> > > Congratulations Zakelly!
> > >
> > > Best regards,
> > > Jing
> > >
> > > On Mon, Apr 15, 2024 at 4:26 PM Xia Sun  wrote:
> > >
> > > > Congratulations Zakelly!
> > > >
> > > >  Best,
> > > >  Xia
> > > >
> > > > Leonard Xu  于2024年4月15日周一 16:16写道:
> > > >
> > > > > Congratulations Zakelly!
> > > > >
> > > > >
> > > > > Best,
> > > > > Leonard
> > > > > > 2024年4月15日 下午3:56,Samrat Deb  写道:
> > > > > >
> > > > > > Congratulations Zakelly!
> > > > >
> > > > >
> > > >
> >
>


Re: [ANNOUNCE] New Apache Flink PMC Member - Jing Ge

2024-04-15 Thread Hang Ruan
Congratulations, Jing!

Best,
Hang

Yuxin Tan  于2024年4月16日周二 11:07写道:

> Congratulations, Jing!
>
> Best,
> Yuxin
>
>
> Danny Cranmer  于2024年4月15日周一 20:26写道:
>
> > Congrats Jing!
> >
> > Best Regards,
> > Danny
> >
> > On Mon, Apr 15, 2024 at 11:51 AM Swapnal Varma 
> > wrote:
> >
> > > Congratulations, Jing!
> > >
> > > Best,
> > > Swapnal
> > >
> > > On Mon, 15 Apr 2024, 15:14 Jacky Lau,  wrote:
> > >
> > > > Congratulations, Jing!
> > > >
> > > > Best,
> > > > Jacky Lau
> > > >
> > >
> >
>


Re: [ANNOUNCE] New Apache Flink PMC Member - Lincoln Lee

2024-04-15 Thread Hang Ruan
Congratulations, Lincoln!

Best,
Hang

yh z  于2024年4月16日周二 09:14写道:

> Congratulations, Lincoln!
>
> Best,
> Yunhong (Swuferhong)
>
>
> Swapnal Varma  于2024年4月15日周一 18:50写道:
>
> > Congratulations, Lincoln!
> >
> > Best,
> > Swapnal
> >
> >
> > On Mon, 15 Apr 2024, 15:16 Jacky Lau,  wrote:
> >
> > > Congratulations, Lincoln!
> > >
> > > Best,
> > > Jacky Lau
> > >
> > > Jinzhong Li  于2024年4月15日周一 15:45写道:
> > >
> > > > Congratulations, Lincoln!
> > > >
> > > > Best,
> > > > Jinzhong Li
> > > >
> > > > On Mon, Apr 15, 2024 at 2:56 PM Hangxiang Yu 
> > > wrote:
> > > >
> > > > > Congratulations, Lincoln!
> > > > >
> > > > > On Mon, Apr 15, 2024 at 10:17 AM Zakelly Lan <
> zakelly@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Congratulations, Lincoln!
> > > > > >
> > > > > >
> > > > > > Best,
> > > > > > Zakelly
> > > > > >
> > > > > > On Sat, Apr 13, 2024 at 12:48 AM Ferenc Csaky
> > > >  > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Congratulations, Lincoln!
> > > > > > >
> > > > > > > Best,
> > > > > > > Ferenc
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Friday, April 12th, 2024 at 15:54,
> > > lorenzo.affe...@ververica.com
> > > > > > .INVALID
> > > > > > >  wrote:
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Huge congrats! Well done!
> > > > > > > > On Apr 12, 2024 at 13:56 +0200, Ron liu ron9@gmail.com,
> > > wrote:
> > > > > > > >
> > > > > > > > > Congratulations, Lincoln!
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Ron
> > > > > > > > >
> > > > > > > > > Junrui Lee jrlee@gmail.com 于2024年4月12日周五 18:54写道:
> > > > > > > > >
> > > > > > > > > > Congratulations, Lincoln!
> > > > > > > > > >
> > > > > > > > > > Best,
> > > > > > > > > > Junrui
> > > > > > > > > >
> > > > > > > > > > Aleksandr Pilipenko z3d...@gmail.com 于2024年4月12日周五
> > 18:29写道:
> > > > > > > > > >
> > > > > > > > > > > > Congratulations, Lincoln!
> > > > > > > > > > > >
> > > > > > > > > > > > Best Regards
> > > > > > > > > > > > Aleksandr
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Best,
> > > > > Hangxiang.
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] FLIP-399: Flink Connector Doris

2024-04-12 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

Martijn Visser  于2024年4月12日周五 05:39写道:

> +1 (binding)
>
> On Wed, Apr 10, 2024 at 4:34 AM Jing Ge 
> wrote:
>
> > +1(binding)
> >
> > Best regards,
> > Jing
> >
> > On Tue, Apr 9, 2024 at 8:54 PM Feng Jin  wrote:
> >
> > > +1 (non-binding)
> > >
> > > Best,
> > > Feng
> > >
> > > On Tue, Apr 9, 2024 at 5:56 PM gongzhongqiang <
> gongzhongqi...@apache.org
> > >
> > > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > Best,
> > > >
> > > > Zhongqiang Gong
> > > >
> > > > wudi <676366...@qq.com.invalid> 于2024年4月9日周二 10:48写道:
> > > >
> > > > > Hi devs,
> > > > >
> > > > > I would like to start a vote about FLIP-399 [1]. The FLIP is about
> > > > > contributing the Flink Doris Connector[2] to the Flink community.
> > > > > Discussion thread [3].
> > > > >
> > > > > The vote will be open for at least 72 hours unless there is an
> > > objection
> > > > or
> > > > > insufficient votes.
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Di.Wu
> > > > >
> > > > >
> > > > > [1]
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-399%3A+Flink+Connector+Doris
> > > > > [2] https://github.com/apache/doris-flink-connector
> > > > > [3]
> https://lists.apache.org/thread/p3z4wsw3ftdyfs9p2wd7bbr2gfyl3xnh
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: Re: [ANNOUNCE] Apache Paimon is graduated to Top Level Project

2024-03-31 Thread Hang Ruan
Congratulations!

Best,
Hang

Lincoln Lee  于2024年3月31日周日 00:10写道:

> Congratulations!
>
> Best,
> Lincoln Lee
>
>
> Jark Wu  于2024年3月30日周六 22:13写道:
>
> > Congratulations!
> >
> > Best,
> > Jark
> >
> > On Fri, 29 Mar 2024 at 12:08, Yun Tang  wrote:
> >
> > > Congratulations to all Paimon guys!
> > >
> > > Glad to see a Flink sub-project has been graduated to an Apache
> top-level
> > > project.
> > >
> > > Best
> > > Yun Tang
> > >
> > > 
> > > From: Hangxiang Yu 
> > > Sent: Friday, March 29, 2024 10:32
> > > To: dev@flink.apache.org 
> > > Subject: Re: Re: [ANNOUNCE] Apache Paimon is graduated to Top Level
> > Project
> > >
> > > Congratulations!
> > >
> > > On Fri, Mar 29, 2024 at 10:27 AM Benchao Li 
> > wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Zakelly Lan  于2024年3月29日周五 10:25写道:
> > > > >
> > > > > Congratulations!
> > > > >
> > > > >
> > > > > Best,
> > > > > Zakelly
> > > > >
> > > > > On Thu, Mar 28, 2024 at 10:13 PM Jing Ge
>  > >
> > > > wrote:
> > > > >
> > > > > > Congrats!
> > > > > >
> > > > > > Best regards,
> > > > > > Jing
> > > > > >
> > > > > > On Thu, Mar 28, 2024 at 1:27 PM Feifan Wang 
> > > > wrote:
> > > > > >
> > > > > > > Congratulations!
> > > > > > >
> > > > > > > Best regards,
> > > > > > >
> > > > > > > Feifan Wang
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > At 2024-03-28 20:02:43, "Yanfei Lei" 
> > wrote:
> > > > > > > >Congratulations!
> > > > > > > >
> > > > > > > >Best,
> > > > > > > >Yanfei
> > > > > > > >
> > > > > > > >Zhanghao Chen  于2024年3月28日周四
> > 19:59写道:
> > > > > > > >>
> > > > > > > >> Congratulations!
> > > > > > > >>
> > > > > > > >> Best,
> > > > > > > >> Zhanghao Chen
> > > > > > > >> 
> > > > > > > >> From: Yu Li 
> > > > > > > >> Sent: Thursday, March 28, 2024 15:55
> > > > > > > >> To: d...@paimon.apache.org 
> > > > > > > >> Cc: dev ; user  >
> > > > > > > >> Subject: Re: [ANNOUNCE] Apache Paimon is graduated to Top
> > Level
> > > > > > Project
> > > > > > > >>
> > > > > > > >> CC the Flink user and dev mailing list.
> > > > > > > >>
> > > > > > > >> Paimon originated within the Flink community, initially
> known
> > as
> > > > Flink
> > > > > > > >> Table Store, and all our incubating mentors are members of
> the
> > > > Flink
> > > > > > > >> Project Management Committee. I am confident that the bonds
> of
> > > > > > > >> enduring friendship and close collaboration will continue to
> > > > unite the
> > > > > > > >> two communities.
> > > > > > > >>
> > > > > > > >> And congratulations all!
> > > > > > > >>
> > > > > > > >> Best Regards,
> > > > > > > >> Yu
> > > > > > > >>
> > > > > > > >> On Wed, 27 Mar 2024 at 20:35, Guojun Li <
> > > gjli.schna...@gmail.com>
> > > > > > > wrote:
> > > > > > > >> >
> > > > > > > >> > Congratulations!
> > > > > > > >> >
> > > > > > > >> > Best,
> > > > > > > >> > Guojun
> > > > > > > >> >
> > > > > > > >> > On Wed, Mar 27, 2024 at 5:24 PM wulin <
> ouyangwu...@163.com>
> > > > wrote:
> > > > > > > >> >
> > > > > > > >> > > Congratulations~
> > > > > > > >> > >
> > > > > > > >> > > > 2024年3月27日 15:54,王刚 
> > 写道:
> > > > > > > >> > > >
> > > > > > > >> > > > Congratulations~
> > > > > > > >> > > >
> > > > > > > >> > > >> 2024年3月26日 10:25,Jingsong Li  >
> > > 写道:
> > > > > > > >> > > >>
> > > > > > > >> > > >> Hi Paimon community,
> > > > > > > >> > > >>
> > > > > > > >> > > >> I’m glad to announce that the ASF board has approved
> a
> > > > > > > resolution to
> > > > > > > >> > > >> graduate Paimon into a full Top Level Project. Thanks
> > to
> > > > > > > everyone for
> > > > > > > >> > > >> your help to get to this point.
> > > > > > > >> > > >>
> > > > > > > >> > > >> I just created an issue to track the things we need
> to
> > > > modify
> > > > > > > [2],
> > > > > > > >> > > >> please comment on it if you feel that something is
> > > > missing. You
> > > > > > > can
> > > > > > > >> > > >> refer to apache documentation [1] too.
> > > > > > > >> > > >>
> > > > > > > >> > > >> And, we already completed the GitHub repo migration
> > [3],
> > > > please
> > > > > > > update
> > > > > > > >> > > >> your local git repo to track the new repo [4].
> > > > > > > >> > > >>
> > > > > > > >> > > >> You can run the following command to complete the
> > remote
> > > > repo
> > > > > > > tracking
> > > > > > > >> > > >> migration.
> > > > > > > >> > > >>
> > > > > > > >> > > >> git remote set-url origin
> > > > https://github.com/apache/paimon.git
> > > > > > > >> > > >>
> > > > > > > >> > > >> If you have a different name, please change the
> > 'origin'
> > > to
> > > > > > your
> > > > > > > remote
> > > > > > > >> > > name.
> > > > > > > >> > > >>
> > > > > > > >> > > >> Please join me in celebrating!
> > > > > > > >> > > >>
> > > > > > > >> > > >> [1]
> > > > > > > >> > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
> https://incubator.apache.org/guides/transferring.html#life_after_graduation
> > > > > > > 

Re: [DISCUSS] Flink Website Menu Adjustment

2024-03-26 Thread Hang Ruan
+1 for the proposal.

Best,
Hang

Hangxiang Yu  于2024年3月26日周二 13:40写道:

> Thanks Zhongqiang for driving this.
> +1 for the proposal.
>
> On Tue, Mar 26, 2024 at 1:36 PM Shawn Huang  wrote:
>
> > +1 for the proposal
> >
> > Best,
> > Shawn Huang
> >
> >
> > Hongshun Wang  于2024年3月26日周二 11:56写道:
> >
> > > +1 for the proposal
> > >
> > > Best Regards,
> > > Hongshun Wang
> > >
> > > On Tue, Mar 26, 2024 at 11:37 AM gongzhongqiang <
> > gongzhongqi...@apache.org
> > > >
> > > wrote:
> > >
> > > > Hi Martijn,
> > > >
> > > > Thank you for your feedback.
> > > >
> > > > I agree with your point that we should make a one-time update to the
> > > menu,
> > > > rather than continuously updating it. This will be done unless some
> > > > sub-projects are moved or archived.
> > > >
> > > > Best regards,
> > > >
> > > > Zhongqiang Gong
> > > >
> > > >
> > > > Martijn Visser  于2024年3月25日周一 23:35写道:
> > > >
> > > > > Hi Zhongqiang Gong,
> > > > >
> > > > > Are you suggesting to continuously update the menu based on the
> > number
> > > of
> > > > > releases, or just this one time? I wouldn't be in favor of
> > continuously
> > > > > updating: returning customers expect a certain order in the menu,
> > and I
> > > > > don't see a lot of value in continuously changing that. I do think
> > that
> > > > the
> > > > > order that you have currently proposed is better than the one we
> have
> > > > right
> > > > > now, so I would +1 a one-time update but not a continuously
> updating
> > > > order.
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Martijn
> > > > >
> > > > > On Mon, Mar 25, 2024 at 4:15 PM Yanquan Lv 
> > > wrote:
> > > > >
> > > > > > +1 for this proposal.
> > > > > >
> > > > > > gongzhongqiang  于2024年3月25日周一
> 15:49写道:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > I'd like to start a discussion on adjusting the Flink website
> [1]
> > > > menu
> > > > > to
> > > > > > > improve accuracy and usability. While migrating the Flink CDC
> > > > > > > documentation to the website, I found outdated links, so we need
> > > > > > > to review and update the menus to surface the most relevant
> > > > > > > information for our users.
> > > > > > >
> > > > > > >
> > > > > > > Proposal:
> > > > > > >
> > > > > > > - Remove Paimon [2] from the "Getting Started" and
> > "Documentation"
> > > > > menus:
> > > > > > > Paimon [2] is now an independent top-level project of the ASF.
> > > > > > > CC: Jingsong Lee
> > > > lees
> > > > > > >
> > > > > > > - Sort the projects in the subdirectory by the activity of the
> > > > > projects.
> > > > > > > Here I list the number of releases for each project in the past
> > > year.
> > > > > > >
> > > > > > > Flink Kubernetes Operator : 7
> > > > > > > Flink CDC : 5
> > > > > > > Flink ML  : 2
> > > > > > > Flink Stateful Functions : 1
> > > > > > >
> > > > > > >
> > > > > > > Expected Outcome :
> > > > > > >
> > > > > > > - Menu "Getting Started"
> > > > > > >
> > > > > > > Before:
> > > > > > >
> > > > > > > With Flink
> > > > > > >
> > > > > > > With Flink Stateful Functions
> > > > > > >
> > > > > > > With Flink ML
> > > > > > >
> > > > > > > With Flink Kubernetes Operator
> > > > > > >
> > > > > > > With Paimon(incubating) (formerly Flink Table Store)
> > > > > > >
> > > > > > > With Flink CDC
> > > > > > >
> > > > > > > Training Course
> > > > > > >
> > > > > > >
> > > > > > > After:
> > > > > > >
> > > > > > > With Flink
> > > > > > > With Flink Kubernetes Operator
> > > > > > >
> > > > > > > With Flink CDC
> > > > > > >
> > > > > > > With Flink ML
> > > > > > >
> > > > > > > With Flink Stateful Functions
> > > > > > >
> > > > > > > Training Course
> > > > > > >
> > > > > > >
> > > > > > > - Menu "Documentation" will same with "Getting Started"
> > > > > > >
> > > > > > >
> > > > > > > I look forward to hearing your thoughts and suggestions on this
> > > > > proposal.
> > > > > > >
> > > > > > > [1] https://flink.apache.org/
> > > > > > > [2] https://github.com/apache/incubator-paimon
> > > > > > > [3] https://github.com/apache/flink-statefun
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Best regards,
> > > > > > >
> > > > > > > Zhongqiang Gong
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> Best,
> Hangxiang.
>


Re: [VOTE] FLIP-439: Externalize Kudu Connector from Bahir

2024-03-21 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

Őrhidi Mátyás  于2024年3月21日周四 00:00写道:

> +1 (binding)
>
> On Wed, Mar 20, 2024 at 8:37 AM Gabor Somogyi 
> wrote:
>
> > +1 (binding)
> >
> > G
> >
> >
> > On Wed, Mar 20, 2024 at 3:59 PM Gyula Fóra  wrote:
> >
> > > +1 (binding)
> > >
> > > Thanks!
> > > Gyula
> > >
> > > On Wed, Mar 20, 2024 at 3:36 PM Mate Czagany 
> wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > Thank you,
> > > > Mate
> > > >
> > > > Ferenc Csaky  ezt írta (időpont: 2024.
> > márc.
> > > > 20., Sze, 15:11):
> > > >
> > > > > Hello devs,
> > > > >
> > > > > I would like to start a vote about FLIP-439 [1]. The FLIP is about
> to
> > > > > externalize the Kudu
> > > > > connector from the recently retired Apache Bahir project [2] to
> keep
> > it
> > > > > maintainable and
> > > > > make it up to date as well. Discussion thread [3].
> > > > >
> > > > > The vote will be open for at least 72 hours (until 2024 March 23
> > 14:03
> > > > > UTC) unless there
> > > > > are any objections or insufficient votes.
> > > > >
> > > > > Thanks,
> > > > > Ferenc
> > > > >
> > > > > [1]
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-439%3A+Externalize+Kudu+Connector+from+Bahir
> > > > > [2] https://attic.apache.org/projects/bahir.html
> > > > > [3]
> https://lists.apache.org/thread/oydhcfkco2kqp4hdd1glzy5vkw131rkz
> > > >
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 Thread Hang Ruan
Congratulations!

Best,
Hang

Lincoln Lee  于2024年3月21日周四 09:54写道:

>
> Congrats, thanks for the great work!
>
>
> Best,
> Lincoln Lee
>
>
> Peter Huang  于2024年3月20日周三 22:48写道:
>
>> Congratulations
>>
>>
>> Best Regards
>> Peter Huang
>>
>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>>
>>>
>>> Congratulations
>>>
>>>
>>>
>>> Best,
>>> Huajie Wang
>>>
>>>
>>>
>>> Leonard Xu  于2024年3月20日周三 21:36写道:
>>>
>>>> Hi devs and users,
>>>>
>>>> We are thrilled to announce that the donation of Flink CDC as a
>>>> sub-project of Apache Flink has completed. We invite you to explore the new
>>>> resources available:
>>>>
>>>> - GitHub Repository: https://github.com/apache/flink-cdc
>>>> - Flink CDC Documentation:
>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>>>
>>>> After the Flink community accepted this donation [1], we have completed
>>>> software copyright signing, code repo migration, code cleanup, website
>>>> migration, CI migration and GitHub issues migration etc.
>>>> Here I am particularly grateful to Hang Ruan, Zhongqaing Gong,
>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors
>>>> for their contributions and help during this process!
>>>>
>>>>
>>>> For all previous contributors: The contribution process has slightly
>>>> changed to align with the main Flink project. To report bugs or suggest new
>>>> features, please open tickets in
>>>> Apache Jira (https://issues.apache.org/jira). Note that we will no
>>>> longer accept GitHub issues for these purposes.
>>>>
>>>>
>>>> Welcome to explore the new repository and documentation. Your feedback
>>>> and contributions are invaluable as we continue to improve Flink CDC.
>>>>
>>>> Thanks everyone for your support and happy exploring Flink CDC!
>>>>
>>>> Best,
>>>> Leonard
>>>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>>>
>>>>


Re: Re: [DISCUSS] FLIP-436: Introduce "SHOW CREATE CATALOG" Syntax

2024-03-20 Thread Hang Ruan
Hi, Yubin.

Thanks for your update. LGTM.

Best,
Hang

Yubin Li  于2024年3月20日周三 11:56写道:

> Hi Hang,
>
> I have updated FLIP as you suggested, thanks for your valuable feedback!
>
> Best,
> Yubin
>
> On Wed, Mar 20, 2024 at 11:15 AM Hang Ruan  wrote:
> >
> > Hi, Yubin,
> >
> > I found a little mistake in FLIP.
> > `ALTER CATALOG catalog_name RESET (key1=val1, key2=val2, ...)` should be
> > changed as `ALTER CATALOG catalog_name RESET (key1, key2, ...)`, right?
> >
> > Best,
> > Hang
> >
> >
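To make the SET/RESET distinction above concrete, here is a minimal, self-contained Java sketch (the helper names are hypothetical and this is not the actual Flink implementation) showing how SET takes key/value pairs while RESET takes only keys:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CatalogOptions {
    // ALTER CATALOG catalog_name SET (key1=val1, key2=val2, ...):
    // upserts key/value pairs into the existing options.
    static Map<String, String> set(Map<String, String> options, Map<String, String> changes) {
        Map<String, String> result = new HashMap<>(options);
        result.putAll(changes);
        return result;
    }

    // ALTER CATALOG catalog_name RESET (key1, key2, ...):
    // takes keys only; the named options are removed entirely.
    static Map<String, String> reset(Map<String, String> options, List<String> keys) {
        Map<String, String> result = new HashMap<>(options);
        keys.forEach(result::remove);
        return result;
    }
}
```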
> > Lincoln Lee  于2024年3月20日周三 10:04写道:
> >
> > > Hi Yubin,
> > >
> > > Sorry, please ignore my last reply (wrong context).
> > > I also asked Leonard, your proposal to extend the `CatalogDescriptor`
> > > should be okay.
> > >
> > > Thank you for your update : ) !
> > >
> > >
> > > Best,
> > > Lincoln Lee
> > >
> > >
> > > Lincoln Lee  于2024年3月20日周三 09:35写道:
> > >
> > > > Hi Yubin,
> > > >
> > > > Thank you for detailed explaination! I overlooked
> `CatalogBaseTable`, in
> > > > fact
> > > >  there is already a `String getComment();` interface similar to
> > > `database`
> > > > and `table`.
> > > > Can we continue the work on FLINK-21665 and complete its
> implementation?
> > > > It seems to be very close.
> > > >
> > > > Best,
> > > > Lincoln Lee
> > > >
> > > >
> > > > Yubin Li  于2024年3月20日周三 01:42写道:
> > > >
> > > >> Hi Lincoln,
> > > >>
> > > >> Thanks for your detailed comments!
> > > >>
> > > >> Supporting comments for `Catalog` is a really helpful feature, I
> agree
> > > >> with you to make it introduced in this FLIP, thank you for pointing
> > > >> that out :)
> > > >>
> > > >> Concerning the implementation, I propose to introduce `getComment()`
> > > >> method in `CatalogDescriptor`, and the reasons are as follows. WDYT?
> > > >> 1. For the sake of design consistency, follow the design of FLIP-295
> > > >> [1] which introduced `CatalogStore` component, `CatalogDescriptor`
> > > >> includes names and attributes, both of which are used to describe
> the
> > > >> catalog, and `comment` can be added smoothly.
> > > >> 2. Extending the existing class rather than add new method to the
> > > >> existing interface, Especially, the `Catalog` interface, as a core
> > > >> interface, is used by a series of important components such as
> > > >> `CatalogFactory`, `CatalogManager` and `FactoryUtil`, and is
> > > >> implemented by a large number of connectors such as JDBC, Paimon,
> and
> > > >> Hive. Adding methods to it will greatly increase the implementation
> > > >> complexity, and more importantly, increase the cost of iteration,
> > > >> maintenance, and verification.
> > > >>
> > > >> Please see FLIP doc [2] for details.
> > > >>
> > > >> [1]
> > > >>
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-295%3A+Support+lazy+initialization+of+catalogs+and+persistence+of+catalog+configurations
> > > >> [2]
> > > >>
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax
> > > >>
> > > >> Best,
> > > >> Yubin
> > > >>
> > > >> On Tue, Mar 19, 2024 at 9:57 PM Lincoln Lee  >
> > > >> wrote:
> > > >> >
> > > >> > Hi Yubin,
> > > >> >
> > > >> > Thanks for your quickly response!
> > > >> >
> > > >> > It would be better to support comments just like create
> `database` and
> > > >> > `table` with comment.
> > > >> > That is, add `String getComment();` to the current `Catalog`
> > > interface.
> > > >> > WDYT?
> > > >> >
> > > >> > Best,
> > > >> > Lincoln Lee
> > > >> >
> > > >> >
> > > >> > Yubin Li  于2024年3月19日周二 21:44写道:
> > > >> >
> > > >> > > Hi Lincoln,
> > > >> > >
> > > >> > > Good catch. Thanks for your suggestions.
>

Re: [VOTE] FLIP-436: Introduce Catalog-related Syntax

2024-03-19 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

Jane Chan  于2024年3月19日周二 22:02写道:

> +1 (binding)
>
> Best,
> Jane
>
> On Tue, Mar 19, 2024 at 9:30 PM Leonard Xu  wrote:
>
> > +1(binding)
> >
> >
> > Best,
> > Leonard
> > > 2024年3月19日 下午9:03,Lincoln Lee  写道:
> > >
> > > +1 (binding)
> > >
> > > Best,
> > > Lincoln Lee
> > >
> > >
> > > Feng Jin  于2024年3月19日周二 19:59写道:
> > >
> > >> +1 (non-binding)
> > >>
> > >> Best,
> > >> Feng
> > >>
> > >> On Tue, Mar 19, 2024 at 7:46 PM Ferenc Csaky
>  > >
> > >> wrote:
> > >>
> > >>> +1 (non-binding).
> > >>>
> > >>> Best,
> > >>> Ferenc
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On Tuesday, March 19th, 2024 at 12:39, Jark Wu 
> > wrote:
> > >>>
> > 
> > 
> >  +1 (binding)
> > 
> >  Best,
> >  Jark
> > 
> >  On Tue, 19 Mar 2024 at 19:05, Yuepeng Pan panyuep...@apache.org
> > wrote:
> > 
> > > Hi, Yubin
> > >
> > > Thanks for driving it !
> > >
> > > +1 non-binding.
> > >
> > > Best,
> > > Yuepeng Pan.
> > >
> > > At 2024-03-19 17:56:42, "Yubin Li" lyb5...@gmail.com wrote:
> > >
> > >> Hi everyone,
> > >>
> > >> Thanks for all the feedback, I'd like to start a vote on the
> > >>> FLIP-436:
> > >> Introduce Catalog-related Syntax [1]. The discussion thread is
> here
> > >> [2].
> > >>
> > >> The vote will be open for at least 72 hours unless there is an
> > >> objection or insufficient votes.
> > >>
> > >> [1]
> > >>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax
> > >> [2]
> > >> https://lists.apache.org/thread/10k1bjb4sngyjwhmfqfky28lyoo7sv0z
> > >>
> > >> Best regards,
> > >> Yubin
> > >>>
> > >>
> >
> >
>


Re: Re: [DISCUSS] FLIP-436: Introduce "SHOW CREATE CATALOG" Syntax

2024-03-19 Thread Hang Ruan
n
> >> > > > > > > > >
> >> > > > > > > > > [1]
> >> > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > >
> >> > > > > > >
> >> > > > >
> >> > >
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-436%3A+Introduce+Catalog-related+Syntax
> >> > > > > > > > >
> >> > > > > > > > > On Fri, Mar 15, 2024 at 10:12 AM Xuyang <
> >> xyzhong...@163.com>
> >> > > > > wrote:
> >> > > > > > > > >
> >> > > > > > > > > > Hi, Yubin. Big +1 for this Flip. I just left one minor
> >> > > comment
> >> > > > > > > > following.
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > > I found that although flink has not supported syntax
> >> > > 'DESCRIBE
> >> > > > > > > CATALOG
> >> > > > > > > > > > catalog_name' currently, it was already
> >> > > > > > > > > > discussed in flip-69[1], do we need to restart
> >> discussing it?
> >> > > > > > > > > > I don't have a particular preference regarding the
> >> restart
> >> > > > > > > discussion.
> >> > > > > > > > It
> >> > > > > > > > > > seems that there is no difference on this syntax
> >> > > > > > > > > > in FLIP-436, so maybe it would be best to refer back
> to
> >> > > FLIP-69
> >> > > > > in
> >> > > > > > > this
> >> > > > > > > > > > FLIP. WDYT?
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > > [1]
> >> > > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > >
> >> > > > > > >
> >> > > > >
> >> > >
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-69%3A+Flink+SQL+DDL+Enhancement
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > > --
> >> > > > > > > > > >
> >> > > > > > > > > > Best!
> >> > > > > > > > > > Xuyang
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > >
> >> > > > > > > > > > At 2024-03-15 02:49:59, "Yubin Li"  >
> >> > > wrote:
> >> > > > > > > > > > >Hi folks,
> >> > > > > > > > > > >
> >> > > > > > > > > > >Thank you all for your input, it really makes sense
> to
> >> > > introduce
> >> > > > > > > > missing
> >> > > > > > > > > > >catalog-related SQL syntaxes under this FLIP, and I
> >> have
> >> > > > > changed the
> >> > > > > > > > > > >title of doc to "FLIP-436: Introduce Catalog-related
> >> > > Syntax".
> >> > > > > > > > > > >
> >> > > > > > > > > > >After comprehensive consideration, the following
> >> syntaxes
> >> > > > > should be
> >> > > > > > > > > > >introduced, more suggestions are welcome :)
> >> > > > > > > > > > >
> >> > > > > > > > > > >> 1. SHOW CREATE CATALOG catalog_name
> >> > > > > > > > > > >> 2. DESCRIBE/DESC CATALOG catalog_name
> >> > > > > > > > > > >> 3. ALTER CATALOG cat

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Hang Ruan
Congratulations!

Best,
Hang

Paul Lam  于2024年3月18日周一 17:18写道:

> Congrats! Thanks to everyone involved!
>
> Best,
> Paul Lam
>
> > 2024年3月18日 16:37,Samrat Deb  写道:
> >
> > Congratulations !
> >
> > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li 
> wrote:
> >
> >> Congratulations!
> >>
> >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> >>>
> >>> Congratulations, thanks for the great work!
> >>>
> >>> Best,
> >>> Rui
> >>>
> >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> >> wrote:
> 
>  The Apache Flink community is very happy to announce the release of
>> Apache Flink 1.19.0, which is the first release for the Apache Flink
> 1.19
> >> series.
> 
>  Apache Flink® is an open-source stream processing framework for
> >> distributed, high-performing, always-available, and accurate data
> streaming
> >> applications.
> 
>  The release is available for download at:
>  https://flink.apache.org/downloads.html
> 
>  Please check out the release blog post for an overview of the
> >> improvements for this bugfix release:
> 
> >>
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> 
>  The full release notes are available in Jira:
> 
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353282
> 
>  We would like to thank all contributors of the Apache Flink community
> >> who made this release possible!
> 
> 
>  Best,
>  Yun, Jing, Martijn and Lincoln
> >>
>
>


Re: [DISCUSS] FLIP-434: Support optimizations for pre-partitioned data sources

2024-03-14 Thread Hang Ruan
Hi, Jeyhun.

Thanks for the FLIP. Totally +1 for it.

I have a question about the part `Additional option to disable this
optimization`. Is this option a source configuration or a table
configuration?

Besides that, there is a small mistake, unless I misunderstand something.
Should `Check if upstream_any is pre-partitioned data source AND contains
the same partition keys as the source` be changed to `Check if upstream_any
is pre-partitioned data source AND contains the same partition keys as
downstream_any`?

Best,
Hang

Jeyhun Karimov  于2024年3月13日周三 21:11写道:

> Hi Jane,
>
> Thanks for your comments.
>
>
> 1. Concerning the `sourcePartitions()` method, the partition information
> > returned during the optimization phase may not be the same as the
> partition
> > information during runtime execution. For long-running jobs, partitions
> may
> > be continuously created. Is this FLIP equipped to handle scenarios?
>
>
> - Good point. This scenario is definitely supported.
> Once a new partition is added, or in general, new splits are
> discovered,
> PartitionAwareSplitAssigner::addSplits(Collection
> newSplits)
> method will be called. Inside that method, we are able to detect if a split
> belongs to existing partitions or there is a new partition.
> Once a new partition is detected, we add it to our existing mapping. Our
> mapping looks like Map<Integer, Set<Partition>> subtaskToPartitionAssignment,
> where it maps each source subtask ID to zero or more partitions.
>
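A self-contained sketch of the bookkeeping described above (all names are hypothetical; the real FLIP-434 assigner works against Flink's split and enumerator interfaces):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class PartitionAwareAssignment {
    // Maps each source subtask ID to the partitions it owns.
    private final Map<Integer, Set<String>> subtaskToPartitionAssignment = new HashMap<>();
    private final int parallelism;

    PartitionAwareAssignment(int parallelism) {
        this.parallelism = parallelism;
    }

    // Called when new splits are discovered: a split's partition is either
    // already assigned to some subtask, or it is a new partition and gets
    // assigned (round-robin by hash in this sketch).
    void addPartitionsOf(Collection<String> splitPartitions) {
        for (String partition : splitPartitions) {
            boolean known = subtaskToPartitionAssignment.values().stream()
                    .anyMatch(set -> set.contains(partition));
            if (!known) {
                int subtask = Math.abs(partition.hashCode()) % parallelism;
                subtaskToPartitionAssignment
                        .computeIfAbsent(subtask, k -> new HashSet<>())
                        .add(partition);
            }
        }
    }

    Set<String> partitionsOf(int subtask) {
        return subtaskToPartitionAssignment.getOrDefault(subtask, Set.of());
    }
}
```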
> 2. Regarding the `RemoveRedundantShuffleRule` optimization rule, I
> > understand that it is also necessary to verify whether the hash key
> within
> > the Exchange node is consistent with the partition key defined in the
> table
> > source that implements `SupportsPartitioning`.
>
>
> - Yes, I overlooked that point, fixed. Actually, the rule is much more
> complicated. I tried to simplify it in the FLIP. Good point.
>
>
> 3. Could you elaborate on the desired physical plan and integration with
> > `CompiledPlan` to enhance the overall functionality?
>
>
> - For compiled plan, PartitioningSpec will be used, with a json tag
> "Partitioning". As a result, in the compiled plan, the source operator will
> have
> "abilities" : [ { "type" : "Partitioning" } ] as part of the compiled plan.
> More about the implementation details below:
>
> 
> PartitioningSpec class
> 
> @JsonTypeName("Partitioning")
> public final class PartitioningSpec extends SourceAbilitySpecBase {
>     // some code here
>     @Override
>     public void apply(DynamicTableSource tableSource, SourceAbilityContext context) {
>         if (tableSource instanceof SupportsPartitioning) {
>             ((SupportsPartitioning) tableSource).applyPartitionedRead();
>         } else {
>             throw new TableException(
>                     String.format(
>                             "%s does not support SupportsPartitioning.",
>                             tableSource.getClass().getName()));
>         }
>     }
>     // some code here
> }
>
> 
> SourceAbilitySpec class
> 
> @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include =
> JsonTypeInfo.As.PROPERTY, property = "type")
> @JsonSubTypes({
> @JsonSubTypes.Type(value = FilterPushDownSpec.class),
> @JsonSubTypes.Type(value = LimitPushDownSpec.class),
> @JsonSubTypes.Type(value = PartitionPushDownSpec.class),
> @JsonSubTypes.Type(value = ProjectPushDownSpec.class),
> @JsonSubTypes.Type(value = ReadingMetadataSpec.class),
> @JsonSubTypes.Type(value = WatermarkPushDownSpec.class),
> @JsonSubTypes.Type(value = SourceWatermarkSpec.class),
> @JsonSubTypes.Type(value = AggregatePushDownSpec.class),
> +  @JsonSubTypes.Type(value = PartitioningSpec.class)   // newly added
>
>
>
> Please let me know if that answers your questions or if you have other
> comments.
>
> Regards,
> Jeyhun
>
>
> On Tue, Mar 12, 2024 at 8:56 AM Jane Chan  wrote:
>
> > Hi Jeyhun,
> >
> > Thank you for leading the discussion. I'm generally +1 with this
> proposal,
> > along with some questions. Please see my comments below.
> >
> > 1. Concerning the `sourcePartitions()` method, the partition information
> > returned during the optimization phase may not be the same as the
> partition
> > information during runtime execution. For long-running jobs, partitions
> may
> > be continuously created. Is this FLIP equipped to handle scenarios?
> >
> > 2. Regarding the `RemoveRedundantShuffleRule` optimization rule, I
> > understand that it is also necessary to verify whether the hash key
> within
> > the Exchange node is consistent with the partition key defined in the
> table
> > source that implements `SupportsPartitioning`.
> >
> > 3. Could you elaborate on the desired physical plan and integration with
> > `CompiledPlan` to enhance the overall functionality?
> >
> > Best,
> > Jane
> >
> > On Tue, Mar 12, 2024 at 11:11 AM Jim Hughes  >

Re: [DISCUSS] FLIP-436: Introduce "SHOW CREATE CATALOG" Syntax

2024-03-13 Thread Hang Ruan
Hi, Yubin.

Thanks for the FLIP. +1 for it.

Best,
Hang

Yubin Li  于2024年3月14日周四 10:15写道:

> Hi Jingsong, Feng, and Jeyhun
>
> Thanks for your support and feedback!
>
> > However, could we add a new method `getCatalogDescriptor()` to
> > CatalogManager instead of directly exposing CatalogStore?
>
> Good point. Besides the audit tracking issue, the proposed feature
> only requires the `getCatalogDescriptor()` function. Exposing components
> with excessive functionality would bring unnecessary risks. I have made
> modifications in the FLIP doc [1]. Thanks, Feng :)
>
> > Showing the SQL parser implementation in the FLIP for the SQL syntax
> > might be a bit confusing. Also, the formal definition is missing for
> > this SQL clause.
>
> Thanks, Jeyhun, for pointing it out :) I have updated the doc [1].
>
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=296290756
>
> Best,
> Yubin
>
>
> On Thu, Mar 14, 2024 at 2:18 AM Jeyhun Karimov 
> wrote:
> >
> > Hi Yubin,
> >
> > Thanks for the proposal. +1 for it.
> > I have one comment:
> >
> > I would like to see the SQL syntax for the proposed statement.  Showing
> the
> > SQL parser implementation in the FLIP
> > for the SQL syntax might be a bit confusing. Also, the formal definition
> is
> > missing for this SQL clause.
> > Maybe something like [1] might be useful. WDYT?
> >
> > Regards,
> > Jeyhun
> >
> > [1]
> >
> https://github.com/apache/flink/blob/0da60ca1a4754f858cf7c52dd4f0c97ae0e1b0cb/docs/content/docs/dev/table/sql/show.md?plain=1#L620-L632
> >
> > On Wed, Mar 13, 2024 at 3:28 PM Feng Jin  wrote:
> >
> > > Hi Yubin
> > >
> > > Thank you for initiating this FLIP.
> > >
> > > I have just one minor question:
> > >
> > > I noticed that we added a new function `getCatalogStore` to expose
> > > CatalogStore, and it seems fine.
> > > However, could we add a new method `getCatalogDescriptor()` to
> > > CatalogManager instead of directly exposing CatalogStore?
> > > By only providing the `getCatalogDescriptor()` interface, it may be
> easier
> > > for us to implement audit tracking in CatalogManager in the future.
> WDYT ?
> > > Although we have only collected some modified events at the moment.[1]
> > >
> > >
> > > [1].
> > >
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-294%3A+Support+Customized+Catalog+Modification+Listener
> > >
> > > Best,
> > > Feng
> > >
> > > On Wed, Mar 13, 2024 at 5:31 PM Jingsong Li 
> > > wrote:
> > >
> > > > +1 for this.
> > > >
> > > > We are missing a series of catalog related syntaxes.
> > > > Especially after the introduction of catalog store. [1]
> > > >
> > > > [1]
> > > >
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-295%3A+Support+lazy+initialization+of+catalogs+and+persistence+of+catalog+configurations
> > > >
> > > > Best,
> > > > Jingsong
> > > >
> > > > On Wed, Mar 13, 2024 at 5:09 PM Yubin Li  wrote:
> > > > >
> > > > > Hi devs,
> > > > >
> > > > > I'd like to start a discussion about FLIP-436: Introduce "SHOW
> CREATE
> > > > > CATALOG" Syntax [1].
> > > > >
> > > > > At present, the `SHOW CREATE TABLE` statement provides strong
> support
> > > for
> > > > > users to easily
> > > > > reuse created tables. However, despite the increasing importance
> of the
> > > > > `Catalog` in user's
> > > > > business, there is no similar statement for users to use.
> > > > >
> > > > > According to the online discussion in FLINK-24939 [2] with Jark Wu
> and
> > > > Feng
> > > > > Jin, since `CatalogStore`
> > > > > has been introduced in FLIP-295 [3], we could use this component to
> > > > > implement such a long-awaited
> > > > > feature, Please refer to the document [1] for implementation
> details.
> > > > >
> > > > > examples as follows:
> > > > >
> > > > > Flink SQL> create catalog cat2 WITH ('type'='generic_in_memory',
> > > > > > 'default-database'='db');
> > > > > > [INFO] Execute statement succeeded.
> > > > > > Flink SQL> show create catalog cat2;
> > > > > >
> > > > > >
> > > >
> > >
> ++
> > > > > > | result |
> > > > > >
> > > > > >
> > > >
> > >
> ++
> > > > > > | CREATE CATALOG `cat2` WITH (
> > > > > >   'default-database' = 'db',
> > > > > >   'type' = 'generic_in_memory'
> > > > > > )
> > > > > >  |
> > > > > >
> > > > > >
> > > >
> > >
> ++
> > > > > > 1 row in set
> > > > >
> > > > >
> > > > >
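The output in the example above could be rendered from a catalog's name and options roughly as follows (an illustrative sketch only; the actual proposal reads a `CatalogDescriptor` from the `CatalogStore`, and the exact formatting is up to the implementation):

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

class ShowCreateCatalog {
    // Renders a CREATE CATALOG statement from a catalog name and its
    // options, sorting keys alphabetically as in the example output.
    static String render(String name, Map<String, String> options) {
        String body = new TreeMap<>(options).entrySet().stream()
                .map(e -> String.format("  '%s' = '%s'", e.getKey(), e.getValue()))
                .collect(Collectors.joining(",\n"));
        return String.format("CREATE CATALOG `%s` WITH (%n%s%n)", name, body);
    }
}
```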
> > > > > Looking forward to hearing from you, thanks!
> > > > >
> > > > > Best regards,
> > > > > Yubin
> > > > >
> > > > > [1]
> > > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=296290756
> > > > > [2] https://issues.apache.org/jira/browse/FLINK-24939
> > > > > [3]
> > > > >
> > > >
> > >
> 

Re: [VOTE] Release 1.19.0, release candidate #2

2024-03-12 Thread Hang Ruan
+1 (non-binding)

- Verified signatures and checksums
- Verified that source does not contain binaries
- Build source code successfully
- Reviewed the release note and left a comment

Best,
Hang

Feng Jin  于2024年3月12日周二 11:23写道:

> +1 (non-binding)
>
> - Verified signatures and checksums
> - Verified that source does not contain binaries
> - Build source code successfully
> - Run a simple sql query successfully
>
> Best,
> Feng Jin
>
>
> On Tue, Mar 12, 2024 at 11:09 AM Ron liu  wrote:
>
> > +1 (non binding)
> >
> > quickly verified:
> > - verified that source distribution does not contain binaries
> > - verified checksums
> > - built source code successfully
> >
> >
> > Best,
> > Ron
> >
> > Jeyhun Karimov  于2024年3月12日周二 01:00写道:
> >
> > > +1 (non binding)
> > >
> > > - verified that source distribution does not contain binaries
> > > - verified signatures and checksums
> > > - built source code successfully
> > >
> > > Regards,
> > > Jeyhun
> > >
> > >
> > > On Mon, Mar 11, 2024 at 3:08 PM Samrat Deb 
> > wrote:
> > >
> > > > +1 (non binding)
> > > >
> > > > - verified signatures and checksums
> > > > - ASF headers are present in all expected file
> > > > - No unexpected binaries files found in the source
> > > > - Build successful locally
> > > > - tested basic word count example
> > > >
> > > >
> > > >
> > > >
> > > > Bests,
> > > > Samrat
> > > >
> > > > On Mon, 11 Mar 2024 at 7:33 PM, Ahmed Hamdy 
> > > wrote:
> > > >
> > > > > Hi Lincoln
> > > > > +1 (non-binding) from me
> > > > >
> > > > > - Verified Checksums & Signatures
> > > > > - Verified Source dists don't contain binaries
> > > > > - Built source successfully
> > > > > - reviewed web PR
> > > > >
> > > > >
> > > > > Best Regards
> > > > > Ahmed Hamdy
> > > > >
> > > > >
> > > > > On Mon, 11 Mar 2024 at 15:18, Lincoln Lee 
> > > > wrote:
> > > > >
> > > > > > Hi Robin,
> > > > > >
> > > > > > Thanks for helping verifying the release note[1], FLINK-14879
> > should
> > > > not
> > > > > > have been included, after confirming this
> > > > > > I moved all unresolved non-blocker issues left over from 1.19.0
> to
> > > > 1.20.0
> > > > > > and reconfigured the release note [1].
> > > > > >
> > > > > > Best,
> > > > > > Lincoln Lee
> > > > > >
> > > > > > [1]
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353282
> > > > > >
> > > > > >
> > > > > > Robin Moffatt  于2024年3月11日周一
> 19:36写道:
> > > > > >
> > > > > > > Looking at the release notes [1] it lists `DESCRIBE DATABASE`
> > > > > > (FLINK-14879)
> > > > > > > and `DESCRIBE CATALOG` (FLINK-14690).
> > > > > > > When I try these in 1.19 RC2 the behaviour is as in 1.18.1,
> i.e.
> > it
> > > > is
> > > > > > not
> > > > > > > supported:
> > > > > > >
> > > > > > > ```
> > > > > > > [INFO] Execute statement succeed.
> > > > > > >
> > > > > > > Flink SQL> show catalogs;
> > > > > > > +-+
> > > > > > > |catalog name |
> > > > > > > +-+
> > > > > > > |   c_new |
> > > > > > > | default_catalog |
> > > > > > > +-+
> > > > > > > 2 rows in set
> > > > > > >
> > > > > > > Flink SQL> DESCRIBE CATALOG c_new;
> > > > > > > [ERROR] Could not execute SQL statement. Reason:
> > > > > > > org.apache.calcite.sql.validate.SqlValidatorException: Column
> > > 'c_new'
> > > > > not
> > > > > > > found in any table
> > > > > > >
> > > > > > > Flink SQL> show databases;
> > > > > > > +--+
> > > > > > > |database name |
> > > > > > > +--+
> > > > > > > | default_database |
> > > > > > > +--+
> > > > > > > 1 row in set
> > > > > > >
> > > > > > > Flink SQL> DESCRIBE DATABASE default_database;
> > > > > > > [ERROR] Could not execute SQL statement. Reason:
> > > > > > > org.apache.calcite.sql.validate.SqlValidatorException: Column
> > > > > > > 'default_database' not found in
> > > > > > > any table
> > > > > > > ```
> > > > > > >
> > > > > > > Is this an error in the release notes, or my mistake in
> > > interpreting
> > > > > > them?
> > > > > > >
> > > > > > > thanks, Robin.
> > > > > > >
> > > > > > >
> > > > > > > [1]
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353282
> > > > > > >
> > > > > > > On Thu, 7 Mar 2024 at 10:01, Lincoln Lee <
> lincoln.8...@gmail.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi everyone,
> > > > > > > >
> > > > > > > > Please review and vote on the release candidate #2 for the
> > > version
> > > > > > > 1.19.0,
> > > > > > > > as follows:
> > > > > > > > [ ] +1, Approve the release
> > > > > > > > [ ] -1, Do not approve the release (please provide specific
> > > > comments)
> > > > > > > >
> > > > > > > > The complete staging area is available for your review, which
> > > > > includes:
> > > > > > > >
> > > > > > > > * JIRA release notes [1], and the pull request adding release
> > > 

Re: [DISCUSS] FLIP Suggestion: Externalize Kudu Connector from Bahir

2024-03-08 Thread Hang Ruan
Hi, Ferenc.

Thanks for the FLIP discussion. +1 for the proposal.
I think a note stating that this code originally lived in Bahir could be
included in the FLIP.

Best,
Hang

Leonard Xu  于2024年3月7日周四 14:14写道:

> Thanks Ferenc for kicking off this discussion, I left some comments here:
>
> (1) About the release version, could you specify kudu connector version
> instead of flink version 1.18 as external connector version is different
> with flink ?
>
> (2) About the connector config options, could you enumerate these options
> so that we can review they’re reasonable or not?
>
> (3) Metrics is also key part of connector, could you add the supported
> connector metrics to public interface as well?
>
>
> Best,
> Leonard
>
>
> > 2024年3月6日 下午11:23,Ferenc Csaky  写道:
> >
> > Hello devs,
> >
> > Opening this thread to discuss a FLIP [1] about externalizing the Kudu
> connector, as recently
> > the Apache Bahir project were moved to the attic [2]. Some details were
> discussed already
> > in another thread [3]. I am proposing to externalize this connector and
> keep it maintainable,
> > and up to date.
> >
> > Best regards,
> > Ferenc
> >
> > [1]
> https://docs.google.com/document/d/1vHF_uVe0FTYCb6PRVStovqDeqb_C_FKjt2P5xXa7uhE
> > [2] https://bahir.apache.org/
> > [3] https://lists.apache.org/thread/2nb8dxxfznkyl4hlhdm3vkomm8rk4oyq
>
>


[jira] [Created] (FLINK-34586) Update the README in Flink CDC

2024-03-06 Thread Hang Ruan (Jira)
Hang Ruan created FLINK-34586:
-

 Summary: Update the README in Flink CDC
 Key: FLINK-34586
 URL: https://issues.apache.org/jira/browse/FLINK-34586
 Project: Flink
  Issue Type: Improvement
  Components: Flink CDC
Reporter: Hang Ruan


We should update the README file in Flink CDC.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34585) [JUnit5 Migration] Module: Flink CDC

2024-03-05 Thread Hang Ruan (Jira)
Hang Ruan created FLINK-34585:
-

 Summary: [JUnit5 Migration] Module: Flink CDC
 Key: FLINK-34585
 URL: https://issues.apache.org/jira/browse/FLINK-34585
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Hang Ruan


Most tests in Flink CDC still use JUnit 4. We need to migrate them to JUnit 5.





[jira] [Created] (FLINK-34584) Change package name to org.apache.flink.cdc

2024-03-05 Thread Hang Ruan (Jira)
Hang Ruan created FLINK-34584:
-

 Summary: Change package name to org.apache.flink.cdc
 Key: FLINK-34584
 URL: https://issues.apache.org/jira/browse/FLINK-34584
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Hang Ruan


Flink CDC needs to change its package name to org.apache.flink.cdc.





Re: [DISCUSS] FLIP-399: Flink Connector Doris

2024-03-03 Thread Hang Ruan
Hi,

Thanks for the proposal. +1 for the FLIP.

Best,
Hang

Jeyhun Karimov  于2024年3月2日周六 17:53写道:

> Hi,
>
> Thanks for the proposal. +1 for the FLIP.
> I have a few questions:
>
> - How exactly the two (Stream Load's two-phase commit and Flink's two-phase
> commit) combination will ensure the e2e exactly-once semantics?
>
> - The FLIP proposes to combine Doris's batch writing with the primary key
> table to achieve Exactly-Once semantics. Could you elaborate more on that?
> Why it is not the default behavior but a workaround?
>
> Regards,
> Jeyhun
>
> On Sat, Mar 2, 2024 at 10:14 AM Yanquan Lv  wrote:
>
> > Thanks for driving this.
> > The content is very detailed, it is recommended to add a section on Test
> > Plan for more completeness.
> >
> > Di Wu  于2024年1月25日周四 15:40写道:
> >
> > > Hi all,
> > >
> > > Previously, we had some discussions about contributing Flink Doris
> > > Connector to the Flink community [1]. I want to further promote this
> > work.
> > > I hope everyone will help participate in this FLIP discussion and
> provide
> > > more valuable opinions and suggestions.
> > > Thanks.
> > >
> > > [1] https://lists.apache.org/thread/lvh8g9o6qj8bt3oh60q81z0o1cv3nn8p
> > >
> > > Brs,
> > > di.wu
> > >
> > >
> > >
> > > On 2023/12/07 05:02:46 wudi wrote:
> > > >
> > > > Hi all,
> > > >
> > > > As discussed in the previous email [1], about contributing the Flink
> > > Doris Connector to the Flink community.
> > > >
> > > >
> > > > Apache Doris[2] is a high-performance, real-time analytical database
> > > based on MPP architecture, for scenarios where Flink is used for data
> > > analysis, processing, or real-time writing on Doris, Flink Doris
> > Connector
> > > is an effective tool.
> > > >
> > > > At the same time, Contributing Flink Doris Connector to the Flink
> > > community will further expand the Flink Connectors ecosystem.
> > > >
> > > > So I would like to start an official discussion FLIP-399: Flink
> > > Connector Doris[3].
> > > >
> > > > Looking forward to comments, feedbacks and suggestions from the
> > > community on the proposal.
> > > >
> > > > [1] https://lists.apache.org/thread/lvh8g9o6qj8bt3oh60q81z0o1cv3nn8p
> > > > [2]
> > https://doris.apache.org/docs/dev/get-starting/what-is-apache-doris/
> > > > [3]
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-399%3A+Flink+Connector+Doris
> > > >
> > > >
> > > > Brs,
> > > >
> > > > di.wu
> > > >
> > >
> >
>


Re: [VOTE] FLIP-314: Support Customized Job Lineage Listener

2024-02-28 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

weijie guo  于2024年2月29日周四 09:55写道:

> +1 (binding)
>
> Best regards,
>
> Weijie
>
>
> Feng Jin  于2024年2月29日周四 09:37写道:
>
> > +1 (non-binding)
> >
> > Best,
> > Feng Jin
> >
> > On Thu, Feb 29, 2024 at 4:41 AM Márton Balassi  >
> > wrote:
> >
> > > +1 (binding)
> > >
> > > Marton
> > >
> > > On Wed, Feb 28, 2024 at 5:14 PM Gyula Fóra 
> wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Gyula
> > > >
> > > > On Wed, Feb 28, 2024 at 11:10 AM Maciej Obuchowski <
> > > mobuchow...@apache.org
> > > > >
> > > > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > Best,
> > > > > Maciej Obuchowski
> > > > >
> > > > > śr., 28 lut 2024 o 10:29 Zhanghao Chen 
> > > > > napisał(a):
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > Best,
> > > > > > Zhanghao Chen
> > > > > > 
> > > > > > From: Yong Fang 
> > > > > > Sent: Wednesday, February 28, 2024 10:12
> > > > > > To: dev 
> > > > > > Subject: [VOTE] FLIP-314: Support Customized Job Lineage Listener
> > > > > >
> > > > > > Hi devs,
> > > > > >
> > > > > > I would like to restart a vote about FLIP-314: Support Customized
> > Job
> > > > > > Lineage Listener[1].
> > > > > >
> > > > > > Previously, we added lineage related interfaces in FLIP-314.
> Before
> > > the
> > > > > > interfaces were developed and merged into the master, @Maciej and
> > > > > > @Zhenqiu provided valuable suggestions for the interface from the
> > > > > > perspective of the lineage system. So we updated the interfaces
> of
> > > > > FLIP-314
> > > > > > and discussed them again in the discussion thread [2].
> > > > > >
> > > > > > So I am here to initiate a new vote on FLIP-314, the vote will be
> > > open
> > > > > for
> > > > > > at least 72 hours unless there is an objection or insufficient
> > votes
> > > > > >
> > > > > > [1]
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-314%3A+Support+Customized+Job+Lineage+Listener
> > > > > > [2]
> > https://lists.apache.org/thread/wopprvp3ww243mtw23nj59p57cghh7mc
> > > > > >
> > > > > > Best,
> > > > > > Fang Yong
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Requesting link to join Slack Community

2024-02-27 Thread Hang Ruan
Hi, Geetesh.

Here is the invite link :
https://join.slack.com/t/apache-flink/shared_invite/zt-1t4khgllz-Fm1CnXzdBbUchBz4HzJCAg
.
I will raise a PR to update the link.

Best,
Hang

Geetesh Nikhade  于2024年2月28日周三 04:49写道:

> Hi Folks,
>
> I would like to join the Apache Flink Community on Slack, but it looks
> like the link shared on official flink website seems to have expired. Can
> someone please share a new join link? or let me know what would be the best
> way to get that link?
>
> Thanks in advance.
>
> Best,
> Geetesh
>


Re: [VOTE] Release flink-connector-parent 1.1.0 release candidate #2

2024-02-20 Thread Hang Ruan
+1 (non-binding)

- verified checksum and signature
- checked Github release tag
- checked release notes
- verified no binaries in source
- reviewed the web PR
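(For reference, the "verified checksum" step above boils down to comparing a computed SHA-512 digest against the published .sha512 file. The sketch below is self-contained, so the archive bytes and file name are placeholders rather than the real release artifacts.)

```python
import hashlib

# Stand-ins for the downloaded source archive and the published
# .sha512 file; in a real verification these come from dist.apache.org.
archive_bytes = b"example flink connector source release"
sha512_line = (hashlib.sha512(archive_bytes).hexdigest()
               + "  flink-connector-parent-1.1.0-src.tgz")

# The .sha512 file format is "<hex digest>  <file name>".
expected = sha512_line.split()[0]
actual = hashlib.sha512(archive_bytes).hexdigest()
print("checksum OK" if actual == expected else "checksum MISMATCH")
# -> checksum OK
```

Signature verification is a separate step, done with `gpg --verify` against the KEYS file linked in the vote email.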

Best,
Hang

Leonard Xu  于2024年2月20日周二 14:26写道:

> +1 (binding)
>
> - verified signatures
> - verified hashsums
> - built from source code succeeded
> - checked Github release tag
> - checked release notes
> - reviewed all Jira tickets have been resolved
> - reviewed the web PR
>
> Best,
> Leonard
>
>
> > 2024年2月20日 上午11:14,Rui Fan <1996fan...@gmail.com> 写道:
> >
> > Thanks for driving this, Etienne!
> >
> > +1 (non-binding)
> >
> > - Verified checksum and signature
> > - Verified pom content
> > - Build source on my Mac with jdk8
> > - Verified no binaries in source
> > - Checked staging repo on Maven central
> > - Checked source code tag
> > - Reviewed web PR
> >
> > Best,
> > Rui
> >
> > On Tue, Feb 20, 2024 at 10:33 AM Qingsheng Ren  wrote:
> >
> >> Thanks for driving this, Etienne!
> >>
> >> +1 (binding)
> >>
> >> - Checked release note
> >> - Verified checksum and signature
> >> - Verified pom content
> >> - Verified no binaries in source
> >> - Checked staging repo on Maven central
> >> - Checked source code tag
> >> - Reviewed web PR
> >> - Built Kafka connector from source with parent pom in staging repo
> >>
> >> Best,
> >> Qingsheng
> >>
> >> On Tue, Feb 20, 2024 at 1:34 AM Etienne Chauchot 
> >> wrote:
> >>
> >>> Hi everyone,
> >>> Please review and vote on the release candidate #2 for the version
> >>> 1.1.0, as follows:
> >>> [ ] +1, Approve the release
> >>> [ ] -1, Do not approve the release (please provide specific comments)
> >>>
> >>>
> >>> The complete staging area is available for your review, which includes:
> >>> * JIRA release notes [1],
> >>> * the official Apache source release to be deployed to dist.apache.org
> >>> [2], which are signed with the key with fingerprint
> >>> D1A76BA19D6294DD0033F6843A019F0B8DD163EA [3],
> >>> * all artifacts to be deployed to the Maven Central Repository [4],
> >>> * source code tag v1.1.0-rc2 [5],
> >>> * website pull request listing the new release [6].
> >>>
> >>> * confluence wiki: connector parent upgrade to version 1.1.0 that will
> >>> be validated after the artifact is released (there is no PR mechanism
> on
> >>> the wiki) [7]
> >>>
> >>>
> >>> The vote will be open for at least 72 hours. It is adopted by majority
> >>> approval, with at least 3 PMC affirmative votes.
> >>>
> >>> Thanks,
> >>> Etienne
> >>>
> >>> [1]
> >>>
> >>>
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353442
> >>> [2]
> >>>
> >>>
> >>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-parent-1.1.0-rc2
> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>> [4]
> >> https://repository.apache.org/content/repositories/orgapacheflink-1707
> >>> [5]
> >>>
> >>>
> >>
> https://github.com/apache/flink-connector-shared-utils/releases/tag/v1.1.0-rc2
> >>>
> >>> [6] https://github.com/apache/flink-web/pull/717
> >>>
> >>> [7]
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
> >>>
> >>
>
>


Re: [ANNOUNCE] New Apache Flink Committer - Jiabao Sun

2024-02-19 Thread Hang Ruan
Congratulations, Jiabao!

Best,
Hang

Qingsheng Ren  于2024年2月19日周一 17:53写道:

> Hi everyone,
>
> On behalf of the PMC, I'm happy to announce Jiabao Sun as a new Flink
> Committer.
>
> Jiabao began contributing in August 2022 and has contributed 60+ commits
> for Flink main repo and various connectors. His most notable contribution
> is being the core author and maintainer of MongoDB connector, which is
> fully functional in DataStream and Table/SQL APIs. Jiabao is also the
> author of FLIP-377 and the main contributor of JUnit 5 migration in runtime
> and table planner modules.
>
> Beyond his technical contributions, Jiabao is an active member of our
> community, participating in the mailing list and consistently volunteering
> for release verifications and code reviews with enthusiasm.
>
> Please join me in congratulating Jiabao for becoming an Apache Flink
> committer!
>
> Best,
> Qingsheng (on behalf of the Flink PMC)
>


Re: 退订

2024-02-05 Thread Hang Ruan
Hi,

请分别发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org 和
dev-unsubscr...@flink.apache.org 地址来取消订阅来自 user...@flink.apache.org
 和 dev@flink.apache.org 邮件组的邮件,你可以参考[1][2] 管理你的邮件订阅。
Please send email to user-zh-unsubscr...@flink.apache.org and
dev-unsubscr...@flink.apache.org if you want to unsubscribe the mail from
user...@flink.apache.org  and dev@flink.apache.org,
and you can refer [1][2] for more details.

Best,
Hang

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2] https://flink.apache.org/community.html#mailing-lists

12260035 <12260...@qq.com.invalid> 于2024年2月6日周二 14:17写道:

> 退订
>
>
>
>
> --原始邮件--
> 发件人:
>   "dev"
> <
> qr7...@163.com;
> 发送时间:2024年1月19日(星期五) 下午3:36
> 收件人:"dev"
> 主题:退订
>
>
>
> 退订


Re: [VOTE] FLIP-331: Support EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute to optimize task deployment

2024-02-04 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

Dong Lin  于2024年2月5日周一 11:08写道:

> Thanks for the FLIP.
>
> +1 (binding)
>
> Best,
> Dong
>
> On Wed, Jan 31, 2024 at 11:41 AM Xuannan Su  wrote:
>
> > Hi everyone,
> >
> > Thanks for all the feedback about the FLIP-331: Support
> > EndOfStreamTrigger and isOutputOnlyAfterEndOfStream operator attribute
> > to optimize task deployment [1] [2].
> >
> > I'd like to start a vote for it. The vote will be open for at least 72
> > hours(excluding weekends,until Feb 5, 12:00AM GMT) unless there is an
> > objection or an insufficient number of votes.
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-331%3A+Support+EndOfStreamTrigger+and+isOutputOnlyAfterEndOfStream+operator+attribute+to+optimize+task+deployment
> > [2] https://lists.apache.org/thread/qq39rmg3f23ysx5m094s4c4cq0m4tdj5
> >
> >
> > Best,
> > Xuannan
> >
>


Re: [VOTE] Release flink-connector-jdbc, release candidate #3

2024-02-01 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk8
- Check that the jar is built by jdk8

Best,
Hang

Sergey Nuyanzin  于2024年2月1日周四 19:50写道:

> Hi everyone,
> Please review and vote on the release candidate #3 for the version 3.1.2,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint 1596BBF0726835D8 [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.1.2-rc3 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354088
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.1.2-rc3
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1706/
> [5] https://github.com/apache/flink-connector-jdbc/releases/tag/v3.1.2-rc3
> [6] https://github.com/apache/flink-web/pull/707
>


Re: [VOTE] Release flink-connector-mongodb v1.1.0, release candidate #2

2024-01-30 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk11
- Verified web PR
- Check that the jar is built by jdk8
- Review the release note

Best,
Hang

Jiabao Sun  于2024年1月30日周二 21:44写道:

> Thanks Leonard for driving this.
>
> +1(non-binding)
>
> - Release notes look good
> - Tag is present in Github
> - Validated checksum hash
> - Verified signature
> - Build the source with Maven by jdk8,11,17,21
> - Checked the dist jar was built by jdk8
> - Reviewed web PR
> - Run a filter push down test by sql-client on Flink 1.18.1 and it works
> well
>
> Best,
> Jiabao
>
>
> On 2024/01/30 10:23:07 Leonard Xu wrote:
> > Hey all,
> >
> > Please help review and vote on the release candidate #2 for the version
> v1.1.0 of the
> > Apache Flink MongoDB Connector as follows:
> >
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * The official Apache source release to be deployed to dist.apache.org
> [2],
> > which are signed with the key with fingerprint
> > 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
> > * All artifacts to be deployed to the Maven Central Repository [4],
> > * Source code tag v1.1.0-rc2 [5],
> > * Website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> >
> > Best,
> > Leonard
> > [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353483
> > [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc2/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1705/
> > [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc2
> > [6] https://github.com/apache/flink-web/pull/719


Re: [VOTE] Release flink-connector-mongodb 1.1.0, release candidate #1

2024-01-29 Thread Hang Ruan
Hi, Leonard.

I found that the META-INF/MANIFEST.MF in
flink-sql-connector-mongodb-1.1.0-1.18.jar shows as follows:

Manifest-Version: 1.0
Archiver-Version: Plexus Archiver
Created-By: Apache Maven 3.8.1
Built-By: bangjiangxu
Build-Jdk: 11.0.11
Specification-Title: Flink : Connectors : SQL : MongoDB
Specification-Version: 1.1.0-1.18
Specification-Vendor: The Apache Software Foundation
Implementation-Title: Flink : Connectors : SQL : MongoDB
Implementation-Version: 1.1.0-1.18
Implementation-Vendor-Id: org.apache.flink
Implementation-Vendor: The Apache Software Foundation

Maybe we should build the mongodb connector with jdk8.
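(As an aside, the manifest check above can be scripted. The sketch below builds a tiny jar-like zip in memory so it is self-contained — against a real release you would open the downloaded jar instead, and the manifest contents here are just the attributes quoted above.)

```python
import io
import zipfile

# A minimal in-memory stand-in for a connector jar with a manifest.
manifest = b"Manifest-Version: 1.0\r\nBuild-Jdk: 11.0.11\r\n"
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF", manifest)

# Read the manifest back and extract the Build-Jdk attribute -- the
# same check one would run on the released sql-connector jar.
with zipfile.ZipFile(buf) as jar:
    text = jar.read("META-INF/MANIFEST.MF").decode("ascii")
attrs = dict(line.split(": ", 1) for line in text.splitlines() if line)
print(attrs["Build-Jdk"])  # -> 11.0.11
```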

Best,
Hang

Jiabao Sun  于2024年1月29日周一 21:51写道:

> Thanks Leonard for driving this.
>
> +1(non-binding)
>
> - Release notes look good
> - Tag is present in Github
> - Validated checksum hash
> - Verified signature
> - Build the source with Maven by jdk8,11,17,21
> - Verified web PR and left minor comments
> - Run a filter push down test by sql-client on Flink 1.18.1 and it works
> well
>
> Best,
> Jiabao
>
>
> On 2024/01/29 12:33:23 Leonard Xu wrote:
> > Hey all,
> >
> > Please help review and vote on the release candidate #1 for the version
> 1.1.0 of the
> > Apache Flink MongoDB Connector as follows:
> >
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * The official Apache source release to be deployed to dist.apache.org
> [2],
> > which are signed with the key with fingerprint
> > 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
> > * All artifacts to be deployed to the Maven Central Repository [4],
> > * Source code tag v1.1.0-rc1 [5],
> > * Website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> >
> > Best,
> > Leonard
> > [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353483
> > [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-mongodb-1.1.0-rc1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1702/
> > [5] https://github.com/apache/flink-connector-mongodb/tree/v1.1.0-rc1
> > [6] https://github.com/apache/flink-web/pull/719


Re: [VOTE] Release flink-connector-jdbc, release candidate #2

2024-01-29 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk11
- Verified web PR
- Check that the jar is built by jdk8

Best,
Hang

Jiabao Sun  于2024年1月30日周二 10:52写道:

> +1(non-binding)
>
> - Release notes look good
> - Tag is present in Github
> - Validated checksum hash
> - Verified signature
> - Verified web PR and left minor comments
>
> Best,
> Jiabao
>
>
> On 2024/01/30 00:17:54 Sergey Nuyanzin wrote:
> > Hi everyone,
> > Please review and vote on the release candidate #2 for the version
> > 3.1.2, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2], which are signed with the key with fingerprint
> > 1596BBF0726835D8 [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.1.2-rc2 [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Release Manager
> >
> > [1]
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12354088
> > [2]
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-jdbc-3.1.2-rc2
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1704/
> > [5]
> https://github.com/apache/flink-connector-jdbc/releases/tag/v3.1.2-rc2
> > [6] https://github.com/apache/flink-web/pull/707
> >


Re: [VOTE] Release flink-connector-kafka v3.1.0, release candidate #1

2024-01-28 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk11
- Verified web PR
- Check that the jar is built by jdk8

Best,
Hang

Martijn Visser  于2024年1月26日周五 21:05写道:

> Hi everyone,
> Please review and vote on the release candidate #1 for the Flink Kafka
> connector version 3.1.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This release is compatible with Flink 1.17.* and Flink 1.18.*
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint
> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.1.0-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353135
> [2]
>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-kafka-3.1.0-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1700
> [5]
> https://github.com/apache/flink-connector-kafka/releases/tag/v3.1.0-rc1
> [6] https://github.com/apache/flink-web/pull/718
>


Re: [DISCUSS] Release new version of Flink's Kafka connector

2024-01-26 Thread Hang Ruan
Thanks, Martijn.

+1 for releasing a version 3.1, which only supports Flink 1.18.

Best,
Hang

Leonard Xu  于2024年1月26日周五 16:18写道:

> Thanks Martijn for driving this.
>
> +1 to use v3.1 version/branch, we can use v4.0 for Flink minor versions
> 1.18&1.19 later.
>
>
> Best,
> Leonard
>
>
>
> > 2024年1月26日 下午4:13,Martijn Visser  写道:
> >
> > Hi!
> >
> > Thanks for chipping in, clarifying and correcting me. I'll kick off a
> release for v3.1 today then!
> >
> > Best regards,
> >
> > Martijn
> >
> > On Fri, Jan 26, 2024 at 8:46 AM Mason Chen  > wrote:
> > Hi Martijn,
> >
> > +1 no objections, thanks for volunteering. I'll definitely help verify
> the
> > rc when it becomes available.
> >
> > I think FLIP-288 (I assume you meant this) doesn't introduce incompatible
> > changes since the implementation should be state compatible as well as
> the
> > default changes should be transparent to the user and actually correct
> > possibly erroneous behavior.
> >
> > Also, the RecordEvaluator was released with Flink 1.18 (I assume you
> meant
> > this). Given the above, I'm +1 for a v3.1 release that only supports 1.18
> > while we support patches on v3.0 that supports 1.17. This logic is also
> > inline with what was agreed upon for external connector versioning [1].
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
> <
> https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
> >
> >
> > Best,
> > Mason
> >
> > On Thu, Jan 25, 2024 at 2:16 PM Martijn Visser  >
> > wrote:
> >
> > > Hi everyone,
> > >
> > > The latest version of the Flink Kafka connector that's available is
> > > currently v3.0.2, which is compatible with both Flink 1.17 and Flink
> 1.18.
> > >
> > > I would like to propose to create a release which is either v3.1, or
> v4.0
> > > (see below), with compatibility for Flink 1.17 and Flink 1.18. This
> newer
> > > version would contain many improvements [1] [2] like:
> > >
> > > * FLIP-246 Dynamic Kafka Source
> > > * FLIP-288 Dynamic Partition Discovery
> > > * Rack Awareness support
> > > * Kafka Record support for KafkaSink
> > > * Misc bug fixes and CVE issues
> > >
> > > If there are no objections, I would like to volunteer as release
> manager.
> > >
> > > The only thing why I'm not sure if this should be a v3.1 or a v4.0, is
> > > because I'm not 100% sure if FLIP-246 introduces incompatible API
> changes
> > > (requiring a new major version), or if the functionality was added in a
> > > backwards compatible manner (meaning a new minor version would be
> > > sufficient). I'm looping in Hongshun Wang and Leonard Xu to help
> clarify
> > > this.
> > >
> > > There's also a discussion happening in an open PR [3] on dropping
> support
> > > for Flink 1.18 afterwards (since this PR would add support for
> > > RecordEvaluator, which only exists in Flink 1.19). My proposal would be
> > > that after either v3.1 or v4.0 is released, we would indeed drop
> support
> > > for Flink 1.18 with that PR and the next Flink Kafka connector would be
> > > either v4.0 (if v3.1 is the next release) or v5.0 (if v4.0 is the next
> > > release).
> > >
> > > Best regards,
> > >
> > > Martijn
> > >
> > > [1]
> > >
> > >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353135
> <
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353135
> >
> > > [2]
> > >
> > >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12352917
> <
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12352917
> >
> > > [3]
> > >
> > >
> https://github.com/apache/flink-connector-kafka/pull/76#pullrequestreview-1844645464
> <
> https://github.com/apache/flink-connector-kafka/pull/76#pullrequestreview-1844645464
> >
> > >
>
>


Re: [VOTE] FLIP-417: Expose JobManagerOperatorMetrics via REST API

2024-01-25 Thread Hang Ruan
Thanks for the FLIP.

+1 (non-binding)

Best,
Hang

Mason Chen  于2024年1月26日周五 04:51写道:

> Hi Devs,
>
> I would like to start a vote on FLIP-417: Expose JobManagerOperatorMetrics
> via REST API [1] which has been discussed in this thread [2].
>
> The vote will be open for at least 72 hours unless there is an objection or
> not enough votes.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-417%3A+Expose+JobManagerOperatorMetrics+via+REST+API
> [2] https://lists.apache.org/thread/tt0hf6kf5lcxd7g62v9dhpn3z978pxw0
>
> Best,
> Mason
>


Re: [DISCUSS] FLIP-417: Expose JobManagerOperatorMetrics via REST API

2024-01-16 Thread Hang Ruan
Hi, Mason.

The field `operatorName` in JobManagerOperatorQueryScopeInfo mirrors the
fields of OperatorQueryScopeInfo, which uses the operatorName instead of
the OperatorID.
It is fine by me to change from operatorName to operatorID in this
FLIP.

Best,
Hang

Mason Chen  于2024年1月17日周三 09:39写道:

> Hi Xuyang and Hang,
>
> Thanks for your support and feedback! See my responses below:
>
> 1. IIRC, in a sense, operator ID and vertex ID are the same thing. The
> > operator ID can
> > be converted from the vertex ID[1]. Therefore, it is somewhat strange to
> > have both vertex
> > ID and operator ID in a single URL.
> >
> I think Hang explained it perfectly. Essentially, a vertex may contain one
> or more operators, so the operator ID is required to distinguish this case.
>
> 2. If I misunderstood the semantics of operator IDs here, then what is the
> > relationship
> > between vertex ID and operator ID, and do we need a url like
> > `/jobs//vertices//operators/`
> > to list all operator ids under this vertices?
> >
> Good question, we definitely need expose operator IDs through the REST API
> to make this usable. I'm looking at how users would currently discover the
> vertex id to query. From the supported REST APIs [1], you can currently
> obtain it from
>
> 1. `/jobs/`
> 2. `/jobs//plan`
>
> The responses of both these APIs include the vertex ids (the
> vertices AND nodes fields), but not the operator ids. We would need to add
> the logic to the plan generation [2]. The response is a little confusing
> because there is a field in the vertices called "operator name". I propose
> to add a new field called "operators" to the vertex response object, which
> would be a list of objects with the structure
>
> Operator
> {
>   "id": "THE-FLINK-GENERATED-ID"
> }.
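(To make this concrete — a hypothetical sketch of how a client would enumerate operator ids from a plan response carrying the proposed "operators" field. All field names and ids below are assumptions from this discussion, not the final REST schema.)

```python
import json

# Hypothetical fragment of a /jobs/<jobid>/plan response after adding
# the proposed "operators" list to each vertex node.
plan = json.loads("""
{
  "nodes": [
    {"id": "vertex-1", "operators": [{"id": "op-a"}, {"id": "op-b"}]},
    {"id": "vertex-2", "operators": [{"id": "op-c"}]}
  ]
}
""")

# Enumerate (vertex id, operator id) pairs, which a client would use to
# query per-operator coordinator metrics.
pairs = [(v["id"], op["id"]) for v in plan["nodes"] for op in v["operators"]]
print(pairs)
# -> [('vertex-1', 'op-a'), ('vertex-1', 'op-b'), ('vertex-2', 'op-c')]
```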
>
> The JobManagerOperatorQueryScopeInfo has three fields: jobID, vertexID and
> > operatorName. So we should use the operator name in the API.
> > If you think we should use the operator id, there need be more changes
> > about it.
> >
> I think we should use operator id since it uniquely identifies an
> operator--on the contrary, the operator name does not (it may be empty or
> repeated between operators by the user). I actually had a question on that
> since you implemented the metric group. What's the reason we use operator
> name currently? Could it also use operator id so we can match against the
> id?
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/
> [2]
>
> https://github.com/apache/flink/blob/416cb7aaa02c176e01485ff11ab4269f76b5e9e2/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/jsonplan/JsonPlanGenerator.java#L53
>
> Best,
> Mason
>
>
> On Thu, Jan 11, 2024 at 10:54 PM Hang Ruan  wrote:
>
> > Hi, Mason.
> >
> > Thanks for driving this FLIP.
> >
> > The JobManagerOperatorQueryScopeInfo has three fields: jobID, vertexID
> and
> > operatorName. So we should use the operator name in the API.
> > If you think we should use the operator id, there need be more changes
> > about it.
> >
> > About the Xuyang's questions, we add both vertexID and operatorID
> > information because of the operator chain.
> > A operator chain has a vertexID and contains many different operators.
> The
> > operator information helps to distinguish them in the same operator
> chain.
> >
> > Best,
> > Hang
> >
> >
> > Xuyang  于2024年1月12日周五 10:21写道:
> >
> > > Hi, Mason.
> > > Thanks for driving this Flip. I think it's important for external
> system
> > > to be able to
> > > perceive the metric of the operator coordinator. +1 for it.
> > >
> > >
> > > I just have the following minor questions and am looking forward to
> your
> > > reply. Please forgive
> > > me if I have some misunderstandings.
> > >
> > >
> > > 1. IIRC, in a sense, operator ID and vertex ID are the same thing. The
> > > operator ID can
> > > be converted from the vertex ID[1]. Therefore, it is somewhat strange
> to
> > > have both vertex
> > > ID and operator ID in a single URL.
> > >
> > >
> > > 2. If I misunderstood the semantics of operator IDs here, then what is
> > the
> > > relationship
> > > between vertex ID and operator ID, and do we need a url like
> > > `/jobs//vertices//operators/`
> > > to list all operator ids under this vertices?
> > >
> > >
> > >
> > >
> > > [1]
> > >
>

Re: [VOTE] FLIP-377: Support fine-grained configuration to control filter push down for Table/SQL Sources

2024-01-16 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

Jiabao Sun  于2024年1月9日周二 19:39写道:

> Hi Devs,
>
> I'd like to start a vote on FLIP-377: Support fine-grained configuration
> to control filter push down for Table/SQL Sources[1]
> which has been discussed in this thread[2].
>
> The vote will be open for at least 72 hours unless there is an objection
> or not enough votes.
>
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768
> [2] https://lists.apache.org/thread/nvxx8sp9jm009yywm075hoffr632tm7j
>
> Best,
> Jiabao


Re: [VOTE] Release flink-connector-rabbitmq, v3.0.2 release candidate #1

2024-01-13 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk11
- Verified web PR

Best,
Hang


Jiabao Sun  于2024年1月13日周六 16:51写道:

> +1 (non-binding)
>
> - Validated hashes
> - Verified signature
> - Verified tags
> - Verified Lisence
> - Reviewed web pr
>
> Best,
> Jiabao
>
>
> > 2024年1月12日 20:50,Martijn Visser  写道:
> >
> > Hi everyone,
> > Please review and vote on the release candidate #1 for the version
> > 3.0.2, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x.
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2], which are signed with the key with fingerprint
> > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.0.2-rc1 [5],
> > * website pull request listing the new release [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Release Manager
> >
> > [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353145
> > [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-rabbitmq-3.0.2-rc1
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1697
> > [5]
> https://github.com/apache/flink-connector-rabbitmq/releases/tag/v3.0.2-rc1
> > [6] https://github.com/apache/flink-web/pull/712
>
>


Re: [VOTE] Release flink-connector-hbase v3.0.1, release candidate #2

2024-01-12 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk11
- Verified web PR

Best,
Hang

Martijn Visser  于2024年1月12日周五 20:30写道:

> Hi everyone,
> Please review and vote on the release candidate #2 for the
> flink-connector-hbase version
> 3.0.1, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This version is compatible with Flink 1.16.x, 1.17.x and 1.18.x
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2], which are signed with the key with fingerprint
> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.0.1-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1] https://issues.apache.org/jira/projects/FLINK/versions/12353603
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hbase-3.0.1-rc2
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1696/
> [5]
> https://github.com/apache/flink-connector-hbase/releases/tag/v3.0.1-rc2
> [6] https://github.com/apache/flink-web/pull/708
>


Re: [DISCUSS] FLIP-417: Expose JobManagerOperatorMetrics via REST API

2024-01-11 Thread Hang Ruan
Hi, Mason.

Thanks for driving this FLIP.

The JobManagerOperatorQueryScopeInfo has three fields: jobID, vertexID and
operatorName. So we should use the operator name in the API.
If you think we should use the operator id, there need to be more changes
about it.

About Xuyang's questions: we add both vertexID and operatorID
information because of the operator chain.
An operator chain has a vertexID and contains many different operators. The
operator information helps to distinguish them in the same operator chain.

Best,
Hang


Xuyang  于2024年1月12日周五 10:21写道:

> Hi, Mason.
> Thanks for driving this FLIP. I think it's important for external systems
> to be able to
> observe the metrics of the operator coordinator. +1 for it.
>
>
> I just have the following minor questions and am looking forward to your
> reply. Please forgive
> me if I have some misunderstandings.
>
>
> 1. IIRC, in a sense, operator ID and vertex ID are the same thing. The
> operator ID can
> be converted from the vertex ID[1]. Therefore, it is somewhat strange to
> have both vertex
> ID and operator ID in a single URL.
>
>
> 2. If I misunderstood the semantics of operator IDs here, then what is the
> relationship
> between vertex ID and operator ID, and do we need a url like
> `/jobs//vertices//operators/`
> to list all operator ids under this vertices?
>
>
>
>
> [1]
> https://github.com/apache/flink/blob/7bebd2d9fac517c28afc24c0c034d77cfe2b43a6/flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/OperatorID.java#L40C27-L40C27
>
> --
>
> Best!
> Xuyang
>
>
>
>
>
> At 2024-01-12 04:20:03, "Mason Chen"  wrote:
> >Hi Devs,
> >
> >I'm opening this thread to discuss a short FLIP for exposing
> >JobManagerOperatorMetrics via REST API [1].
> >
> >The current set of REST APIs make it impossible to query coordinator
> >metrics. This FLIP proposes a new REST API to query the
> >JobManagerOperatorMetrics.
> >
> >[1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-417%3A+Expose+JobManagerOperatorMetrics+via+REST+API
> >
> >Best,
> >Mason
>


Re: [VOTE] Release flink-connector-hive, release candidate #1

2024-01-10 Thread Hang Ruan
Hi, Sergey.

Thanks for the quick reply.

I tried to package it on another PC with JDK 8 and it succeeded. Please ignore
my earlier report; it seems there were some errors in my environment.

Best,
Hang

Sergey Nuyanzin wrote on Thu, Jan 11, 2024 at 14:31:

> Hi Hang
>
> thanks for checking
> yes, it could be packaged with jdk8, moreover jdk8 is checked in ci
> for instance here ci for the commit tagged with v3.0.0-rc1 [1]
>
> the strange thing in the output that you've provided is
> >org.apache.flink:flink-connector-hive_2.12:jar:3.0.0: Could not find
> > artifact jdk.tools:jdk.tools:jar:1.8 at specified path /Library/Internet
> > Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/../lib/tools.jar
>
> there are no such dependencies in poms,
> could it happen that there is some specific configuration on the machine
> you used for that?
> Can you please check it on another setup?
>
>
> [1] https://github.com/apache/flink-connector-hive/actions/runs/7479158667
>
>
> On Thu, Jan 11, 2024 at 4:44 AM Hang Ruan  wrote:
>
> > Hi, Sergey Nuyanzin.
> >
> > Thanks for driving this.
> >
> > I tried to package the source with JDK 8 and it caused the following
> > error.
> >
> > [INFO] ------------------------------------------------------------------------
> > [INFO] BUILD FAILURE
> > [INFO] ------------------------------------------------------------------------
> > [INFO] Total time:  4.621 s
> > [INFO] Finished at: 2024-01-11T11:34:30+08:00
> > [INFO] ------------------------------------------------------------------------
> > [ERROR] Failed to execute goal on project flink-connector-hive_2.12:
> Could
> > not resolve dependencies for project
> > org.apache.flink:flink-connector-hive_2.12:jar:3.0.0: Could not find
> > artifact jdk.tools:jdk.tools:jar:1.8 at specified path /Library/Internet
> > Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/../lib/tools.jar -> [Help
> 1]
> > [ERROR]
> > [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e
> > switch.
> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > [ERROR]
> > [ERROR] For more information about the errors and possible solutions,
> > please read the following articles:
> > [ERROR] [Help 1]
> >
> >
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> > [ERROR]
> > [ERROR] After correcting the problems, you can resume the build with the
> > command
> > [ERROR]   mvn  -rf :flink-connector-hive_2.12
> >
> > I see that the 'Building the Apache Flink Hive Connector from Source'
> > section in the README requires Java 11. I am not sure whether this could
> > be treated as an error.
> > Does flink-connector-hive currently support being packaged with JDK 8?
> >
> > Best,
> > Hang
> >
> > Jiabao Sun wrote on Thu, Jan 11, 2024 at 11:35:
> >
> > > +1 (non-binding)
> > >
> > > - Validated checksum hash
> > > - Verified signature
> > > - Verified web PR
> > > - Verified tags
> > >
> > > Best,
> > > Jiabao
> > >
> > >
> > > > On Jan 11, 2024, at 11:25, Hang Ruan wrote:
> > > >
> > > > Sorry, I made a mistake. I built the source with Maven and JDK 11.
> > > >
> > > > Best,
> > > > Hang
> > > >
> > > > Hang Ruan wrote on Thu, Jan 11, 2024 at 11:13:
> > > >
> > > >> +1 (non-binding)
> > > >>
> > > >> - Validated checksum hash
> > > >> - Verified signature
> > > >> - Verified that no binaries exist in the source archive
> > > >> - Build the source with Maven and jdk8
> > > >> - Verified web PR
> > > >> - Verified that the flink-connector-base is not packaged in hive
> > > connector
> > > >>
> > > >> Best,
> > > >> Hang
> > > >>
> > > >> Sergey Nuyanzin wrote on Thu, Jan 11, 2024 at 06:19:
> > > >>
> > > >>> Hi everyone,
> > > >>> Please review and vote on the release candidate #1 for the version
> > > 3.0.0,
> > > >>> as follows:
> > > >>> [ ] +1, Approve the release
> > > >>> [ ] -1, Do not approve the release (please provide specific
> comments)
> > > >>>
> > > >>> This version is compatible with Flink 1.18.x
> > > >>>
> > > >>> The complete staging area is available for your review, which
> > inc

Re: [VOTE] Release flink-connector-hive, release candidate #1

2024-01-10 Thread Hang Ruan
Hi, Sergey Nuyanzin.

Thanks for driving this.

I tried to package the source with JDK 8 and it caused the following error.

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  4.621 s
[INFO] Finished at: 2024-01-11T11:34:30+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project flink-connector-hive_2.12: Could
not resolve dependencies for project
org.apache.flink:flink-connector-hive_2.12:jar:3.0.0: Could not find
artifact jdk.tools:jdk.tools:jar:1.8 at specified path /Library/Internet
Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/../lib/tools.jar -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the
command
[ERROR]   mvn  -rf :flink-connector-hive_2.12

I see that the 'Building the Apache Flink Hive Connector from Source' section
in the README requires Java 11. I am not sure whether this should be treated
as an error.
Does flink-connector-hive currently support being packaged with JDK 8?

Best,
Hang

Jiabao Sun wrote on Thu, Jan 11, 2024 at 11:35:

> +1 (non-binding)
>
> - Validated checksum hash
> - Verified signature
> - Verified web PR
> - Verified tags
>
> Best,
> Jiabao
>
>
> > On Jan 11, 2024, at 11:25, Hang Ruan wrote:
> >
> > Sorry, I made a mistake. I built the source with Maven and JDK 11.
> >
> > Best,
> > Hang
> >
> > Hang Ruan wrote on Thu, Jan 11, 2024 at 11:13:
> >
> >> +1 (non-binding)
> >>
> >> - Validated checksum hash
> >> - Verified signature
> >> - Verified that no binaries exist in the source archive
> >> - Build the source with Maven and jdk8
> >> - Verified web PR
> >> - Verified that the flink-connector-base is not packaged in hive
> connector
> >>
> >> Best,
> >> Hang
> >>
> >> Sergey Nuyanzin wrote on Thu, Jan 11, 2024 at 06:19:
> >>
> >>> Hi everyone,
> >>> Please review and vote on the release candidate #1 for the version
> 3.0.0,
> >>> as follows:
> >>> [ ] +1, Approve the release
> >>> [ ] -1, Do not approve the release (please provide specific comments)
> >>>
> >>> This version is compatible with Flink 1.18.x
> >>>
> >>> The complete staging area is available for your review, which includes:
> >>> * JIRA release notes [1],
> >>> * the official Apache source release to be deployed to dist.apache.org
> >>> [2],
> >>> which are signed with the key with fingerprint F752 9FAE 2481 1A5C 0DF3
> >>> CA74 1596 BBF0 7268 35D8 [3],
> >>> * all artifacts to be deployed to the Maven Central Repository [4],
> >>> * source code tag v3.0.0-rc1 [5],
> >>> * website pull request listing the new release [6].
> >>>
> >>> The vote will be open for at least 72 hours. It is adopted by majority
> >>> approval, with at least 3 PMC affirmative votes.
> >>>
> >>> Thanks,
> >>> Release Manager
> >>>
> >>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12352591
> >>> [2]
> >>>
> >>>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hive-3.0.0-rc1
> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>> [4]
> >>>
> https://repository.apache.org/content/repositories/orgapacheflink-1694/
> >>> [5]
> >>> https://github.com/apache/flink-connector-hive/releases/tag/v3.0.0-rc1
> >>> [6] https://github.com/apache/flink-web/pull/709
> >>>
> >>
>
>


Re: [VOTE] Release flink-connector-hive, release candidate #1

2024-01-10 Thread Hang Ruan
Sorry, I made a mistake. I built the source with Maven and JDK 11.

Best,
Hang

Hang Ruan wrote on Thu, Jan 11, 2024 at 11:13:

> +1 (non-binding)
>
> - Validated checksum hash
> - Verified signature
> - Verified that no binaries exist in the source archive
> - Build the source with Maven and jdk8
> - Verified web PR
> - Verified that the flink-connector-base is not packaged in hive connector
>
> Best,
> Hang
>
> Sergey Nuyanzin wrote on Thu, Jan 11, 2024 at 06:19:
>
>> Hi everyone,
>> Please review and vote on the release candidate #1 for the version 3.0.0,
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>>
>> This version is compatible with Flink 1.18.x
>>
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release to be deployed to dist.apache.org
>> [2],
>> which are signed with the key with fingerprint F752 9FAE 2481 1A5C 0DF3
>>  CA74 1596 BBF0 7268 35D8 [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag v3.0.0-rc1 [5],
>> * website pull request listing the new release [6].
>>
>> The vote will be open for at least 72 hours. It is adopted by majority
>> approval, with at least 3 PMC affirmative votes.
>>
>> Thanks,
>> Release Manager
>>
>> [1] https://issues.apache.org/jira/projects/FLINK/versions/12352591
>> [2]
>>
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hive-3.0.0-rc1
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4]
>> https://repository.apache.org/content/repositories/orgapacheflink-1694/
>> [5]
>> https://github.com/apache/flink-connector-hive/releases/tag/v3.0.0-rc1
>> [6] https://github.com/apache/flink-web/pull/709
>>
>


Re: [VOTE] Release flink-connector-hive, release candidate #1

2024-01-10 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with Maven and jdk8
- Verified web PR
- Verified that the flink-connector-base is not packaged in hive connector

Best,
Hang
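
For anyone reproducing the "Validated checksum hash" step in release-verification checklists like the one above, here is a minimal sketch of the idea. The file name and bytes below are placeholders invented for the example; in practice one compares the downloaded artifact against the published `.sha512` file (e.g. with `sha512sum -c`):

```python
# Sketch of release checksum validation; the demo file below is a placeholder,
# not a real Flink release artifact.
import hashlib
import os
import tempfile


def sha512_of(path: str) -> str:
    """Compute the SHA-512 digest of a file, streaming it in chunks."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def validate(path: str, expected_hex: str) -> bool:
    """Compare the computed digest with the published one (case-insensitive)."""
    return sha512_of(path) == expected_hex.lower()


# Demo with a throwaway file standing in for a downloaded release tarball:
demo = os.path.join(tempfile.mkdtemp(), "artifact.tgz")
with open(demo, "wb") as f:
    f.write(b"example bytes")
assert validate(demo, hashlib.sha512(b"example bytes").hexdigest())
```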

Sergey Nuyanzin wrote on Thu, Jan 11, 2024 at 06:19:

> Hi everyone,
> Please review and vote on the release candidate #1 for the version 3.0.0,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This version is compatible with Flink 1.18.x
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint F752 9FAE 2481 1A5C 0DF3
>  CA74 1596 BBF0 7268 35D8 [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.0.0-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1] https://issues.apache.org/jira/projects/FLINK/versions/12352591
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hive-3.0.0-rc1
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1694/
> [5] https://github.com/apache/flink-connector-hive/releases/tag/v3.0.0-rc1
> [6] https://github.com/apache/flink-web/pull/709
>


Re: Re: [VOTE] FLIP-387: Support named parameters for functions and call procedures

2024-01-09 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

Jingsong Li wrote on Wed, Jan 10, 2024 at 12:03:

> +1
>
> On Wed, Jan 10, 2024 at 11:24 AM Xuyang  wrote:
> >
> > +1(non-binding)--
> >
> > Best!
> > Xuyang
> >
> >
> >
> >
> >
> > 在 2024-01-08 00:34:55,"Feng Jin"  写道:
> > >Hi Alexey
> > >
> > >Thank you for the reminder, the link has been updated.
> > >
> > >Best,
> > >Feng Jin
> > >
> > >On Sat, Jan 6, 2024 at 12:55 AM Alexey Leonov-Vendrovskiy <
> > >vendrov...@gmail.com> wrote:
> > >
> > >> Thanks for starting the vote!
> > >> Do you mind adding a link from the FLIP to this thread?
> > >>
> > >> Thanks,
> > >> Alexey
> > >>
> > >> On Thu, Jan 4, 2024 at 6:48 PM Feng Jin 
> wrote:
> > >>
> > >> > Hi everyone
> > >> >
> > >> > Thanks for all the feedback about the FLIP-387: Support named
> parameters
> > >> > for functions and call procedures [1] [2] .
> > >> >
> > >> > I'd like to start a vote for it. The vote will be open for at least
> 72
> > >> > hours(excluding weekends,until Jan 10, 12:00AM GMT) unless there is
> an
> > >> > objection or an insufficient number of votes.
> > >> >
> > >> >
> > >> >
> > >> > [1]
> > >> >
> > >> >
> > >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-387%3A+Support+named+parameters+for+functions+and+call+procedures
> > >> > [2]
> https://lists.apache.org/thread/bto7mpjvcx7d7k86owb00dwrm65jx8cn
> > >> >
> > >> >
> > >> > Best,
> > >> > Feng Jin
> > >> >
> > >>
>


Re: [VOTE] Accept Flink CDC into Apache Flink

2024-01-09 Thread Hang Ruan
+1 (non-binding)

Best,
Hang

gongzhongqiang wrote on Tue, Jan 9, 2024 at 16:25:

> +1 non-binding
>
> Best,
> Zhongqiang
>
> Leonard Xu wrote on Tue, Jan 9, 2024 at 15:05:
>
> > Hello all,
> >
> > This is the official vote on whether to accept the Flink CDC code
> contribution
> >  to Apache Flink.
> >
> > The current Flink CDC code, documentation, and website can be
> > found here:
> > code: https://github.com/ververica/flink-cdc-connectors <
> > https://github.com/ververica/flink-cdc-connectors>
> > docs: https://ververica.github.io/flink-cdc-connectors/ <
> > https://ververica.github.io/flink-cdc-connectors/>
> >
> > This vote should capture whether the Apache Flink community is interested
> > in accepting, maintaining, and evolving Flink CDC.
> >
> > Regarding my original proposal[1] in the dev mailing list, I firmly
> believe
> > that this initiative aligns perfectly with Flink. For the Flink
> community,
> > it represents an opportunity to bolster Flink's competitive edge in
> > streaming
> > data integration, fostering the robust growth and prosperity of the
> Apache
> > Flink
> > ecosystem. For the Flink CDC project, becoming a sub-project of Apache
> > Flink
> > means becoming an integral part of a neutral open-source community,
> > capable of
> > attracting a more diverse pool of contributors.
> >
> > All Flink CDC maintainers are dedicated to continuously contributing to
> > achieve
> > seamless integration with Flink. Additionally, PMC members like Jark,
> > Qingsheng,
> > and I are willing to facilitate the expansion of contributors and
> > committers to
> > effectively maintain this new sub-project.
> >
> > This is a "Adoption of a new Codebase" vote as per the Flink bylaws [2].
> > Only PMC votes are binding. The vote will be open at least 7 days
> > (excluding weekends), meaning until Thursday January 18 12:00 UTC, or
> > until we
> > achieve the 2/3rd majority. We will follow the instructions in the Flink
> > Bylaws
> > in the case of insufficient active binding voters:
> >
> > > 1. Wait until the minimum length of the voting passes.
> > > 2. Publicly reach out via personal email to the remaining binding
> voters
> > in the
> > voting mail thread for at least 2 attempts with at least 7 days between
> > two attempts.
> > > 3. If the binding voter being contacted still failed to respond after
> > all the attempts,
> > the binding voter will be considered as inactive for the purpose of this
> > particular voting.
> >
> > Welcome voting !
> >
> > Best,
> > Leonard
> > [1] https://lists.apache.org/thread/o7klnbsotmmql999bnwmdgo56b6kxx9l
> > [2]
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120731026
>


Re: [VOTE] FLIP-405: Migrate string configuration key to ConfigOption

2024-01-07 Thread Hang Ruan
+1(non-binding)

Best,
Hang

Rui Fan <1996fan...@gmail.com> wrote on Mon, Jan 8, 2024 at 13:04:

> +1(binding)
>
> Best,
> Rui
>
> On Mon, Jan 8, 2024 at 1:00 PM Xuannan Su  wrote:
>
> > Hi everyone,
> >
> > Thanks for all the feedback about the FLIP-405: Migrate string
> > configuration key to ConfigOption [1] [2].
> >
> > I'd like to start a vote for it. The vote will be open for at least 72
> > hours(excluding weekends,until Jan 11, 12:00AM GMT) unless there is an
> > objection or an insufficient number of votes.
> >
> >
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-405%3A+Migrate+string+configuration+key+to+ConfigOption
> > [2] https://lists.apache.org/thread/zfw1b1g3679yn0ppjbsokfrsx9k7ybg0
> >
> >
> > Best,
> > Xuannan
> >
>


Re: [ANNOUNCE] New Apache Flink Committer - Alexander Fedulov

2024-01-02 Thread Hang Ruan
Congratulations, Alex!

Best,
Hang

Samrat Deb wrote on Wed, Jan 3, 2024 at 14:18:

> Congratulations Alex
>


Re: [DISCUSS] FLIP-405: Migrate string configuration key to ConfigOption

2023-12-26 Thread Hang Ruan
Hi, Rui Fan.

Thanks for this FLIP.

I think a better key for LOCAL_NUMBER_TASK_MANAGER would be
'minicluster.number-of-taskmanagers' or 'minicluster.taskmanager-number'
rather than 'minicluster.number-taskmanager'.

Best,
Hang
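
As a hedged illustration of why typed options are preferred over raw string keys — the motivation behind this FLIP — here is a simplified, invented model. It is not Flink's actual Configuration/ConfigOption API; it only shows that a typed option carries its key, type, and default in one place, so call sites cannot disagree about them. The key string below is the one suggested in this thread and is used purely as an example:

```python
# Simplified, invented model of a typed config option; not Flink's real API.
from dataclasses import dataclass
from typing import Any, Dict, Generic, TypeVar

T = TypeVar("T")


@dataclass(frozen=True)
class ConfigOption(Generic[T]):
    key: str
    default: T


class Configuration:
    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def set(self, option: ConfigOption, value: Any) -> "Configuration":
        self._data[option.key] = value
        return self

    def get(self, option: ConfigOption) -> Any:
        # The default travels with the option, unlike a bare string key.
        return self._data.get(option.key, option.default)


LOCAL_NUMBER_TASK_MANAGER = ConfigOption(
    key="minicluster.number-of-taskmanagers", default=1
)

conf = Configuration()
assert conf.get(LOCAL_NUMBER_TASK_MANAGER) == 1  # falls back to the default
conf.set(LOCAL_NUMBER_TASK_MANAGER, 4)
assert conf.get(LOCAL_NUMBER_TASK_MANAGER) == 4
```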

Xuannan Su wrote on Wed, Dec 27, 2023 at 12:40:

> Hi Xintong and Rui,
>
> Thanks for the quick feedback and the suggestions.
>
> > 1. I think the default value for `TASK_MANAGER_LOG_PATH_KEY` should be
> > "no default".
>
> I have considered both ways of describing the default value. However,
> I found out that some of the configurations, such as `web.tmpdir`, put
> `System.getProperty()` in the default value [1]. Some are putting the
> description in the default value column[2]. So I just picked the first
> one. I am fine with either way, so long as they are consistent. WDYT?
>
> > 3. Simply saying "getting / setting value with string key is discouraged"
> > in JavaDoc of get/setString is IMHO a bit confusing. People may have the
> > question why would we keep the discouraged interfaces at all. I would
> > suggest the following:
> > ```
> > We encourage users and developers to always use ConfigOption for getting
> /
> > setting the configurations if possible, for its rich description, type,
> > default-value and other supports. The string-key-based getter / setter
> > should only be used when ConfigOption is not applicable, e.g., the key is
> > programmatically generated in runtime.
> > ```
>
> The suggested comment looks good to me. Thanks for the suggestion. I
> will update the comment in the FLIP.
>
> > 2. So I wonder if we can simply mark them as deprecated and remove in
> 2.0.
>
> After some investigation, it turns out those options of the input/output
> formats are only publicly exposed in the DataSet docs [3], which are
> deprecated. Thus, marking them as deprecated and removing them in Flink 2.0
> looks fine to me.
>
>
> @Rui
>
> > Configuration has a `public  T get(ConfigOption option)` method.
> > Could we remove all `Xxx getXxx(ConfigOption configOption)` methods?
>
> +1 Only keep the get(ConfigOption option),
> getOptional(ConfigOption option), and set(ConfigOption option, T
> value).
>
> Best,
> Xuannan
>
> [1]
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#web-tmpdir
> [2]
> https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#kubernetes-container-image-ref
> [3]
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/dataset/overview/#data-sources
>
>
>
>
> On Tue, Dec 26, 2023 at 8:47 PM Xintong Song 
> wrote:
> >
> > >
> > > Configuration has a `public  T get(ConfigOption option)` method.
> > > Could we remove all `Xxx getXxx(ConfigOption configOption)`
> methods?
> >
> >
> >
> > Note: all `public void setXxx(ConfigOption key, Xxx value)` methods
> > > can be replaced with `public  Configuration set(ConfigOption
> option,
> > > T value)` as well.
> >
> >
> > +1
> >
> >
> > Best,
> >
> > Xintong
> >
> >
> >
> > On Tue, Dec 26, 2023 at 8:44 PM Xintong Song 
> wrote:
> >
> > > These features don't have a public option, but they work. I'm not sure
> > >> whether these features are used by some advanced users.
> > >> Actually, I think some of them are valuable! For example:
> > >>
> > >> - ConfigConstants.YARN_CONTAINER_START_COMMAND_TEMPLATE
> > >>   allows users to define the start command of the yarn container.
> > >> - FileInputFormat.ENUMERATE_NESTED_FILES_FLAG allows
> > >>   flink job reads all files under the directory even if it has nested
> > >> directories.
> > >>
> > >> This FLIP focuses on the refactor option, I'm afraid these features
> are
> > >> used
> > >> in some production and removing these features will affect some flink
> > >> jobs.
> > >> So I prefer to keep these features, WDTY?
> > >>
> > >
> > > First of all, I don't think we should support any knobs that users can
> > > only learn how to use from reading Flink's internal codes. From this
> > > perspective, for existing string-keyed knobs that are not mentioned in
> any
> > > public documentation, yes we can argue that they are functioning, but
> we
> > > can also argue that they are not really exposed to users. That means
> > > migrating them to ConfigOption is not a pure refactor, but would make
> > > something that used to be hidden from users now exposed to users. For
> such
> > > options, I personally would lean toward not exposing them. If we
> consider
> > > them as already exposed, then process-wise there's no problem in
> > > deprecating some infrequently-used options and removing them in a major
> > > version bump, and if they are proved needed later we can add them back
> > > anytime. On the other hand, if we consider them as not yet exposed,
> then
> > > removing them later would be a breaking change.
> > >
> > >
> > > Secondly, I don't really come up with any cases where users need to
> tune
> > > these knobs. E.g., why would we allow users to customize the yarn
> container
> > > start command while we already provide `env.java.opts`? And 

Re: Re: [VOTE] FLIP-372: Allow TwoPhaseCommittingSink WithPreCommitTopology to alter the type of the Committable

2023-12-20 Thread Hang Ruan
Thanks for the FLIP.

+1 (non-binding)

Best,
Hang

Jiabao Sun wrote on Thu, Dec 21, 2023 at 11:48:

> Thanks Peter for driving this.
>
> +1 (non-binding)
>
> Best,
> Jiabao
>
>
> On 2023/12/18 12:06:05 Gyula Fóra wrote:
> > +1 (binding)
> >
> > Gyula
> >
> > On Mon, 18 Dec 2023 at 13:04, Márton Balassi 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > On Mon 18. Dec 2023 at 09:34, Péter Váry 
> > > wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > Since there were no further comments on the discussion thread [1], I
> > > would
> > > > like to start the vote for FLIP-372 [2].
> > > >
> > > > The FLIP started as a small new feature, but in the discussion
> thread and
> > > > in a similar parallel thread [3] we opted for a somewhat bigger
> change in
> > > > the Sink V2 API.
> > > >
> > > > Please read the FLIP and cast your vote.
> > > >
> > > > The vote will remain open for at least 72 hours and only concluded if
> > > there
> > > > are no objections and enough (i.e. at least 3) binding votes.
> > > >
> > > > Thanks,
> > > > Peter
> > > >
> > > > [1] -
> https://lists.apache.org/thread/344pzbrqbbb4w0sfj67km25msp7hxlyd
> > > > [2] -
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-372%3A+Allow+TwoPhaseCommittingSink+WithPreCommitTopology+to+alter+the+type+of+the+Committable
> > > > [3] -
> https://lists.apache.org/thread/h6nkgth838dlh5s90sd95zd6hlsxwx57
> > > >
> > >
> >


Re: [VOTE] Release 1.18.1, release candidate #2

2023-12-20 Thread Hang Ruan
+1 (non-binding)

- Validated hashes
- Verified signature
- Build the source with Maven
- Test with the kafka connector 3.0.2: read and write records from kafka in
sql client
- Verified web PRs

Best,
Hang

Jiabao Sun wrote on Wed, Dec 20, 2023 at 16:57:

> Thanks Jing for driving this release.
>
> +1 (non-binding)
>
> - Validated hashes
> - Verified signature
> - Checked the tag
> - Build the source with Maven
> - Verified web PRs
>
> Best,
> Jiabao
>
>
> > On Dec 20, 2023, at 07:38, Jing Ge wrote:
> >
> > Hi everyone,
> >
> > The release candidate #1 has been skipped. Please review and vote on the
> > release candidate #2 for the version 1.18.1,
> >
> > as follows:
> >
> > [ ] +1, Approve the release
> >
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> >
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint 96AE0E32CBE6E0753CE6 [3],
> >
> > * all artifacts to be deployed to the Maven Central Repository [4],
> >
> > * source code tag "release-1.18.1-rc2" [5],
> >
> > * website pull request listing the new release and adding announcement
> blog
> > post [6].
> >
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> >
> > [1]
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353640
> >
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.1-rc2/
> >
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1689
> >
> > [5] https://github.com/apache/flink/releases/tag/release-1.18.1-rc2
> >
> > [6] https://github.com/apache/flink-web/pull/706
> >
> > Thanks,
> > Release Manager
>
>


Re: 退订 (Unsubscribe)

2023-12-18 Thread Hang Ruan
Please send an email to user-unsubscr...@flink.apache.org if you want to
unsubscribe from u...@flink.apache.org; you can refer to [1][2] for more
details on managing your mailing-list subscriptions.

Best,
Hang

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2] https://flink.apache.org/community.html#mailing-lists

唐大彪 wrote on Mon, Dec 18, 2023 at 23:44:

> Unsubscribe (退订)
>


Re: Question on lookup joins

2023-12-17 Thread Hang Ruan
Hi, David.

FLIP-377 [1] covers this part. You could take a look at it.

Best,
Hang

[1]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768


Hang Ruan wrote on Sun, Dec 17, 2023 at 20:56:

> Hi, David.
>
> I think you are right that the NULL-padded rows should not be returned if
> the filter pushdown is disabled.
>
> Maybe you should run EXPLAIN on this SQL to make sure this filter is not
> pushed down to the lookup source.
>
> I see that the configuration
> 'table.optimizer.source.predicate-pushdown-enabled' relies on the class
> FilterableTableSource, which is deprecated.
> I am not sure whether this configuration is still honored by the JDBC
> connector, which uses SupportsFilterPushDown.
>
> Maybe the JDBC connector should read this configuration and return an
> empty 'acceptedFilters' list from the 'applyFilters' method.
>
> Best,
> Hang
>
> David Radley wrote on Sat, Dec 16, 2023 at 01:47:
>
>> Hi ,
>> I am working on FLINK-33365, which relates to JDBC predicate pushdown. I
>> want to ensure that the same results occur with predicate pushdown as
>> without. So I am asking this question outside the pr / issue.
>>
>> I notice the following behaviour for lookup joins without predicate
>> pushdown. I was not expecting all the <NULL>s when there is not a
>> matching join key.  ’a’ is a table in Paimon and ‘db’ is a relational
>> database.
>>
>>
>>
>> Flink SQL> select * from a;
>>
>> +----+-------------+-------------------------+
>> | op |          ip |                proctime |
>> +----+-------------+-------------------------+
>> | +I | 10.10.10.10 | 2023-12-15 17:36:10.028 |
>> | +I | 20.20.20.20 | 2023-12-15 17:36:10.030 |
>> | +I | 30.30.30.30 | 2023-12-15 17:36:10.031 |
>>
>> ^CQuery terminated, received a total of 3 rows
>>
>>
>>
>> Flink SQL> select * from  db_catalog.menagerie.e;
>>
>>
>> +----+-------------+------+-----+--------+--------+
>> | op |          ip | type | age | height | weight |
>> +----+-------------+------+-----+--------+--------+
>> | +I | 10.10.10.10 |    1 |  30 |    100 |    100 |
>> | +I | 10.10.10.10 |    2 |  40 |     90 |    110 |
>> | +I | 10.10.10.10 |    2 |  50 |     80 |    120 |
>> | +I | 10.10.10.10 |    3 |  50 |     70 |     40 |
>> | +I | 20.20.20.20 |    3 |  30 |     80 |     90 |
>> +----+-------------+------+-----+--------+--------+
>>
>> Received a total of 5 rows
>>
>>
>>
>> Flink SQL> set table.optimizer.source.predicate-pushdown-enabled=false;
>>
>> [INFO] Execute statement succeed.
>>
>>
>>
>> Flink SQL> SELECT * FROM a left join mariadb_catalog.menagerie.e FOR
>> SYSTEM_TIME AS OF a.proctime on e.type = 2 and a.ip = e.ip;
>>
>>
>> +----+-------------+-------------------------+-------------+--------+--------+--------+--------+
>> | op |          ip |                proctime |         ip0 |   type |    age | height | weight |
>> +----+-------------+-------------------------+-------------+--------+--------+--------+--------+
>> | +I | 10.10.10.10 | 2023-12-15 17:38:05.169 | 10.10.10.10 |      2 |     40 |     90 |    110 |
>> | +I | 10.10.10.10 | 2023-12-15 17:38:05.169 | 10.10.10.10 |      2 |     50 |     80 |    120 |
>> | +I | 20.20.20.20 | 2023-12-15 17:38:05.170 |      <NULL> | <NULL> | <NULL> | <NULL> | <NULL> |
>> | +I | 30.30.30.30 | 2023-12-15 17:38:05.172 |      <NULL> | <NULL> | <NULL> | <NULL> | <NULL> |
>>
>> Unless otherwise stated above:
>>
>> IBM United Kingdom Limited
>> Registered in England and Wales with number 741598
>> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>>
>


Re: Question on lookup joins

2023-12-17 Thread Hang Ruan
Hi, David.

I think you are right that the NULL-padded rows should not be returned if
the filter pushdown is disabled.

Maybe you should run EXPLAIN on this SQL to make sure this filter is not
pushed down to the lookup source.

I see that the configuration 'table.optimizer.source.predicate-pushdown-enabled'
relies on the class FilterableTableSource, which is deprecated.
I am not sure whether this configuration is still honored by the JDBC
connector, which uses SupportsFilterPushDown.

Maybe the JDBC connector should read this configuration and return an
empty 'acceptedFilters' list from the 'applyFilters' method.

Best,
Hang
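
As a hedged sketch of the suggestion above — a simplified stand-in, not the actual SupportsFilterPushDown interface — the idea is that when the pushdown switch is off, the source accepts no filters, so the planner keeps applying all of them after the source returns rows. The function name and string-based filters below are invented for illustration:

```python
# Simplified stand-in for a filter-pushdown negotiation; not Flink's real API.
from typing import List, Tuple


def apply_filters(filters: List[str], pushdown_enabled: bool) -> Tuple[List[str], List[str]]:
    """Return (accepted_filters, remaining_filters).

    Accepted filters are evaluated inside the source; remaining ones must
    still be applied by the engine after the (lookup) source returns rows.
    """
    if not pushdown_enabled:
        return [], list(filters)  # empty acceptedFilters, as suggested above
    return list(filters), []      # naive source that accepts everything


accepted, remaining = apply_filters(["e.type = 2"], pushdown_enabled=False)
assert accepted == [] and remaining == ["e.type = 2"]

accepted, remaining = apply_filters(["e.type = 2"], pushdown_enabled=True)
assert accepted == ["e.type = 2"] and remaining == []
```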

David Radley wrote on Sat, Dec 16, 2023 at 01:47:

> Hi ,
> I am working on FLINK-33365, which relates to JDBC predicate pushdown. I
> want to ensure that the same results occur with predicate pushdown as
> without. So I am asking this question outside the pr / issue.
>
> I notice the following behaviour for lookup joins without predicate
> pushdown. I was not expecting all the <NULL>s when there is not a
> matching join key.  ’a’ is a table in Paimon and ‘db’ is a relational
> database.
>
>
>
> Flink SQL> select * from a;
>
> +----+-------------+-------------------------+
> | op |          ip |                proctime |
> +----+-------------+-------------------------+
> | +I | 10.10.10.10 | 2023-12-15 17:36:10.028 |
> | +I | 20.20.20.20 | 2023-12-15 17:36:10.030 |
> | +I | 30.30.30.30 | 2023-12-15 17:36:10.031 |
>
> ^CQuery terminated, received a total of 3 rows
>
>
>
> Flink SQL> select * from  db_catalog.menagerie.e;
>
>
> +----+-------------+------+-----+--------+--------+
> | op |          ip | type | age | height | weight |
> +----+-------------+------+-----+--------+--------+
> | +I | 10.10.10.10 |    1 |  30 |    100 |    100 |
> | +I | 10.10.10.10 |    2 |  40 |     90 |    110 |
> | +I | 10.10.10.10 |    2 |  50 |     80 |    120 |
> | +I | 10.10.10.10 |    3 |  50 |     70 |     40 |
> | +I | 20.20.20.20 |    3 |  30 |     80 |     90 |
> +----+-------------+------+-----+--------+--------+
>
> Received a total of 5 rows
>
>
>
> Flink SQL> set table.optimizer.source.predicate-pushdown-enabled=false;
>
> [INFO] Execute statement succeed.
>
>
>
> Flink SQL> SELECT * FROM a left join mariadb_catalog.menagerie.e FOR
> SYSTEM_TIME AS OF a.proctime on e.type = 2 and a.ip = e.ip;
>
>
> +----+-------------+-------------------------+-------------+--------+--------+--------+--------+
> | op |          ip |                proctime |         ip0 |   type |    age | height | weight |
> +----+-------------+-------------------------+-------------+--------+--------+--------+--------+
> | +I | 10.10.10.10 | 2023-12-15 17:38:05.169 | 10.10.10.10 |      2 |     40 |     90 |    110 |
> | +I | 10.10.10.10 | 2023-12-15 17:38:05.169 | 10.10.10.10 |      2 |     50 |     80 |    120 |
> | +I | 20.20.20.20 | 2023-12-15 17:38:05.170 |      <NULL> | <NULL> | <NULL> | <NULL> | <NULL> |
> | +I | 30.30.30.30 | 2023-12-15 17:38:05.172 |      <NULL> | <NULL> | <NULL> | <NULL> | <NULL> |
>
> Unless otherwise stated above:
>
> IBM United Kingdom Limited
> Registered in England and Wales with number 741598
> Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
>


Re: [VOTE] Release flink-connector-pulsar 4.1.0, release candidate #1

2023-12-14 Thread Hang Ruan
+1 (non-binding)

- Validated checksum hash
- Verified signature
- Verified that no binaries exist in the source archive
- Build the source with jdk8
- Verified web PR
- Make sure flink-connector-base have the provided scope

Best,
Hang

tison  于2023年12月14日周四 11:51写道:

> Thanks Leonard for driving this release!
>
> +1 (non-binding)
>
> * Download link valid
> * Maven staging artifacts look good.
> * Checksum and gpg matches
> * LICENSE and NOTICE exist
> * Can build from source.
>
> Best,
> tison.
>
> Rui Fan <1996fan...@gmail.com> 于2023年12月14日周四 09:23写道:
> >
> > Thanks Leonard for driving this release!
> >
> > +1 (non-binding)
> >
> > - Validated checksum hash
> > - Verified signature
> > - Verified that no binaries exist in the source archive
> > - Build the source with Maven and jdk8
> > - Verified licenses
> > - Verified web PRs, left a minor comment
> >
> > Best,
> > Rui
> >
> > On Wed, Dec 13, 2023 at 7:15 PM Leonard Xu  wrote:
> >>
> >> Hey all,
> >>
> >> Please review and vote on the release candidate #1 for the version
> 4.1.0 of the Apache Flink Pulsar Connector as follows:
> >>
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific comments)
> >>
> >> The complete staging area is available for your review, which includes:
> >> * JIRA release notes [1],
> >> * The official Apache source release to be deployed to dist.apache.org
> [2], which are signed with the key with fingerprint
> >> 5B2F6608732389AEB67331F5B197E1F1108998AD [3],
> >> * All artifacts to be deployed to the Maven Central Repository [4],
> >> * Source code tag v4.1.0-rc1 [5],
> >> * Website pull request listing the new release [6].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
> >>
> >>
> >> Best,
> >> Leonard
> >>
> >> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353431
> >> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-pulsar-4.1.0-rc1/
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1688/
> >> [5] https://github.com/apache/flink-connector-pulsar/tree/v4.1.0-rc1
> >> [6] https://github.com/apache/flink-web/pull/703
>


Re: [PROPOSAL] Contribute Flink CDC Connectors project to Apache Flink

2023-12-07 Thread Hang Ruan
+1 for contributing CDC Connectors  to Apache Flink.

Best,
Hang

Yuxin Tan  于2023年12月7日周四 16:05写道:

> Cool, +1 for contributing CDC Connectors to Apache Flink.
>
> Best,
> Yuxin
>
>
> Jing Ge  于2023年12月7日周四 15:43写道:
>
> > Awesome! +1
> >
> > Best regards,
> > Jing
> >
> > On Thu, Dec 7, 2023 at 8:34 AM Sergey Nuyanzin 
> > wrote:
> >
> > > thanks for working on this and driving it
> > >
> > > +1
> > >
> > > On Thu, Dec 7, 2023 at 7:26 AM Feng Jin  wrote:
> > >
> > > > This is incredibly exciting news, a big +1 for this.
> > > >
> > > > Thank you for the fantastic work on Flink CDC. We have created
> > thousands
> > > of
> > > > real-time integration jobs using Flink CDC connectors.
> > > >
> > > >
> > > > Best,
> > > > Feng
> > > >
> > > > On Thu, Dec 7, 2023 at 1:45 PM gongzhongqiang <
> > gongzhongqi...@apache.org
> > > >
> > > > wrote:
> > > >
> > > > > It's very exciting to hear the news.
> > > > > +1 for adding CDC Connectors  to Apache Flink !
> > > > >
> > > > >
> > > > > Best,
> > > > > Zhongqiang
> > > > >
> > > > > Leonard Xu  于2023年12月7日周四 11:25写道:
> > > > >
> > > > > > Dear Flink devs,
> > > > > >
> > > > > >
> > > > > > As you may have heard, we at Alibaba (Ververica) are planning to
> > > donate
> > > > > CDC Connectors for the Apache Flink project
> > > > > > *[1]* to the Apache Flink community.
> > > > > >
> > > > > >
> > > > > >
> > > > > > CDC Connectors for Apache Flink comprise a collection of source
> > > > > connectors designed specifically for Apache Flink. These connectors
> > > > > > *[2]*
> > > > > >  enable the ingestion of changes from various databases using
> > Change
> > > > > Data Capture (CDC), most of these CDC connectors are powered by
> > > Debezium
> > > > > > *[3]*
> > > > > > . They support both the DataStream API and the Table/SQL API,
> > > > > facilitating the reading of database snapshots and continuous
> reading
> > > of
> > > > > transaction logs with exactly-once processing, even in the event of
> > > > > failures.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Additionally, in the latest version 3.0, we have introduced many
> > > > > long-awaited features. Starting from CDC version 3.0, we've built a
> > > > > Streaming ELT Framework available for streaming data integration.
> > This
> > > > > framework allows users to write their data synchronization logic
> in a
> > > > > simple YAML file, which will automatically be translated into a
> Flink
> > > > > DataStreaming job. It emphasizes optimizing the task submission
> > process
> > > > and
> > > > > offers advanced functionalities such as whole database
> > synchronization,
> > > > > merging sharded tables, and schema evolution
> > > > > > *[4]*.
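For illustration, a pipeline definition in the YAML format described in the quoted proposal above might look like this. The field names follow the Flink CDC 3.0 pipeline layout; the hostnames, credentials, and table patterns are placeholders:

```yaml
# Illustrative Flink CDC pipeline definition — all connection
# details below are placeholders.
source:
  type: mysql
  hostname: localhost
  port: 3306
  username: flinkuser
  password: flinkpw
  tables: app_db.\.*

sink:
  type: doris
  fenodes: 127.0.0.1:8030
  username: root
  password: ""

pipeline:
  name: Sync app_db to Doris
  parallelism: 2
```

Such a file is translated into a Flink DataStream job automatically, which is what makes whole-database synchronization a configuration task rather than a coding task.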
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > I believe this initiative is a perfect match for both sides. For
> > the
> > > > > Flink community, it presents an opportunity to enhance Flink's
> > > > competitive
> > > > > advantage in streaming data integration, promoting the healthy
> growth
> > > and
> > > > > prosperity of the Apache Flink ecosystem. For the CDC Connectors
> > > project,
> > > > > becoming a sub-project of Apache Flink means being part of a
> neutral
> > > > > open-source community, which can attract a more diverse pool of
> > > > > contributors.
> > > > > >
> > > > > >
> > > > > > Please note that the aforementioned points represent only some of
> > our
> > > > > motivations and vision for this donation. Specific future
> operations
> > > need
> > > > > to be further discussed in this thread. For example, the
> sub-project
> > > name
> > > > > after the donation; we hope to name it Flink-CDC
> > > > > > aiming at streaming data integration through Apache Flink,
> > > > > > following the naming convention of Flink-ML; And this project is
> > > > managed
> > > > > by a total of 8 maintainers, including 3 Flink PMC members and 1
> > Flink
> > > > > Committer. The remaining 4 maintainers are also highly active
> > > > contributors
> > > > > to the Flink community, donating this project to the Flink
> community
> > > > > implies that their permissions might be reduced. Therefore, we may
> > need
> > > > to
> > > > > bring up this topic for further discussion within the Flink PMC.
> > > > > Additionally, we need to discuss how to migrate existing users and
> > > > > documents. We have a user group of nearly 10,000 people and a
> > > > multi-version
> > > > > documentation site that need to be migrated. We also need to plan for the
> > > > migration
> > > > > of CI/CD processes and other specifics.
> > > > > >
> > > > > >
> > > > > >
> > > > > > While there are many intricate details that require
> implementation,
> > > we
> > > > > are committed to progressing and finalizing this donation process.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Despite being Flink’s most active ecological project (as
> evaluated
> > by
> > > > > GitHub metrics), it also boasts a significant user base. However, I
> > > > believe
> > > > > it's essential to 

Re: Subscribe Apache Flink development email.

2023-12-05 Thread Hang Ruan
Hi, aaron.

If you want to subscribe to the dev mailing list, you need to send an e-mail to
dev-subscr...@flink.apache.org. See more details in [1].
The mailing lists can be found here [2].

Best,
Hang

[1]
https://flink.apache.org/what-is-flink/community/#how-to-subscribe-to-a-mailing-list
[2] https://flink.apache.org/what-is-flink/community/#mailing-lists


aaron ai  于2023年12月5日周二 14:48写道:

> Subscribe Apache Flink development email.
>


Re: [VOTE] Release 1.17.2, release candidate #1

2023-11-23 Thread Hang Ruan
+1 (non-binding)

- verified signatures
- verified hashsums
- Verified there are no binaries in the source archive
- reviewed the web PR
- built Flink from sources

Best,
Hang

Jiabao Sun  于2023年11月23日周四 22:09写道:

> Thanks for driving this release.
>
> +1(non-binding)
>
> - Checked the tag in git
> - Verified signatures and hashsums
> - Verified there are no binaries in the source archive
> - Built Flink from sources
>
> Best,
> Jiabao
>
>
> > 2023年11月21日 20:46,Matthias Pohl  写道:
> >
> > +1 (binding)
> >
> > * Downloaded artifacts
> > * Built Flink from sources
> > * Verified SHA512 checksums & GPG signatures
> > * Compared checkout with provided sources
> > * Verified pom file versions
> > * Went over NOTICE/pom file changes without finding anything suspicious
> > * Deployed standalone session cluster and ran WordCount example in batch
> > and streaming: Nothing suspicious in log files found
> > * Verified Java version of uploaded binaries
> >
> > Thanks Yun Tang for taking care of it.
> >
> > On Thu, Nov 16, 2023 at 7:02 AM Rui Fan <1996fan...@gmail.com> wrote:
> >
> >> +1 (non-binding)
> >>
> >> - Verified signatures
> >> - Reviewed the flink-web PR, left a couple of comments
> >> - The source archives do not contain any binaries
> >> - Build the source with Maven 3 and java8 (Checked the license as well)
> >> - bin/start-cluster.sh with java8, it works fine and no any unexpected
> LOG
> >> - Ran demo, it's fine:  bin/flink run
> >> examples/streaming/StateMachineExample.jar
> >>
> >> Best,
> >> Rui
> >>
> >> On Mon, Nov 13, 2023 at 4:04 PM Yun Tang  wrote:
> >>
> >>> Hi everyone,
> >>>
> >>> Please review and vote on the release candidate #1 for the version
> >> 1.17.2,
> >>>
> >>> as follows:
> >>>
> >>> [ ] +1, Approve the release
> >>>
> >>> [ ] -1, Do not approve the release (please provide specific comments)
> >>>
> >>>
> >>> The complete staging area is available for your review, which includes:
> >>> * JIRA release notes [1],
> >>>
> >>> * the official Apache source release and binary convenience releases to
> >> be
> >>> deployed to dist.apache.org [2], which are signed with the key with
> >>> fingerprint 2E0E1AB5D39D55E608071FB9F795C02A4D2482B3 [3],
> >>>
> >>> * all artifacts to be deployed to the Maven Central Repository [4],
> >>>
> >>> * source code tag "release-1.17.2-rc1" [5],
> >>>
> >>> * website pull request listing the new release and adding announcement
> >>> blog post [6].
> >>>
> >>>
> >>> The vote will be open for at least 72 hours. It is adopted by majority
> >>> approval, with at least 3 PMC affirmative votes.
> >>>
> >>>
> >>> [1]
> >>>
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353260
> >>>
> >>> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.17.2-rc1/
> >>>
> >>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >>>
> >>> [4]
> >>>
> https://repository.apache.org/content/repositories/orgapacheflink-1669/
> >>>
> >>> [5] https://github.com/apache/flink/releases/tag/release-1.17.2-rc1
> >>>
> >>> [6] https://github.com/apache/flink-web/pull/696
> >>>
> >>> Thanks,
> >>> Release Manager
> >>>
> >>
>
>


Re: [VOTE] Release 1.16.3, release candidate #1

2023-11-23 Thread Hang Ruan
+1 (non-binding)

- verified signatures
- verified hashsums
- Verified there are no binaries in the source archive
- reviewed the Release Note
- reviewed the web PR

Best,
Hang

Jiabao Sun  于2023年11月23日周四 22:15写道:

> Thanks for driving this release.
>
> +1 (non-binding)
>
> - Checked the tag in git
> - Verified signatures and hashsums
> - Verified there are no binaries in the source archive
> - Built Flink from sources
>
> Best,
> Jiabao
>
> > 2023年11月23日 21:55,Sergey Nuyanzin  写道:
> >
> > +1 (non-binding)
> >
> > - Downloaded artifacts
> > - Built Flink from sources
> > - Verified checksums & signatures
> > - Verified pom/NOTICE files
> > - reviewed the web PR
> >
> > On Thu, Nov 23, 2023 at 1:28 PM Leonard Xu  wrote:
> >
> >> +1 (binding)
> >>
> >> - verified signatures
> >> - verified hashsums
> >> - checked that all POM files point to the same version 1.16.3
> >> - started SQL Client, used MySQL CDC connector to read changelog from
> >> database , the result is expected
> >> - reviewed the web PR, left minor comment
> >>
> >> Best,
> >> Leonard
> >>
> >>> 2023年11月21日 下午8:56,Matthias Pohl  写道:
> >>>
> >>> +1 (binding)
> >>>
> >>> * Downloaded artifacts
> >>> * Built Flink from sources
> >>> * Verified SHA512 checksums & GPG signatures
> >>> * Compared checkout with provided sources
> >>> * Verified pom file versions
> >>> * Went over NOTICE/pom file changes without finding anything suspicious
> >>> * Deployed standalone session cluster and ran WordCount example in
> batch
> >>> and streaming: Nothing suspicious in log files found
> >>> * Verified Java version of uploaded binaries
> >>>
> >>> Thanks for wrapping 1.16 up. :)
> >>>
> >>> On Tue, Nov 21, 2023 at 4:55 AM Rui Fan <1996fan...@gmail.com> wrote:
> >>>
>  +1 (non-binding)
> 
>  Verified based on this wiki[1].
> 
>  - Verified signatures and sha512
>  - The source archives do not contain any binaries
>  - Build the source with Maven 3 and java8 (Checked the license as
> well)
>  - bin/start-cluster.sh with java8, it works fine and no any unexpected
> >> LOG
>  - Ran demo, it's fine:  bin/flink run
>  examples/streaming/StateMachineExample.jar
> 
>  [1]
> 
> >>
> https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Release
> 
>  Best,
>  Rui
> 
>  On Fri, Nov 17, 2023 at 11:52 AM Yun Tang  wrote:
> 
> > +1 (non-binding)
> >
> >
> > *   Verified signatures
> > *   Build from source code, and it looks good
> > *   Verified that jar packages are built with maven-3.2.5 and JDK8
> > *   Reviewed the flink-web PR
> > *   Start a local standalone cluster and submit examples
> >
> > Best
> > Yun Tang
> > 
> > From: Rui Fan <1996fan...@gmail.com>
> > Sent: Monday, November 13, 2023 18:20
> > To: dev 
> > Subject: [VOTE] Release 1.16.3, release candidate #1
> >
> > Hi everyone,
> >
> > Please review and vote on the release candidate #1 for the version
>  1.16.3,
> >
> > as follows:
> >
> > [ ] +1, Approve the release
> >
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> >
> > The complete staging area is available for your review, which
> includes:
> > * JIRA release notes [1],
> >
> > * the official Apache source release and binary convenience releases
> to
>  be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint B2D64016B940A7E0B9B72E0D7D0528B28037D8BC [3],
> >
> > * all artifacts to be deployed to the Maven Central Repository [4],
> >
> > * source code tag "release-1.16.3-rc1" [5],
> >
> > * website pull request listing the new release and adding
> announcement
>  blog
> > post [6].
> >
> >
> > The vote will be open for at least 72 hours. It is adopted by
> majority
> > approval, with at least 3 PMC affirmative votes.
> >
> >
> > [1]
> >
> >
> 
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353259
> >
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.16.3-rc1/
> >
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >
> > [4]
> >
> >> https://repository.apache.org/content/repositories/orgapacheflink-1670/
> >
> > [5] https://github.com/apache/flink/releases/tag/release-1.16.3-rc1
> >
> > [6] https://github.com/apache/flink-web/pull/698
> >
> > Thanks,
> > Release Manager
> >
> 
> >>
> >>
> >
> > --
> > Best regards,
> > Sergey
>
>


Re: [VOTE] FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2023-11-13 Thread Hang Ruan
+1(non-binding)

Best,
Hang

Jing Ge  于2023年11月13日周一 16:36写道:

> +1(binding)
> Thanks!
>
> Best Regards,
> Jing
>
> On Mon, Nov 13, 2023 at 8:34 AM Zhu Zhu  wrote:
>
> > +1 (binding)
> >
> > Thanks,
> > Zhu
> >
> > Xia Sun  于2023年11月13日周一 15:02写道:
> >
> > > +1 (non-binding)
> > >
> > > Best,
> > > Xia
> > >
> > > Samrat Deb  于2023年11月13日周一 12:37写道:
> > >
> > > > +1 (non binding)
> > > >
> > > > Bests,
> > > > Samrat
> > > >
> > > > On Mon, 13 Nov 2023 at 9:10 AM, Yangze Guo 
> wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Best,
> > > > > Yangze Guo
> > > > >
> > > > > On Mon, Nov 13, 2023 at 11:35 AM weijie guo <
> > guoweijieres...@gmail.com
> > > >
> > > > > wrote:
> > > > > >
> > > > > > +1(binding)
> > > > > >
> > > > > > Best regards,
> > > > > >
> > > > > > Weijie
> > > > > >
> > > > > >
> > > > > > Lijie Wang  于2023年11月13日周一 10:40写道:
> > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > Best,
> > > > > > > Lijie
> > > > > > >
> > > > > > > Yuepeng Pan  于2023年11月10日周五 18:32写道:
> > > > > > >
> > > > > > > > +1(non-binding)
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Roc
> > > > > > > >
> > > > > > > > On 2023/11/10 03:58:10 Junrui Lee wrote:
> > > > > > > > > Hi everyone,
> > > > > > > > >
> > > > > > > > > Thank you to everyone for the feedback on FLIP-381:
> Deprecate
> > > > > > > > configuration
> > > > > > > > > getters/setters that return/set complex Java objects[1]
> which
> > > has
> > > > > been
> > > > > > > > > discussed in this thread [2].
> > > > > > > > >
> > > > > > > > > I would like to start a vote for it. The vote will be open
> > for
> > > at
> > > > > least
> > > > > > > > 72
> > > > > > > > > hours (excluding weekends) unless there is an objection or
> > not
> > > > > enough
> > > > > > > > votes.
> > > > > > > > >
> > > > > > > > > [1]
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992
> > > > > > > > > [2]
> > > > > https://lists.apache.org/thread/y5owjkfxq3xs9lmpdbl6d6jmqdgbjqxo
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [DISCUSS] FLIP-381: Deprecate configuration getters/setters that return/set complex Java objects

2023-11-02 Thread Hang Ruan
Thanks Junrui for driving the proposal.

+1 from my side. This FLIP will help to make the configuration clearer for
users.

ps: We should also delete the private field `storage` as its getter and
setter are deleted and it is marked as `@Deprecated`. This is not written
in the FLIP.
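To make the direction concrete, here is a hedged before/after sketch of the kind of change the FLIP proposes. The class and option names below are illustrative stand-ins, not Flink's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the migration this FLIP proposes: deprecate
// setters that take complex Java objects in favor of a flat, string-keyed
// Configuration. Class and key names are illustrative, not Flink's API.
class CheckpointStorage { // stand-in for a complex configuration object
    final String path;
    CheckpointStorage(String path) { this.path = path; }
}

class JobConfig {
    private final Map<String, String> options = new HashMap<>();

    @Deprecated // the complex-object field the deprecated accessors use
    private CheckpointStorage storage;

    @Deprecated // old style: hard to serialize, inconsistent with ConfigOption
    void setCheckpointStorage(CheckpointStorage storage) {
        this.storage = storage;
    }

    // new style: every setting goes through the flat key/value configuration
    void set(String key, String value) { options.put(key, value); }

    String get(String key) { return options.get(key); }
}
```

With everything in one flat configuration, jobs can be configured consistently from files, CLI flags, and code, which is the main motivation stated in the FLIP.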

Best,
Hang

Yuxin Tan  于2023年11月3日周五 11:30写道:

> Thanks Junrui for driving the proposal.
>
> +1 for this proposal. I believe this change will enhance the usability of
> Flink configuration for both users and developers, while also ensuring
> consistency across various types of configurations.
>
> Best,
> Yuxin
>
>
> Lijie Wang  于2023年11月3日周五 10:59写道:
>
> > Thanks Junrui for driving this.
> >
> > Making configurations simple and consistent has great benefits for both
> > users and devs. +1 for the proposal.
> >
> > Best,
> > Lijie
> >
> > weijie guo  于2023年11月2日周四 16:49写道:
> >
> > > Thanks Junrui for driving this proposal!
> > >
> > > I believe this is helpful for the new Process Function API. Because we
> > > don't need to move some related class/components from flink-core to a
> > pure
> > > API module (maybe, called flink-core-api) after this. Even though the
> > FLIP
> > > related to new API is in preparation atm, I still want to emphasize our
> > > goal is that user application should no longer depend on these stuff.
> So
> > > I'm + 1 for this proposal.
> > >
> > >
> > > Best regards,
> > >
> > > Weijie
> > >
> > >
> > > Zhu Zhu  于2023年11月2日周四 16:00写道:
> > >
> > > > Thanks Junrui for creating the FLIP and kicking off this discussion.
> > > >
> > > > The community has been constantly striving to unify and simplify the
> > > > configuration layer of Flink. Some progress has already been made,
> > > > such as FLINK-29379. However, the compatibility of public interfaces
> > > > poses an obstacle to completing the task. The release of Flink 2.0
> > > > presents a great opportunity to accomplish this goal.
> > > >
> > > > +1 for the proposal.
> > > >
> > > > Thanks,
> > > > Zhu
> > > >
> > > > Rui Fan <1996fan...@gmail.com> 于2023年11月2日周四 10:27写道:
> > > >
> > > > > Thanks Junrui for driving this proposal!
> > > > >
> > > > > ConfigOption is easy to use for flink users, easy to manage options
> > > > > for flink platform maintainers, and easy to maintain for flink
> > > developers
> > > > > and flink community.
> > > > >
> > > > > So big +1 for this proposal!
> > > > >
> > > > > Best,
> > > > > Rui
> > > > >
> > > > > On Thu, Nov 2, 2023 at 10:10 AM Junrui Lee 
> > > wrote:
> > > > >
> > > > > > Hi devs,
> > > > > >
> > > > > > I would like to start a discussion on FLIP-381: Deprecate
> > > configuration
> > > > > > getters/setters that return/set complex Java objects[1].
> > > > > >
> > > > > > Currently, the job configuration in FLINK is spread out across
> > > > different
> > > > > > components, which leads to inconsistencies and confusion. To
> > address
> > > > this
> > > > > > issue, it is necessary to migrate non-ConfigOption complex Java
> > > objects
> > > > > to
> > > > > > use ConfigOption and adopt a single Configuration object to host
> > all
> > > > the
> > > > > > configuration.
> > > > > > However, there is a significant blocker in implementing this
> > > solution.
> > > > > > These complex Java objects in StreamExecutionEnvironment,
> > > > > CheckpointConfig,
> > > > > > and ExecutionConfig have already been exposed through the public
> > API,
> > > > > > making it challenging to modify the existing implementation.
> > > > > >
> > > > > > Therefore, I propose to deprecate these Java objects and their
> > > > > > corresponding getter/setter interfaces, ultimately removing them
> in
> > > > > > FLINK-2.0.
> > > > > >
> > > > > > Your feedback and thoughts on this proposal are highly
> appreciated.
> > > > > >
> > > > > > Best regards,
> > > > > > Junrui Lee
> > > > > >
> > > > > > [1]
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278464992
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [DISCUSS][FLINK-33240] Document deprecated options as well

2023-11-01 Thread Hang Ruan
Thanks for the proposal.

+1 from my side and +1 for putting them in a separate section.

Best,
Hang

Samrat Deb  于2023年11月1日周三 15:32写道:

> Thanks for the proposal ,
> +1 for adding deprecated identifier
>
> [Thought] Can we have a separate section / page for deprecated configs?
> WDYT?
>
>
> Bests,
> Samrat
>
>
> On Tue, 31 Oct 2023 at 3:44 PM, Alexander Fedulov <
> alexander.fedu...@gmail.com> wrote:
>
> > Hi Zhanghao,
> >
> > Thanks for the proposition.
> > In general +1, this sounds like a good idea as long it is clear that the
> > usage of these settings is discouraged.
> > Just one minor concern - the configuration page is already very long, do
> > you have a rough estimate of how many more options would be added with
> this
> > change?
> >
> > Best,
> > Alexander Fedulov
> >
> > On Mon, 30 Oct 2023 at 18:24, Matthias Pohl  > .invalid>
> > wrote:
> >
> > > Thanks for your proposal, Zhanghao Chen. I think it adds more
> > transparency
> > > to the configuration documentation.
> > >
> > > +1 from my side on the proposal
> > >
> > > On Wed, Oct 11, 2023 at 2:09 PM Zhanghao Chen <
> zhanghao.c...@outlook.com
> > >
> > > wrote:
> > >
> > > > Hi Flink users and developers,
> > > >
> > > > Currently, Flink won't generate doc for the deprecated options. This
> > > might
> > > > confuse users when upgrading from an older version of Flink: they
> have
> > to
> > > > either carefully read the release notes or check the source code for
> > > > upgrade guidance on deprecated options.
> > > >
> > > > I propose to document deprecated options as well, with a
> "(deprecated)"
> > > > tag placed at the beginning of the option description to highlight
> the
> > > > deprecation status [1].
> > > >
> > > > Looking forward to your feedbacks on it.
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-33240
> > > >
> > > > Best,
> > > > Zhanghao Chen
> > > >
> > >
> >
>


Re: [DISCUSS] FLIP-377: Support configuration to disable filter push down for Table/SQL Sources

2023-10-30 Thread Hang Ruan
Thanks for the improvements, Jiabao.

There are some details that I am not sure about.
1. The new option `source.filter-push-down.enabled` will be added to which
class? I think it should be `SourceReaderOptions`.
2. How are the connector developers able to know and follow the FLIP? Do we
need an abstract base class or provide a default method?
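For the second point, a default method could look roughly like this. This is a sketch only; the interface name and option key are placeholders for whatever the FLIP finally settles on:

```java
import java.util.Map;

// Sketch of the "default method" idea: the pushdown ability interface could
// read the new option itself, so every connector inherits the check without
// re-implementing it. Interface name and option key are placeholders.
interface SupportsFilterPushDownSketch {
    String FILTER_PUSHDOWN_KEY = "source.filter-push-down.enabled";

    default boolean filterPushDownEnabled(Map<String, String> options) {
        // default to true, matching today's always-push-down behavior
        return Boolean.parseBoolean(options.getOrDefault(FILTER_PUSHDOWN_KEY, "true"));
    }
}
```

A default method keeps the change backward compatible: existing connectors keep pushing filters down unless the user explicitly disables it.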

Best,
Hang

Jiabao Sun  于2023年10月30日周一 14:45写道:

> Hi, all,
>
> Thanks for the lively discussion.
>
> Based on the discussion, I have made some adjustments to the FLIP document:
>
> 1. The name of the newly added option has been changed to
> "source.filter-push-down.enabled".
> 2. Considering compatibility with older versions, the newly added
> "source.filter-push-down.enabled" option needs to respect the optimizer's
> "table.optimizer.source.predicate-pushdown-enabled" option.
> But there is a consideration to remove the old option in Flink 2.0.
> 3. We can provide more options to disable other source abilities with side
> effects, such as “source.aggregate.enabled” and “source.projection.enabled”.
> This is not urgent and can be continuously introduced.
>
> Looking forward to your feedback again.
>
> Best,
> Jiabao
>
>
> > 2023年10月29日 08:45,Becket Qin  写道:
> >
> > Thanks for digging into the git history, Jark. I agree it makes sense to
> > deprecate this API in 2.0.
> >
> > Cheers,
> >
> > Jiangjie (Becket) Qin
> >
> > On Fri, Oct 27, 2023 at 5:47 PM Jark Wu  wrote:
> >
> >> Hi Becket,
> >>
> >> I checked the history of "
> >> *table.optimizer.source.predicate-pushdown-enabled*",
> >> it seems it was introduced since the legacy FilterableTableSource
> >> interface
> >> which might be an experiential feature at that time. I don't see the
> >> necessity
> >> of this option at the moment. Maybe we can deprecate this option and
> drop
> >> it
> >> in Flink 2.0[1] if it is not necessary anymore. This may help to
> >> simplify this discussion.
> >>
> >>
> >> Best,
> >> Jark
> >>
> >> [1]: https://issues.apache.org/jira/browse/FLINK-32383
> >>
> >>
> >>
> >> On Thu, 26 Oct 2023 at 10:14, Becket Qin  wrote:
> >>
> >>> Thanks for the proposal, Jiabao. My two cents below:
> >>>
> >>> 1. If I understand correctly, the motivation of the FLIP is mainly to
> >>> make predicate pushdown optional on SOME of the Sources. If so,
> intuitively
> >>> the configuration should be Source specific instead of general.
> Otherwise,
> >>> we will end up with general configurations that may not take effect for
> >>> some of the Source implementations. This violates the basic rule of a
> >>> configuration - it does what it says, regardless of the implementation.
> >>> While configuration standardization is usually a good thing, it should
> not
> >>> break the basic rules.
> >>> If we really want to have this general configuration, for the sources
> >>> this configuration does not apply, they should throw an exception to
> make
> >>> it clear that this configuration is not supported. However, that seems
> ugly.
> >>>
> >>> 2. I think the actual motivation of this FLIP is about "how a source
> >>> should implement predicate pushdown efficiently", not "whether
> predicate
> >>> pushdown should be applied to the source." For example, if a source
> wants
> >>> to avoid additional computing load in the external system, it can
> always
> >>> read the entire record and apply the predicates by itself. However,
> from
> >>> the Flink perspective, the predicate pushdown is applied, it is just
> >>> implemented differently by the source. So the design principle here is
> that
> >>> Flink only cares about whether a source supports predicate pushdown or
> not,
> >>> it does not care about the implementation efficiency / side effect of
> the
> >>> predicates pushdown. It is the Source implementation's responsibility
> to
> >>> ensure the predicates pushdown is implemented efficiently and does not
> >>> impose excessive pressure on the external system. And it is OK to have
> >>> additional configurations to achieve this goal. Obviously, such
> >>> configurations will be source specific in this case.
> >>>
> >>> 3. Regarding the existing configurations of
> *table.optimizer.source.predicate-pushdown-enabled.
> >>> *I am not sure why we need it. Supposedly, if a source implements a
> >>> SupportsXXXPushDown interface, the optimizer should push the
> corresponding
> >>> predicates to the Source. I am not sure in which case this
> configuration
> >>> would be used. Any ideas @Jark Wu ?
> >>>
> >>> Thanks,
> >>>
> >>> Jiangjie (Becket) Qin
> >>>
> >>>
> >>> On Wed, Oct 25, 2023 at 11:55 PM Jiabao Sun
> >>>  wrote:
> >>>
>  Thanks Jane for the detailed explanation.
> 
>  I think that for users, we should respect conventions over
>  configurations.
>  Conventions can be default values explicitly specified in
>  configurations, or they can be behaviors that follow previous
> versions.
>  If the same code has different behaviors in different versions, it
> would
>  be a very 

Re: [ANNOUNCE] Apache Flink 1.18.0 released

2023-10-26 Thread Hang Ruan
Congratulations!

Best,
Hang

Samrat Deb  于2023年10月27日周五 11:50写道:

> Congratulations on the great release
>
> Bests,
> Samrat
>
> On Fri, 27 Oct 2023 at 7:59 AM, Yangze Guo  wrote:
>
> > Great work! Congratulations to everyone involved!
> >
> > Best,
> > Yangze Guo
> >
> > On Fri, Oct 27, 2023 at 10:23 AM Qingsheng Ren  wrote:
> > >
> > > Congratulations and big THANK YOU to everyone helping with this
> release!
> > >
> > > Best,
> > > Qingsheng
> > >
> > > On Fri, Oct 27, 2023 at 10:18 AM Benchao Li 
> > wrote:
> > >>
> > >> Great work, thanks everyone involved!
> > >>
> > >> Rui Fan <1996fan...@gmail.com> 于2023年10月27日周五 10:16写道:
> > >> >
> > >> > Thanks for the great work!
> > >> >
> > >> > Best,
> > >> > Rui
> > >> >
> > >> > On Fri, Oct 27, 2023 at 10:03 AM Paul Lam 
> > wrote:
> > >> >
> > >> > > Finally! Thanks to all!
> > >> > >
> > >> > > Best,
> > >> > > Paul Lam
> > >> > >
> > >> > > > 2023年10月27日 03:58,Alexander Fedulov <
> alexander.fedu...@gmail.com>
> > 写道:
> > >> > > >
> > >> > > > Great work, thanks everyone!
> > >> > > >
> > >> > > > Best,
> > >> > > > Alexander
> > >> > > >
> > >> > > > On Thu, 26 Oct 2023 at 21:15, Martijn Visser <
> > martijnvis...@apache.org>
> > >> > > > wrote:
> > >> > > >
> > >> > > >> Thank you all who have contributed!
> > >> > > >>
> > >> > > >> Op do 26 okt 2023 om 18:41 schreef Feng Jin <
> > jinfeng1...@gmail.com>
> > >> > > >>
> > >> > > >>> Thanks for the great work! Congratulations
> > >> > > >>>
> > >> > > >>>
> > >> > > >>> Best,
> > >> > > >>> Feng Jin
> > >> > > >>>
> > >> > > >>> On Fri, Oct 27, 2023 at 12:36 AM Leonard Xu <
> xbjt...@gmail.com>
> > wrote:
> > >> > > >>>
> > >> > >  Congratulations, Well done!
> > >> > > 
> > >> > >  Best,
> > >> > >  Leonard
> > >> > > 
> > >> > >  On Fri, Oct 27, 2023 at 12:23 AM Lincoln Lee <
> > lincoln.8...@gmail.com>
> > >> > >  wrote:
> > >> > > 
> > >> > > > Thanks for the great work! Congrats all!
> > >> > > >
> > >> > > > Best,
> > >> > > > Lincoln Lee
> > >> > > >
> > >> > > >
> > >> > > > Jing Ge  于2023年10月27日周五
> 00:16写道:
> > >> > > >
> > >> > > >> The Apache Flink community is very happy to announce the
> > release of
> > >> > > > Apache
> > >> > > >> Flink 1.18.0, which is the first release for the Apache
> > Flink 1.18
> > >> > > > series.
> > >> > > >>
> > >> > > >> Apache Flink® is an open-source unified stream and batch
> data
> > >> > >  processing
> > >> > > >> framework for distributed, high-performing,
> > always-available, and
> > >> > > > accurate
> > >> > > >> data applications.
> > >> > > >>
> > >> > > >> The release is available for download at:
> > >> > > >> https://flink.apache.org/downloads.html
> > >> > > >>
> > >> > > >> Please check out the release blog post for an overview of
> the
> > >> > > > improvements
> > >> > > >> for this release:
> > >> > > >>
> > >> > > >>
> > >> > > >
> > >> > > 
> > >> > > >>>
> > >> > > >>
> > >> > >
> >
> https://flink.apache.org/2023/10/24/announcing-the-release-of-apache-flink-1.18/
> > >> > > >>
> > >> > > >> The full release notes are available in Jira:
> > >> > > >>
> > >> > > >>
> > >> > > >
> > >> > > 
> > >> > > >>>
> > >> > > >>
> > >> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352885
> > >> > > >>
> > >> > > >> We would like to thank all contributors of the Apache Flink
> > >> > > >> community
> > >> > >  who
> > >> > > >> made this release possible!
> > >> > > >>
> > >> > > >> Best regards,
> > >> > > >> Konstantin, Qingsheng, Sergey, and Jing
> > >> > > >>
> > >> > > >
> > >> > > 
> > >> > > >>>
> > >> > > >>
> > >> > >
> > >> > >
> > >>
> > >>
> > >>
> > >> --
> > >>
> > >> Best,
> > >> Benchao Li
> >
>


Re: [DISCUSS] FLIP-377: Support configuration to disable filter push down for Table/SQL Sources

2023-10-25 Thread Hang Ruan
Hi, all,

Thanks for the lively discussion.

I agree with Jiabao. I think enabling "scan.filter-push-down.enabled"
relies on enabling "table.optimizer.source.predicate-pushdown-enabled".
It is a little strange that the planner still needs to push down the
filters when we set "scan.filter-push-down.enabled=false" and
"table.optimizer.source.predicate-pushdown-enabled=true".
Maybe we need to add some checks to warn the users when setting
"scan.filter-push-down.enabled=true" and
"table.optimizer.source.predicate-pushdown-enabled=false".

Besides that, I am +1 for renaming 'scan.filter-push-down.enabled' to
'source.predicate-pushdown.enabled'.

Best,
Hang

On Wed, Oct 25, 2023 at 18:23, Jiabao Sun  wrote:

> Thanks Benchao for the feedback.
>
> I understand that the configuration of global parallelism and task
> parallelism is at different granularities but with the same configuration.
> However, "table.optimizer.source.predicate-pushdown-enabled" and
> "scan.filter-push-down.enabled" are configurations for different
> components(optimizer and source operator).
>
> From a user's perspective, there are two scenarios:
>
> 1. Disabling all filter pushdown
> In this case, setting "table.optimizer.source.predicate-pushdown-enabled"
> to false is sufficient to meet the requirement.
>
> 2. Disabling filter pushdown for specific sources
> In this scenario, there is no need to adjust the value of
> "table.optimizer.source.predicate-pushdown-enabled".
> Instead, the focus should be on the configuration of
> "scan.filter-push-down.enabled" to meet the requirement.
> In this case, users do not need to set
> "table.optimizer.source.predicate-pushdown-enabled" to false and manually
> enable filter pushdown for specific sources.
>
> Additionally, if "scan.filter-push-down.enabled" does not respect
> "table.optimizer.source.predicate-pushdown-enabled" and the default value
> of "scan.filter-push-down.enabled" is defined as true,
> it means that merely setting
> "table.optimizer.source.predicate-pushdown-enabled" to false will have no
> effect, and filter pushdown will still be performed.
>
> If we define the default value of "scan.filter-push-down.enabled" as
> false, it would introduce a difference in behavior compared to the previous
> version.
> The same SQL query that could successfully push down filters in the old
> version would no longer do so after the upgrade.
>
> Best,
> Jiabao
>
>
> > > On Oct 25, 2023, at 17:10, Benchao Li  wrote:
> >
> > Thanks Jiabao for the detailed explanations, that helps a lot, I
> > understand your rationale now.
> >
> > Correct me if I'm wrong. Your perspective is from "developer", which
> > means there is an optimizer and connector component, and if we want to
> > enable this feature (pushing filters down into connectors), you must
> > enable it firstly in optimizer, and only then connector has the chance
> > to decide to use it or not.
> >
> > My perspective is from "user" that (Why a user should care about the
> > difference of optimizer/connector) , this is a feature, and has two
> > way to control it, one way is to config it job-level, the other one is
> > in table properties. What a user expects is that they can control a
> > feature in a tiered way, that setting it per job, and then
> > fine-grained tune it per table.
> >
> > This is some kind of similar to other concepts, such as parallelism,
> > users can set a job level default parallelism, and then fine-grained
> > tune it per operator. There may be more such debate in the future
> > e.g., we can have a job level config about adding key-by before lookup
> > join, and also a hint/table property way to fine-grained control it
> > per lookup operator. Hence we'd better find a unified way for all
> > those similar kind of features.
> >
> > On Wed, Oct 25, 2023 at 15:27, Jiabao Sun  wrote:
> >>
> >> Thanks Jane for further explanation.
> >>
> >> These two configurations correspond to different levels.
> "scan.filter-push-down.enabled" does not make
> "table.optimizer.source.predicate" invalid.
> >> The planner will still push down predicates to all sources.
> >> Whether filter pushdown is allowed or not is determined by the specific
> source's "scan.filter-push-down.enabled" configuration.
> >>
> >> However, "table.optimizer.source.predicate" does directly affect
> "scan.filter-push-down.enabled”.
> >> When the planner disables predicate pushdown, the source-level filter
> pushdown will also not be executed, even if the source allows filter
> pushdown.
> >>
> >> In any case, for points 1 and 2, our expectations are consistent.
> >> For the 3rd point, I still think that the planner-level configuration
> takes precedence over the source-level configuration.
> >> It may seem counterintuitive when we globally disable predicate
> pushdown but allow filter pushdown at the source level.
> >>
> >> Best,
> >> Jiabao
> >>
> >>
> >>
> >>> On Oct 25, 2023, at 14:35, Jane Chan  wrote:
> >>>
> >>> Hi Jiabao,
> >>>
> >>> Thanks for clarifying this. While by "scan.filter-push-down.enabled
> takes a
> 
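The interplay discussed in this thread, a planner-wide switch plus a per-source switch, boils down to a simple conjunction. The sketch below is illustrative only; the class and method names are assumptions, not Flink APIs:

```java
// Illustrative sketch of the precedence discussed above: the planner-level
// option gates everything, and the per-source option can further opt out.
// Class and method names here are assumptions, not Flink APIs.
public class PushdownDecision {

    /** Filters are pushed into a source only when both levels allow it. */
    static boolean shouldPushDown(boolean optimizerPredicatePushdownEnabled,
                                  boolean sourceFilterPushDownEnabled) {
        return optimizerPredicatePushdownEnabled && sourceFilterPushDownEnabled;
    }

    public static void main(String[] args) {
        // Disabling the planner-level switch wins, even if the source allows it.
        System.out.println(shouldPushDown(false, true));  // false
        // The source can opt out while the planner-level switch stays on.
        System.out.println(shouldPushDown(true, false));  // false
        System.out.println(shouldPushDown(true, true));   // true
    }
}
```

This matches the position taken in the thread that the planner-level configuration takes precedence over the source-level one.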

Re: [DISCUSS] FLIP-377: Support configuration to disable filter push down for Table/SQL Sources

2023-10-24 Thread Hang Ruan
Hi, Jiabao.

Thanks for driving this discussion.

IMO, if there are many connectors containing the same logic, I think this
FLIP is useful.
We do not know how many connectors need to add the same code.

Best,
Hang

On Tue, Oct 24, 2023 at 18:26, Jiabao Sun  wrote:

> Thanks Martijn,
>
> Indeed, implementing the logic check in the applyFilters method can
> fulfill the functionality of disabling filter pushdown.
> My concern is that the same logic check may need to be implemented in each
> source.
>
> public Result applyFilters(List<ResolvedExpression> filters) {
>     if (supportsFilterPushDown) {
>         return applyFiltersInternal(filters);
>     } else {
>         return Result.of(Collections.emptyList(), filters);
>     }
> }
>
>
> If we define enough generic configurations, we can also pass these
> configurations uniformly in the abstract source superclass
> and provide a default implementation to determine whether to allow filter
> pushdown based on the options.
>
> public abstract class FilterableDynamicTableSource
>         implements DynamicTableSource, SupportsFilterPushDown {
>
>     private Configuration sourceConfig;
>
>     @Override
>     public boolean enableFilterPushDown() {
>         return sourceConfig.get(ENABLE_FILTER_PUSH_DOWN);
>     }
> }
>
>
> Best,
> Jiabao
>
>
> > On Oct 24, 2023, at 17:59, Martijn Visser  wrote:
> >
> > Hi Jiabao,
> >
> > I'm in favour of Jark's approach: while I can see the need for a
> > generic flag, I can also foresee the situation where users actually
> > want to be able to control it per connector. So why not go directly
> > for that approach?
> >
> > Best regards,
> >
> > Martijn
> >
> > On Tue, Oct 24, 2023 at 11:37 AM Jane Chan 
> wrote:
> >>
> >> Hi Jiabao,
> >>
> >> Thanks for driving this discussion. I have a small question that will
> >> "scan.filter-push-down.enabled" take precedence over
> >> "table.optimizer.source.predicate" when the two parameters might
> conflict
> >> each other?
> >>
> >> Best,
> >> Jane
> >>
> >> On Tue, Oct 24, 2023 at 5:05 PM Jiabao Sun  .invalid>
> >> wrote:
> >>
> >>> Thanks Jark,
> >>>
> >>> If we only add configuration without adding the enableFilterPushDown
> >>> method in the SupportsFilterPushDown interface,
> >>> each connector would have to handle the same logic in the applyFilters
> >>> method to determine whether filter pushdown is needed.
> >>> This would increase complexity and violate the original behavior of the
> >>> applyFilters method.
> >>>
> >>> On the contrary, we only need to pass the configuration parameter in
> the
> >>> newly added enableFilterPushDown method
> >>> to decide whether to perform predicate pushdown.
> >>>
> >>> I think this approach would be clearer and simpler.
> >>> WDYT?
> >>>
> >>> Best,
> >>> Jiabao
> >>>
> >>>
>  On Oct 24, 2023, at 16:58, Jark Wu  wrote:
> 
>  Hi JIabao,
> 
>  I think the current interface can already satisfy your requirements.
>  The connector can reject all the filters by returning the input
> filters
>  as `Result#remainingFilters`.
> 
>  So maybe we don't need to introduce a new method to disable
>  pushdown, but just introduce an option for the specific connector.
> 
>  Best,
>  Jark
> 
>  On Tue, 24 Oct 2023 at 16:38, Leonard Xu  wrote:
> 
> > Thanks @Jiabao for kicking off this discussion.
> >
> > Could you add a section to explain the difference between proposed
> > connector level config `scan.filter-push-down.enabled` and existing
> >>> query
> > level config `table.optimizer.source.predicate-pushdown-enabled` ?
> >
> > Best,
> > Leonard
> >
> >> On Oct 24, 2023, at 4:18 PM, Jiabao Sun  wrote:
> >>
> >> Hi Devs,
> >>
> >> I would like to start a discussion on FLIP-377: support
> configuration
> >>> to
> > disable filter pushdown for Table/SQL Sources[1].
> >>
> >> Currently, Flink Table/SQL does not expose fine-grained control for
> > users to enable or disable filter pushdown.
> >> However, filter pushdown has some side effects, such as additional
> > computational pressure on external systems.
> >> Moreover, Improper queries can lead to issues such as full table
> scans,
> > which in turn can impact the stability of external systems.
> >>
> >> Suppose we have an SQL query with two sources: Kafka and a database.
> >> The database is sensitive to pressure, and we want to configure it
> to
> > not perform filter pushdown to the database source.
> >> However, we still want to perform filter pushdown to the Kafka
> source
> >>> to
> > decrease network IO.
> >>
> >> I propose to support configuration to disable filter push down for
> > Table/SQL sources to let user decide whether to perform filter
> pushdown.
> >>
> >> Looking forward to your feedback.
> >>
> >> [1]
> >
> >>>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=276105768
> >>
> >> Best,
> >> Jiabao
> >
> >
> >>>
>
>
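Jark's suggestion above, rejecting pushdown by returning every filter as "remaining", could look roughly like the following. This is a self-contained sketch: Flink's `SupportsFilterPushDown.Result` is stubbed with a plain record and filters are modeled as strings; real code would use `org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown`.

```java
import java.util.Collections;
import java.util.List;

public class RejectingSource {

    // Stand-in for SupportsFilterPushDown.Result: accepted vs. remaining filters.
    record Result(List<String> accepted, List<String> remaining) {}

    private final boolean filterPushDownEnabled;

    RejectingSource(boolean filterPushDownEnabled) {
        this.filterPushDownEnabled = filterPushDownEnabled;
    }

    Result applyFilters(List<String> filters) {
        if (!filterPushDownEnabled) {
            // Reject everything: the planner keeps all filters and the
            // source scans without any of them applied.
            return new Result(Collections.emptyList(), filters);
        }
        // For this sketch, pretend the source can consume every filter.
        return new Result(filters, Collections.emptyList());
    }

    public static void main(String[] args) {
        Result r = new RejectingSource(false).applyFilters(List.of("a > 1"));
        System.out.println(r.remaining()); // [a > 1]
    }
}
```

The trade-off debated in the thread is exactly about where this `filterPushDownEnabled` flag should come from: a per-source option versus a shared check in an abstract superclass.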


Re: [VOTE] Release 1.18.0, release candidate #3

2023-10-24 Thread Hang Ruan
+1(non-binding)

- verified signatures & hash
- build from the source code succeed with jdk 8
- Reviewed release note
- Started a standalone cluster and submitted a Flink SQL job that read and
wrote with Kafka connector and JSON format

Best,
Hang

On Tue, Oct 24, 2023 at 14:06, Samrat Deb  wrote:

> +1(non-binding)
>
> - Downloaded artifacts from dist[1]
> - Verified SHA512 checksums
> - Verified GPG signatures
> - Build the source with java 8 and 11
>
> [1] https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
>
> Bests,
> Samrat
>
> On Tue, Oct 24, 2023 at 10:44 AM Jingsong Li 
> wrote:
>
> > +1 (binding)
> >
> > - verified signatures & hash
> > - built from source code succeeded
> > - started SQL Client, used Paimon connector to write and read, the
> > result is expected
> >
> > Best,
> > Jingsong
> >
> > On Tue, Oct 24, 2023 at 12:15 PM Yuxin Tan 
> wrote:
> > >
> > > +1(non-binding)
> > >
> > > - Verified checksum
> > > - Build from source code
> > > - Verified signature
> > > - Started a local cluster and ran Streaming & Batch wordcount jobs, the
> > > result is as expected
> > > - Verified web PR
> > >
> > > Best,
> > > Yuxin
> > >
> > >
> > > On Tue, Oct 24, 2023 at 11:19, Qingsheng Ren  wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > - Verified checksums and signatures
> > > > - Built from source with Java 8
> > > > - Started a standalone cluster and submitted a Flink SQL job that
> read
> > and
> > > > wrote with Kafka connector and CSV / JSON format
> > > > - Reviewed web PR and release note
> > > >
> > > > Best,
> > > > Qingsheng
> > > >
> > > > On Mon, Oct 23, 2023 at 10:40 PM Leonard Xu 
> wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > - verified signatures
> > > > > - verified hashsums
> > > > > - built from source code succeeded
> > > > > - checked all dependency artifacts are 1.18
> > > > > - started SQL Client, used MySQL CDC connector to read changelog
> from
> > > > > database , the result is expected
> > > > > - reviewed the web PR, left minor comments
> > > > > - reviewed the release notes PR, left minor comments
> > > > >
> > > > >
> > > > > Best,
> > > > > Leonard
> > > > >
> > > > > > On Oct 21, 2023, at 7:28 PM, Rui Fan <1996fan...@gmail.com> wrote:
> > > > > >
> > > > > > +1(non-binding)
> > > > > >
> > > > > > - Downloaded artifacts from dist[1]
> > > > > > - Verified SHA512 checksums
> > > > > > - Verified GPG signatures
> > > > > > - Build the source with java-1.8 and verified the licenses
> together
> > > > > > - Verified web PR
> > > > > >
> > > > > > [1]
> https://dist.apache.org/repos/dist/dev/flink/flink-1.18.0-rc3/
> > > > > >
> > > > > > Best,
> > > > > > Rui
> > > > > >
> > > > > > On Fri, Oct 20, 2023 at 10:31 PM Martijn Visser <
> > > > > martijnvis...@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > >> +1 (binding)
> > > > > >>
> > > > > >> - Validated hashes
> > > > > >> - Verified signature
> > > > > >> - Verified that no binaries exist in the source archive
> > > > > >> - Build the source with Maven
> > > > > >> - Verified licenses
> > > > > >> - Verified web PR
> > > > > >> - Started a cluster and the Flink SQL client, successfully read
> > and
> > > > > >> wrote with the Kafka connector to Confluent Cloud with AVRO and
> > Schema
> > > > > >> Registry enabled
> > > > > >>
> > > > > >> On Fri, Oct 20, 2023 at 2:55 PM Matthias Pohl
> > > > > >>  wrote:
> > > > > >>>
> > > > > >>> +1 (binding)
> > > > > >>>
> > > > > >>> * Downloaded artifacts
> > > > > >>> * Built Flink from sources
> > > > > >>> * Verified SHA512 checksums and GPG signatures
> > > > > >>> * Compared checkout with provided sources
> > > > > >>> * Verified pom file versions
> > > > > >>> * Verified that there are no pom/NOTICE file changes since RC1
> > > > > >>> * Deployed standalone session cluster and ran WordCount example
> > in
> > > > > batch
> > > > > >>> and streaming: Nothing suspicious in log files found
> > > > > >>>
> > > > > >>> On Thu, Oct 19, 2023 at 3:00 PM Piotr Nowojski <
> > pnowoj...@apache.org
> > > > >
> > > > > >> wrote:
> > > > > >>>
> > > > >  +1 (binding)
> > > > > 
> > > > >  Best,
> > > > >  Piotrek
> > > > > 
> > > > >  On Thu, Oct 19, 2023 at 09:55, Yun Tang 
> > wrote:
> > > > > 
> > > > > > +1 (non-binding)
> > > > > >
> > > > > >
> > > > > >  *   Build from source code
> > > > > >  *   Verify the pre-built jar packages were built with JDK8
> > > > > >  *   Verify FLIP-291 with a standalone cluster, and it works
> > fine
> > > > > >> with
> > > > > > StateMachine example.
> > > > > >  *   Checked the signature
> > > > > >  *   Viewed the PRs.
> > > > > >
> > > > > > Best
> > > > > > Yun Tang
> > > > > > 
> > > > > > From: Cheng Pan 
> > > > > > Sent: Thursday, October 19, 2023 14:38
> > > > > > To: dev@flink.apache.org 
> > > > > > Subject: RE: [VOTE] Release 1.18.0, release candidate #3
> > > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > 

Re: [DISCUSS] FLINK-25927: Make flink-connector-base dependency usage consistent across all connectors

2023-09-13 Thread Hang Ruan
Hi, all.

I would like to help to do some work about this issue. Because some classes
in flink-connector-base are supposed to be used inside the user jar
directly, FLINK-25927[1] has been reverted by FLINK-26701[2].

And the final solution is as follows.
- package flink-connector-base into flink-dist
- the external connectors will not bundle the connector-base module, which
is written in the Externalized Connector development docs[3]

But most external connectors still bundle the connector-base module
now. I will check this problem and stop bundling the connector-base module in
every externalized connector[4].

Best,
Hang

[1] https://issues.apache.org/jira/browse/FLINK-25927
[2] https://issues.apache.org/jira/browse/FLINK-26701
[3]
https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
[4] https://issues.apache.org/jira/browse/FLINK-30400

On Mon, Feb 14, 2022 at 21:34, Alexander Fedulov  wrote:

> Hi,
>
> Thomas, Chesnay, thank you for your input. Below I will try to capture two
> actionable alternatives together with their benefits and downsides:
>
> Alternative #1: Package flink-connector-base into flink-dist
>
> Downsides:
> - breaks existing CI/IDE setup that previously neither relied on flink-dist
> nor added flink-connector-base as a dependency
> - could break existing connectors due to conflicts between
> flink-connector-base of different version (if they did not relocate it)
> - more work: flink-dist needs publishing to maven central to provide a
> solution for CI/IDE setups (this is currently not done)
> - flink-dist is heavy: currently about 118MB, which could be potentially
> reduced to ~70MB by removing parts that are not directly related to
> interfaces, like flink-kubernetes, but this needs more work
>
> Benefits:
> - consistency: flink-connector-base does not get "special treatment" when
> compared to other Flink APIs
> - makes it easier for connector base to use utilities of Flink (evolve
> together)
> - makes it easier to evolve dependency on core, table-commons (only source
> compatibility required, not binary)
>
>
> Alternative #2: shade and relocate flink-connector-base in every connector
>
> Downsides:
> - will break connectors that were previously transitively pulling it in via
> flink-connector-files/flink-table uber jar
> - treats this API differently than the other Flink APIs
> - increased API compatibility surface: everything that flink-connector-base
> relies on (flink-core, flink-table-commons) has to be binary compatible
> between the versions, not just the flink-connector-base itself
>
> Benefits:
> - less work from the implementation perspective - flink-dist does not need
> to be published
> - does not break existing CI/IDE setups
> - also no need to pull in the sizeable flink-dist dependency for running in
> IDEs and CI
>
>
> All in all, the issue seems to boil down to the question of API
> compatibility guarantees, as has already been rightly pointed out in this
> thread. The main difference between the approaches is were the
> compatibility guarantee emphasis is put:
>
> 1: connector -> *COMPATIBLE* -> connector-base -> [core, table-common]
> 2: connector -> connector-base -> *COMPATIBLE* -> [core, table-common]
>
> As you see, both approaches are not ideal and have their downsides. A
> better solution could be the one where users rely on a single lightweight
> module that encapsulates all public APIs. This module could then evolve in
> sync and with strict @Public compatibility guarantees. Such an approach is
> a significant effort and, as Thomas mentioned, is only hinted at in
> FLIP-196 as the eventual goal. To move forwards while minimizing the
> potential to break existing connectors and setups, we could try to reap the
> benefits and to mitigate the downsides by combining Alternative #1 and
> Alternative #2, i.e.:
>
>  - shade and relocate all dependencies to flink-connector-base for the
> connectors maintained within Flink
>  - add a documentation notice which asks external connector developers to
> also shade and relocate flink-connector-base in their implementations
>  - package flink-connector-base into flink-dist
>
> This would allow both not to break the existing CI/IDE setups
> (flink-connector-base remains included into connectors) while also not
> break the connectors that were previously pulling in flink-connector-base
> via flink-connector-files/flink-table.
>
> The mixed solution is not meant to be a permanent one, and we should
> revisit the API compatibility topic in 1.16.
>
> Let me know what you think.
>
> Thanks,
> Alexander Fedulov
>
> On Mon, Feb 14, 2022 at 10:01 AM Chesnay Schepler 
> wrote:
>
> > Letting connectors bundle it doesn't necessarily make it harder to
> > achieve; that all depends on how we approach it;
> > e.g., everything that connector-base uses from the core Flink could be
> > required to also be annotated with Public(Evolving).
> > (i.e., treat it as if it were externalized)
> >
> > On 13/02/2022 02:12, Thomas 
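For external connectors that do bundle flink-connector-base, the shade-and-relocate step discussed in this thread typically looks like the following Maven Shade configuration. This is an illustrative pom.xml fragment under assumed names: the `myconnector` relocation prefix is a placeholder, and a real connector would adjust the artifact set to its own dependencies.

```xml
<!-- Illustrative fragment: bundle and relocate flink-connector-base
     so it cannot clash with another copy on the user classpath. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <artifactSet>
          <includes>
            <include>org.apache.flink:flink-connector-base</include>
          </includes>
        </artifactSet>
        <relocations>
          <relocation>
            <pattern>org.apache.flink.connector.base</pattern>
            <shadedPattern>org.apache.flink.connector.myconnector.shaded.connector.base</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Relocation is what distinguishes this from plain bundling: the classes are rewritten to a connector-private package, so different connector versions of flink-connector-base cannot conflict.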

Re: [DISCUSS] FLIP-328: Allow source operators to determine isProcessingBacklog based on watermark lag

2023-08-30 Thread Hang Ruan
Hi, Xuannan.

Thanks for preparing the FLIP.

After this FLIP, we will have two ways to report isProcessingBacklog: 1.
From the source; 2. Judged by the watermark lag. What is the priority
between them?
For example, what is the status isProcessingBacklog when the source report
`isProcessingBacklog=false` and the watermark lag exceeds the threshold?

Best,
Hang

On Wed, Aug 30, 2023 at 10:06, Xuannan Su  wrote:

> Hi Jing,
>
> Thank you for the suggestion.
>
> The definition of watermark lag is the same as the watermarkLag metric in
> FLIP-33[1]. More specifically, the watermark lag calculation is computed at
> the time when a watermark is emitted downstream in the following way:
> watermarkLag = CurrentTime - Watermark. I have added this description to
> the FLIP.
>
> I hope this addresses your concern.
>
> Best,
> Xuannan
>
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-33%3A+Standardize+Connector+Metrics
>
>
> > On Aug 28, 2023, at 01:04, Jing Ge  wrote:
> >
> > Hi Xuannan,
> >
> > Thanks for the proposal. +1 for me.
> >
> > There is one tiny thing that I am not sure if I understand it correctly.
> > Since there will be many different WatermarkStrategies and different
> > WatermarkGenerators. Could you please update the FLIP and add the
> > description of how the watermark lag is calculated exactly? E.g.
> Watermark
> > lag = A - B with A is the timestamp of the watermark emitted to the
> > downstream and B is(this is the part I am not really sure after
> reading
> > the FLIP).
> >
> > Best regards,
> > Jing
> >
> >
> > On Mon, Aug 21, 2023 at 9:03 AM Xuannan Su 
> wrote:
> >
> >> Hi Jark,
> >>
> >> Thanks for the comments.
> >>
> >> I agree that the current solution cannot support jobs that cannot define
> >> watermarks. However, after considering the pending-record-based
> solution, I
> >> believe the current solution is superior for the target use case as it
> is
> >> more intuitive for users. The backlog status gives users the ability to
> >> balance between throughput and latency. Making this trade-off decision
> >> based on the watermark lag is more intuitive from the user's
> perspective.
> >> For instance, a user can decide that if the job lags behind the current
> >> time by more than 1 hour, the result is not usable. In that case, we can
> >> optimize for throughput when the data lags behind by more than an hour.
> >> With the pending-record-based solution, it's challenging for users to
> >> determine when to optimize for throughput and when to prioritize
> latency.
> >>
> >> Regarding the limitations of the watermark-based solution:
> >>
> >> 1. The current solution can support jobs with sources that have event
> >> time. Users can always define a watermark at the source operator, even
> if
> >> it's not used by downstream operators, such as streaming join and
> unbounded
> >> aggregate.
> >>
> >> 2.I don't believe it's accurate to say that the watermark lag will keep
> >> increasing if no data is generated in Kafka. The watermark lag and
> backlog
> >> status are determined at the moment when the watermark is emitted to the
> >> downstream operator. If no data is emitted from the source, the
> watermark
> >> lag and backlog status will not be updated. If the WatermarkStrategy
> with
> >> idleness is used, the source becomes non-backlog when it becomes idle.
> >>
> >> 3. I think watermark lag is more intuitive to determine if a job is
> >> processing backlog data. Even when using pending records, it faces a
> >> similar issue. For example, if the source has 1K pending records, those
> >> records can span from 1 day  to 1 hour to 1 second. If the records span
> 1
> >> day, it's probably best to optimize for throughput. If they span 1
> hour, it
> >> depends on the business logic. If they span 1 second, optimizing for
> >> latency is likely the better choice.
> >>
> >> In summary, I believe the watermark-based solution is a superior choice
> >> for the target use case where watermark/event time can be defined.
> >> Additionally, I haven't come across a scenario that requires low-latency
> >> processing and reads from a source that cannot define watermarks. If we
> >> encounter such a use case, we can create another FLIP to address those
> >> needs in the future. What do you think?
> >>
> >>
> >> Best,
> >> Xuannan
> >>
> >>
> >>
> >>> On Aug 20, 2023, at 23:27, Jark Wu <imj...@gmail.com> wrote:
> >>>
> >>> Hi Xuannan,
> >>>
> >>> Thanks for opening this discussion.
> >>>
> >>> This current proposal may work in the mentioned watermark cases.
> >>> However, it seems this is not a general solution for sources to
> determine
> >>> "isProcessingBacklog".
> >>> From my point of view, there are 3 limitations of the current proposal:
> >>> 1. It doesn't cover jobs that don't have watermark/event-time defined,
> >>> for example streaming join and unbounded aggregate. We may still need
> to
> >>> figure out solutions for them.
> >>> 2. Watermark lag can not be trusted, because it increases 
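The watermark-lag check described earlier in this thread (watermarkLag = CurrentTime - Watermark at watermark-emission time, compared against a threshold) can be sketched as follows. This is an illustrative sketch only; the method and threshold names are assumptions, not FLIP-328 APIs.

```java
// Illustrative sketch of the FLIP-328 backlog check: the lag is computed when
// a watermark is emitted; threshold and names here are assumed for illustration.
public class BacklogByWatermarkLag {

    static boolean isProcessingBacklog(long currentTimeMillis,
                                       long watermarkMillis,
                                       long lagThresholdMillis) {
        long watermarkLag = currentTimeMillis - watermarkMillis;
        return watermarkLag > lagThresholdMillis;
    }

    public static void main(String[] args) {
        long now = 10_000_000L;
        // Watermark lags 2 hours behind with a 1 hour threshold: backlog.
        System.out.println(isProcessingBacklog(now, now - 7_200_000L, 3_600_000L)); // true
        // Watermark lags 1 second behind: not backlog.
        System.out.println(isProcessingBacklog(now, now - 1_000L, 3_600_000L));     // false
    }
}
```

Note that, as discussed in the thread, this check only runs when a watermark is actually emitted, so the backlog status is not updated while the source emits no data.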

Re: [ANNOUNCE] New Apache Flink Committer - Hangxiang Yu

2023-08-14 Thread Hang Ruan
Congratulations!

Best,
Hang

On Mon, Aug 14, 2023 at 22:36, Roman Khachatryan  wrote:

> Congratulations, Hangxiang!
>
> Regards,
> Roman
>
>
> On Wed, Aug 9, 2023 at 12:49 PM Benchao Li  wrote:
>
> > Congrats, Hangxiang!
> >
> > > On Tue, Aug 8, 2023 at 17:44, Jing Ge  wrote:
> >
> > > Congrats, Hangxiang!
> > >
> > > Best regards,
> > > Jing
> > >
> > > On Tue, Aug 8, 2023 at 3:04 PM Yangze Guo  wrote:
> > >
> > > > Congrats, Hangxiang!
> > > >
> > > > Best,
> > > > Yangze Guo
> > > >
> > > > On Tue, Aug 8, 2023 at 11:28 AM yh z 
> wrote:
> > > > >
> > > > > Congratulations, Hangxiang !
> > > > >
> > > > >
> > > > > Best,
> > > > > Yunhong Zheng (Swuferhong)
> > > > >
> > > > > On Tue, Aug 8, 2023 at 09:20, yuxia  wrote:
> > > > >
> > > > > > Congratulations, Hangxiang !
> > > > > >
> > > > > > Best regards,
> > > > > > Yuxia
> > > > > >
> > > > > > - Original Message -
> > > > > > From: "Wencong Liu" 
> > > > > > To: "dev" 
> > > > > > Sent: Monday, Aug 7, 2023, 11:55:24 PM
> > > > > > Subject: Re:[ANNOUNCE] New Apache Flink Committer - Hangxiang Yu
> > > > > >
> > > > > > Congratulations, Hangxiang !
> > > > > >
> > > > > >
> > > > > > Best,
> > > > > > Wencong
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > At 2023-08-07 14:57:49, "Yuan Mei" 
> wrote:
> > > > > > >On behalf of the PMC, I'm happy to announce Hangxiang Yu as a
> new
> > > > Flink
> > > > > > >Committer.
> > > > > > >
> > > > > > >Hangxiang has been active in the Flink community for more than
> 1.5
> > > > years
> > > > > > >and has played an important role in developing and maintaining
> > State
> > > > and
> > > > > > >Checkpoint related features/components, including Generic
> > > Incremental
> > > > > > >Checkpoints (take great efforts to make the feature prod-ready).
> > > > Hangxiang
> > > > > > >is also the main driver of the FLIP-263: Resolving schema
> > > > compatibility.
> > > > > > >
> > > > > > >Hangxiang is passionate about the Flink community. Besides the
> > > > technical
> > > > > > >contribution above, he is also actively promoting Flink: talks
> > about
> > > > > > Generic
> > > > > > >Incremental Checkpoints in Flink Forward and Meet-up. Hangxiang
> > also
> > > > spent
> > > > > > >a good amount of time supporting users, participating in
> > > Jira/mailing
> > > > list
> > > > > > >discussions, and reviewing code.
> > > > > > >
> > > > > > >Please join me in congratulating Hangxiang for becoming a Flink
> > > > Committer!
> > > > > > >
> > > > > > >Thanks,
> > > > > > >Yuan Mei (on behalf of the Flink PMC)
> > > > > >
> > > >
> > >
> >
> >
> > --
> >
> > Best,
> > Benchao Li
> >
>


Re: [ANNOUNCE] New Apache Flink Committer - Yanfei Lei

2023-08-14 Thread Hang Ruan
Congratulations!

Best,
Hang

On Mon, Aug 14, 2023 at 22:38, Roman Khachatryan  wrote:

> Congratulations, Yanfey!
>
> Regards,
> Roman
>
>
> On Wed, Aug 9, 2023 at 12:49 PM Benchao Li  wrote:
>
> > Congrats, YanFei!
> >
> > On Tue, Aug 8, 2023 at 17:41, Jing Ge  wrote:
> >
> > > Congrats, YanFei!
> > >
> > > Best regards,
> > > Jing
> > >
> > > On Tue, Aug 8, 2023 at 3:04 PM Yangze Guo  wrote:
> > >
> > > > Congrats, Yanfei!
> > > >
> > > > Best,
> > > > Yangze Guo
> > > >
> > > > On Tue, Aug 8, 2023 at 9:20 AM yuxia 
> > > wrote:
> > > > >
> > > > > Congratulations, Yanfei!
> > > > >
> > > > > Best regards,
> > > > > Yuxia
> > > > >
> > > > > - Original Message -
> > > > > From: "ron9 liu" 
> > > > > To: "dev" 
> > > > > Sent: Monday, Aug 7, 2023, 11:44:23 PM
> > > > > Subject: Re: [ANNOUNCE] New Apache Flink Committer - Yanfei Lei
> > > > >
> > > > > Congratulations Yanfei!
> > > > >
> > > > > Best,
> > > > > Ron
> > > > >
> > > > > On Mon, Aug 7, 2023 at 23:15, Zakelly Lan  wrote:
> > > > >
> > > > > > Congratulations, Yanfei!
> > > > > >
> > > > > > Best regards,
> > > > > > Zakelly
> > > > > >
> > > > > > On Mon, Aug 7, 2023 at 9:04 PM Lincoln Lee <
> lincoln.8...@gmail.com
> > >
> > > > wrote:
> > > > > > >
> > > > > > > Congratulations, Yanfei!
> > > > > > >
> > > > > > > Best,
> > > > > > > Lincoln Lee
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Aug 7, 2023 at 20:43, Weihua Hu  wrote:
> > > > > > >
> > > > > > > > Congratulations Yanfei!
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Weihua
> > > > > > > >
> > > > > > > >
> > > > > > > > On Mon, Aug 7, 2023 at 8:08 PM Feifan Wang <
> zoltar9...@163.com
> > >
> > > > wrote:
> > > > > > > >
> > > > > > > > > Congratulations Yanfei! :)
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > ——
> > > > > > > > > Name: Feifan Wang
> > > > > > > > > Email: zoltar9...@163.com
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >  Replied Message 
> > > > > > > > > | From | Matt Wang |
> > > > > > > > > | Date | 08/7/2023 19:40 |
> > > > > > > > > | To | dev@flink.apache.org |
> > > > > > > > > | Subject | Re: [ANNOUNCE] New Apache Flink Committer -
> > Yanfei
> > > > Lei |
> > > > > > > > > Congratulations Yanfei!
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Matt Wang
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >  Replied Message 
> > > > > > > > > | From | Mang Zhang |
> > > > > > > > > | Date | 08/7/2023 18:56 |
> > > > > > > > > | To |  |
> > > > > > > > > | Subject | Re:Re: [ANNOUNCE] New Apache Flink Committer -
> > > Yanfei
> > > > > > Lei |
> > > > > > > > > Congratulations--
> > > > > > > > >
> > > > > > > > > Best regards,
> > > > > > > > > Mang Zhang
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On 2023-08-07 18:17:58, "Yuxin Tan" 
> > wrote:
> > > > > > > > > Congrats, Yanfei!
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Yuxin
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > weijie guo  wrote on Mon, Aug 7, 2023 at
> 17:59:
> > > > > > > > >
> > > > > > > > > Congrats, Yanfei!
> > > > > > > > >
> > > > > > > > > Best regards,
> > > > > > > > >
> > > > > > > > > Weijie
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Biao Geng  wrote on Mon, Aug 7, 2023 at 17:03:
> > > > > > > > >
> > > > > > > > > Congrats, Yanfei!
> > > > > > > > > Best,
> > > > > > > > > Biao Geng
> > > > > > > > >
> > > > > > > > > Sent from Outlook for iOS
> > > > > > > > > 
> > > > > > > > > From: Qingsheng Ren 
> > > > > > > > > Sent: Monday, August 7, 2023 4:23:52 PM
> > > > > > > > > To: dev@flink.apache.org 
> > > > > > > > > Subject: Re: [ANNOUNCE] New Apache Flink Committer - Yanfei Lei
> > > > > > > > >
> > > > > > > > > Congratulations and welcome, Yanfei!
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Qingsheng
> > > > > > > > >
> > > > > > > > > On Mon, Aug 7, 2023 at 4:19 PM Matthias Pohl <
> > > > matthias.p...@aiven.io
> > > > > > > > > .invalid>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Congratulations, Yanfei! :)
> > > > > > > > >
> > > > > > > > > On Mon, Aug 7, 2023 at 10:00 AM Junrui Lee <
> > > jrlee@gmail.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Congratulations Yanfei!
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Junrui
> > > > > > > > >
> > > > > > > > > Yun Tang  wrote on Mon, Aug 7, 2023 at 15:19:
> > > > > > > > >
> > > > > > > > > Congratulations, Yanfei!
> > > > > > > > >
> > > > > > > > > Best
> > > > > > > > > Yun Tang
> > > > > > > > > 
> > > > > > > > > From: Danny Cranmer 
> > > > > > > > > Sent: Monday, August 7, 2023 15:10
> > > > > > > > > To: dev 
> > > > > > > > > Subject: Re: [ANNOUNCE] New Apache Flink Committer - Yanfei
> > Lei
> > > > > > > > >
> > > > > > > > > Congrats Yanfei! Welcome to the team.
> > > > > > > > >
> > > > > > > > > Danny
> > > > > > > > >

[jira] [Created] (FLINK-32862) Support INIT operation type to be compatible with DTS on Alibaba Cloud

2023-08-14 Thread Hang Ruan (Jira)
Hang Ruan created FLINK-32862:
-

 Summary: Support INIT operation type to be compatible with DTS on 
Alibaba Cloud
 Key: FLINK-32862
 URL: https://issues.apache.org/jira/browse/FLINK-32862
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Hang Ruan


Canal JSON messages from DTS on Alibaba Cloud may contain a new operation 
type, `INIT`. Currently we cannot handle these messages.
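
For illustration, such a message might look roughly like the following (a hypothetical payload; the envelope fields follow the usual Canal JSON shape, and all concrete values here are assumptions):

```json
{
  "data": [{"id": "101", "name": "scooter"}],
  "database": "inventory",
  "table": "products",
  "isDdl": false,
  "type": "INIT",
  "es": 1691992800000,
  "ts": 1691992800123
}
```

Today the canal-json format would reject this record because `INIT` is not one of the recognized operation types (`INSERT`, `UPDATE`, `DELETE`, `CREATE`, ...); the proposal is to treat it like an insert of the initial snapshot data.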



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-309: Support using larger checkpointing interval when source is processing backlog

2023-07-18 Thread Hang Ruan
+1 (non-binding)

Thanks for driving.

Best,
Hang

Leonard Xu  wrote on Wed, Jul 19, 2023 at 10:42:

> Thanks Dong for the continuous work.
>
> +1(binding)
>
> Best,
> Leonard
>
> > On Jul 18, 2023, at 10:16 PM, Jingsong Li 
> wrote:
> >
> > +1 binding
> >
> > Thanks Dong for continuous driving.
> >
> > Best,
> > Jingsong
> >
> > On Tue, Jul 18, 2023 at 10:04 PM Jark Wu  wrote:
> >>
> >> +1 (binding)
> >>
> >> Best,
> >> Jark
> >>
> >> On Tue, 18 Jul 2023 at 20:30, Piotr Nowojski 
> wrote:
> >>
> >>> +1 (binding)
> >>>
> >>> Piotrek
> >>>
> >>> On Tue, 18 Jul 2023 at 08:51, Jing Ge 
> wrote:
> >>>
>  +1(binding)
> 
>  Best regards,
>  Jing
> 
>  On Tue, Jul 18, 2023 at 8:31 AM Rui Fan <1996fan...@gmail.com> wrote:
> 
> > +1(binding)
> >
> > Best,
> > Rui Fan
> >
> >
> > On Tue, Jul 18, 2023 at 12:04 PM Dong Lin 
> wrote:
> >
> >> Hi all,
> >>
> >> We would like to start the vote for FLIP-309: Support using larger
> >> checkpointing interval when source is processing backlog [1]. This
> >>> FLIP
> > was
> >> discussed in this thread [2].
> >>
> >> The vote will be open until at least July 21st (at least 72 hours),
> >> following
> >> the consensus voting process.
> >>
> >> Cheers,
> >> Yunfeng and Dong
> >>
> >> [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-309
> >>
> >>
> >
> 
> >>>
> %3A+Support+using+larger+checkpointing+interval+when+source+is+processing+backlog
> >> [2]
> https://lists.apache.org/thread/l1l7f30h7zldjp6ow97y70dcthx7tl37
> >>
> >
> 
> >>>
>
>


Re: [ANNOUNCE] Apache Flink has won the 2023 SIGMOD Systems Award

2023-07-06 Thread Hang Ruan
Hi, Leonard.

I would like to help to add this page. Please assign this issue to me.
Thanks.

Best,
Hang

Leonard Xu  wrote on Fri, Jul 7, 2023 at 11:26:

> Congrats to all !
>
> It will be helpful to promote Apache Flink if we can add a page to our
> website like others[2]. I’ve created an issue to improve this.
>
>
> Best,
> Leonard
>
> [1] https://issues.apache.org/jira/browse/FLINK-32555
> [2] https://spark.apache.org/news/sigmod-system-award.html
>


Re: [DISCUSS] FLIP-309: Enable operators to trigger checkpoints dynamically

2023-07-05 Thread Hang Ruan
ics and its state, to
> decide
> >> if
> >> > it
> >> > > considers itself as "processingBacklog" or "veryBackpressured". The
> >> base
> >> > > implementation could do it via a similar mechanism as I was
> >> proposing
> >> > > previously, via looking at the busy/backPressuredTimeMsPerSecond,
> >> > > pendingRecords and processing rate.
> >> > > 2. SourceReaderBase could send an event with
> >> > > "processingBacklog"/"veryBackpressured" state.
> >> > > 3. SourceCoordinator would collect those events, and decide what
> >> should
> >> > it
> >> > > do, whether it should switch whole source to the
> >> > > "processingBacklog"/"veryBackpressured" state or not.
> >> > >
> >> > That could provide eventually a generic solution that works fo every
> >> > > source that reports the required metrics. Each source implementation
> >> > could
> >> > > decide
> >> > > whether to use that default behaviour, or if maybe it's better to
> >> > override
> >> > > the default, or combine default with something custom (like
> >> > HybridSource).
> >> > >
> >> > > And as a first step, we could implement that mechanism only on the
> >> > > SourceCoordinator side, without events, without the default generic
> >> > > solution and use
> >> > > it in the HybridSource/MySQL CDC.
> >> > >
> >> > > This approach has some advantages compared to my previous proposal:
> >> > >   + no need to tinker with metrics and pushing metrics from TMs to
> JM
> >> > >   + somehow communicating this information via Events seems a bit
> >> cleaner
> >> > > to me and avoids problems with freshness of the metrics
> >> > > And some issues:
> >> > >   - I don't know if it can be made pluggable in the future. If a
> user
> >> > could
> >> > > implement a custom `CheckpointTrigger` that would automatically work
> >> with
> >> > > all/most
> >> > > of the pre-existing sources?
> >> > >   - I don't know if it can be expanded if needed in the future, to
> >> make
> >> > > decisions based on operators in the middle of a jobgraph.
> >> > >
> >> >
> >> > Thanks for the proposal. Overall, I agree it is valuable to be able to
> >> > determine the isProcessingBacklog based on the source reader metrics.
> >> >
> >> > I will probably suggest making the following changes upon your idea:
> >> > - Instead of letting the source reader send events to the source
> >> > coordinator, the source reader can emit RecordAttributes(isBacklog=..)
> >> as
> >> > described earlier. We will let two-phase commit operator to decide
> >> whether
> >> > they need the short checkpoint interval.
> >> > - We consider isProcessingBacklog=true when watermarkLag is larger
> than
> >> a
> >> > threshold.
> >> >
> >> > This is a nice addition. But I think we still need extra information
> >> from
> >> > user (e.g. the threshold whether the watermarkLag or
> >> > backPressuredTimeMsPerSecond is too high) with extra public APIs for
> >> this
> >> > feature to work reliably. This is because there is no default
> algorithm
> >> > that works in all cases without extra specification from users, due to
> >> the
> >> > issues around the default algorithm we discussed previously.
> >> >
> >> > Overall, I think the current proposal in FLIP-309 is a first step
> >> towards
> >> > addressing these problems. The API for source enumerator to explicitly
> >> set
> >> > isProcessingBacklog based on its status is useful even if we can
> support
> >> > metrics-based solutions.
> >> >
> >> > If that looks reasonable, can we agree to make incremental improvement
> >> and
> >> > work on the metrics-based solution in a followup FLIP?
> >> >
> >> >
> >> > >
> >> > > 3. ===
> >> > >
> >> > > Independent of that, during some brainstorming between me, Chesnay
> and
> >> > > Stefan Richter, an idea popped up, that I thi

Re: Re: [ANNOUNCE] Apache Flink has won the 2023 SIGMOD Systems Award

2023-07-04 Thread Hang Ruan
Congratulations!

Best,
Hang

Jingsong Li  wrote on Tue, Jul 4, 2023 at 13:47:

> Congratulations!
>
> Thank you! All of the Flink community!
>
> Best,
> Jingsong
>
> On Tue, Jul 4, 2023 at 1:24 PM tison  wrote:
> >
> > Congrats and with honor :D
> >
> > Best,
> > tison.
> >
> >
> > Mang Zhang  wrote on Tue, Jul 4, 2023 at 11:08:
> >
> > > Congratulations!
> > >
> > > --
> > >
> > > Best regards,
> > > Mang Zhang
> > >
> > >
> > >
> > >
> > >
> > > >On 2023-07-04 01:53:46, "liu ron"  wrote:
> > > >Congrats everyone
> > > >
> > > >Best,
> > > >Ron
> > > >
> > > >Jark Wu  wrote on Mon, Jul 3, 2023 at 22:48:
> > > >
> > > >> Congrats everyone!
> > > >>
> > > >> Best,
> > > >> Jark
> > > >>
> > > >> > On Jul 3, 2023, at 22:37, Yuval Itzchakov  wrote:
> > > >> >
> > > >> > Congrats team!
> > > >> >
> > > >> > On Mon, Jul 3, 2023, 17:28 Jing Ge via user <
> u...@flink.apache.org
> > > >> > wrote:
> > > >> >> Congratulations!
> > > >> >>
> > > >> >> Best regards,
> > > >> >> Jing
> > > >> >>
> > > >> >>
> > > >> >> On Mon, Jul 3, 2023 at 3:21 PM yuxia <
> luoyu...@alumni.sjtu.edu.cn
> > > >> > wrote:
> > > >> >>> Congratulations!
> > > >> >>>
> > > >> >>> Best regards,
> > > >> >>> Yuxia
> > > >> >>>
> > > >> >>> From: "Pushpa Ramakrishnan"   > > >> pushpa.ramakrish...@icloud.com>>
> > > >> >>> To: "Xintong Song"  > > >> tonysong...@gmail.com>>
> > > >> >>> Cc: "dev" mailto:dev@flink.apache.org>>,
> > > >> "User" mailto:u...@flink.apache.org>>
> > > >> >>> Sent: Monday, July 3, 2023, 8:36:30 PM
> > > >> >>> Subject: Re: [ANNOUNCE] Apache Flink has won the 2023 SIGMOD Systems
> > > Award
> > > >> >>>
> > > >> >>> Congratulations 🥳
> > > >> >>>
> > > >> >>> On 03-Jul-2023, at 3:30 PM, Xintong Song  > > >> > wrote:
> > > >> >>>
> > > >> >>> 
> > > >> >>> Dear Community,
> > > >> >>>
> > > >> >>> I'm pleased to share this good news with everyone. As some of
> you
> > > may
> > > >> have already heard, Apache Flink has won the 2023 SIGMOD Systems
> Award
> > > [1].
> > > >> >>>
> > > >> >>> "Apache Flink greatly expanded the use of stream
> data-processing."
> > > --
> > > >> SIGMOD Awards Committee
> > > >> >>>
> > > >> >>> SIGMOD is one of the most influential data management research
> > > >> conferences in the world. The Systems Award is awarded to an
> individual
> > > or
> > > >> set of individuals to recognize the development of a software or
> > > hardware
> > > >> system whose technical contributions have had significant impact on
> the
> > > >> theory or practice of large-scale data management systems. Winning
> of
> > > the
> > > >> award indicates the high recognition of Flink's technological
> > > advancement
> > > >> and industry influence from academia.
> > > >> >>>
> > > >> >>> As an open-source project, Flink wouldn't have come this far
> without
> > > >> the wide, active and supportive community behind it. Kudos to all
> of us
> > > who
> > > >> helped make this happen, including the over 1,400 contributors and
> many
> > > >> others who contributed in ways beyond code.
> > > >> >>>
> > > >> >>> Best,
> > > >> >>> Xintong (on behalf of the Flink PMC)
> > > >> >>>
> > > >> >>> [1] https://sigmod.org/2023-sigmod-systems-award/
> > > >> >>>
> > > >>
> > > >>
> > >
>


Re: [DISCUSS] FLIP-309: Enable operators to trigger checkpoints dynamically

2023-06-28 Thread Hang Ruan
Thanks for Dong and Yunfeng's work.

The FLIP looks good to me. This new version is clearer to understand.

Best,
Hang

Dong Lin  wrote on Tue, Jun 27, 2023 at 16:53:

> Thanks Jack, Jingsong, and Zhu for the review!
>
> Thanks Zhu for the suggestion. I have updated the configuration name as
> suggested.
>
> On Tue, Jun 27, 2023 at 4:45 PM Zhu Zhu  wrote:
>
> > Thanks Dong and Yunfeng for creating this FLIP and driving this
> discussion.
> >
> > The new design looks generally good to me. Increasing the checkpoint
> > interval when the job is processing backlogs is easier for users to
> > understand and can help in more scenarios.
> >
> > I have one comment about the new configuration.
> > Naming the new configuration
> > "execution.checkpointing.interval-during-backlog" would be better
> > according to Flink config naming convention.
> > This is also because nested config keys should be avoided. See
> > FLINK-29372 for more details.
> >
> > Thanks,
> > Zhu
> >
> > Jingsong Li  wrote on Tue, Jun 27, 2023 at 15:45:
> > >
> > > Looks good to me!
> > >
> > > Thanks Dong, Yunfeng and all for your discussion and design.
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Tue, Jun 27, 2023 at 3:35 PM Jark Wu  wrote:
> > > >
> > > > Thank you Dong for driving this FLIP.
> > > >
> > > > The new design looks good to me!
> > > >
> > > > Best,
> > > > Jark
> > > >
> > > > > On Jun 27, 2023, at 14:38, Dong Lin  wrote:
> > > > >
> > > > > Thank you Leonard for the review!
> > > > >
> > > > > Hi Piotr, do you have any comments on the latest proposal?
> > > > >
> > > > > I am wondering if it is OK to start the voting thread this week.
> > > > >
> > > > > On Mon, Jun 26, 2023 at 4:10 PM Leonard Xu 
> > wrote:
> > > > >
> > > > >> Thanks Dong for driving this FLIP forward!
> > > > >>
> > > > >> Introducing a `backlog status` concept for Flink jobs makes sense to
> > me as
> > > > >> following reasons:
> > > > >>
> > > > >> From concept/API design perspective, it’s more general and natural
> > than
> > > > >> above proposals as it can be used in HybridSource for bounded
> > records, CDC
> > > > >> Source for history snapshot and general sources like KafkaSource
> for
> > > > >> historical messages.
> > > > >>
> > > > >> From user cases/requirements, I’ve seen many users manually set a
> > larger
> > > > >> checkpoint interval during backfilling and then set a shorter
> > checkpoint
> > > > >> interval for real-time processing in their production environments
> > as a
> > > > >> flink application optimization. Now, the Flink framework can apply
> > this
> > > > >> optimization without requiring the user to set the checkpoint
> > interval and
> > > > >> restart the job multiple times.
> > > > >>
> > > > >> Following supporting using larger checkpoint for job under backlog
> > status
> > > > >> in current FLIP, we can explore supporting larger
> > parallelism/memory/cpu
> > > > >> for job under backlog status in the future.
> > > > >>
> > > > >> In short, the updated FLIP looks good to me.
> > > > >>
> > > > >>
> > > > >> Best,
> > > > >> Leonard
> > > > >>
> > > > >>
> > > > >>> On Jun 22, 2023, at 12:07 PM, Dong Lin 
> > wrote:
> > > > >>>
> > > > >>> Hi Piotr,
> > > > >>>
> > > > >>> Thanks again for proposing the isProcessingBacklog concept.
> > > > >>>
> > > > >>> After discussing with Becket Qin and thinking about this more, I
> > agree it
> > > > >>> is a better idea to add a top-level concept to all source
> > operators to
> > > > >>> address the target use-case.
> > > > >>>
> > > > >>> The main reason that changed my mind is that isProcessingBacklog
> > can be
> > > > >>> described as an inherent/natural attribute of every source
> instance
> > and
> > > > >> its
> > > > >>> semantics does not need to depend on any specific checkpointing
> > policy.
> > > > >>> Also, we can hardcode the isProcessingBacklog behavior for the
> > sources we
> > > > >>> have considered so far (e.g. HybridSource and MySQL CDC source)
> > without
> > > > >>> asking users to explicitly configure the per-source behavior,
> which
> > > > >> indeed
> > > > >>> provides better user experience.
> > > > >>>
> > > > >>> I have updated the FLIP based on the latest suggestions. The
> > latest FLIP
> > > > >> no
> > > > >>> longer introduces per-source config that can be used by
> end-users.
> > While
> > > > >> I
> > > > >>> agree with you that CheckpointTrigger can be a useful feature to
> > address
> > > > >>> additional use-cases, I am not sure it is necessary for the
> > use-case
> > > > >>> targeted by FLIP-309. Maybe we can introduce CheckpointTrigger
> > separately
> > > > >>> in another FLIP?
> > > > >>>
> > > > >>> Can you help take another look at the updated FLIP?
> > > > >>>
> > > > >>> Best,
> > > > >>> Dong
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >>> On Fri, Jun 16, 2023 at 11:59 PM Piotr Nowojski <
> > pnowoj...@apache.org>
> > > > >>> wrote:
> > > > >>>
> > > >  Hi Dong,
> > > > 
> > > > > Suppose there are 1000 subtask and each subtask has 1% chance
> of
> > 

Re: [VOTE] FLIP-295: Support lazy initialization of catalogs and persistence of catalog configurations

2023-06-14 Thread Hang Ruan
+1 (non-binding)

Thanks for Feng driving it.

Best,
Hang

Feng Jin  wrote on Wed, Jun 14, 2023 at 10:36:

> Hi everyone
>
> Thanks for all the feedback about the FLIP-295: Support lazy initialization
> of catalogs and persistence of catalog configurations[1].
> [2] is the discussion thread.
>
>
> I'd like to start a vote for it. The vote will be open for at least 72
> hours (excluding weekends, until June 19, 10:00 AM GMT) unless there is an
> objection or an insufficient number of votes.
>
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-295%3A+Support+lazy+initialization+of+catalogs+and+persistence+of+catalog+configurations
> [2]https://lists.apache.org/thread/dcwgv0gmngqt40fl3694km53pykocn5s
>
>
> Best,
> Feng
>

