flink sqlgateway: how to set the group account when submitting SQL jobs

2024-05-28 Post by 阿华田


When submitting SQL jobs through the Flink SQL Gateway, I found that once the gateway service starts, it submits jobs to the YARN cluster under the tenant identity of the machine it runs on by default. Since our company's Hadoop cluster enforces tenant permissions, we need to set the submitting user's information. How can I set the group account when submitting SQL jobs through the Flink SQL Gateway?
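A hedged sketch of two common ways to control the submitting user (these are assumptions to verify against your cluster's auth mode; `tenant_user` and the paths are placeholders):

```shell
# Simple-auth Hadoop clusters: impersonate the tenant user via an
# environment variable before starting the gateway process
export HADOOP_USER_NAME=tenant_user
./bin/sql-gateway.sh start

# Kerberized clusters: configure a keytab in the gateway's flink-conf.yaml
# so jobs are submitted as that principal
# security.kerberos.login.keytab: /path/to/tenant_user.keytab
# security.kerberos.login.principal: tenant_user@YOUR.REALM
```

`security.kerberos.login.*` are standard Flink options; whether `HADOOP_USER_NAME` is honored depends on whether the cluster uses simple authentication.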



Is there a way to rate-limit Flink SQL consumption of a Kafka topic?

2024-05-27 Post by casel.chen
Is there a way to rate-limit Flink SQL consumption of a Kafka topic? The scenario: we consume a Kafka topic and write the data into a downstream MongoDB. During business peaks, the write pressure on MongoDB is high, so we would like to throttle the Kafka consumption. How can this be implemented?
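As far as I know, the Flink SQL Kafka connector has no built-in rate-limit option; common workarounds are lowering source parallelism, relying on sink backpressure, or moving the throttle into a DataStream map function. A minimal, Flink-free sketch of such a throttle (class and method names are mine, not a Flink API; in a real job you would call `acquire()` once per record inside a `RichMapFunction`):

```java
import java.util.concurrent.TimeUnit;

/**
 * Hedged sketch: a minimal blocking throttle handing out one permit
 * per fixed interval, suitable for calling once per record.
 */
class SimpleRateLimiter {
    private final long intervalNanos; // time between permits
    private long nextFreeNanos;       // earliest time the next permit is free

    SimpleRateLimiter(double permitsPerSecond) {
        this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
        this.nextFreeNanos = System.nanoTime();
    }

    /** Blocks until the next permit becomes available. */
    void acquire() {
        long now = System.nanoTime();
        long wait = nextFreeNanos - now;
        nextFreeNanos = Math.max(now, nextFreeNanos) + intervalNanos;
        if (wait > 0) {
            try {
                TimeUnit.NANOSECONDS.sleep(wait);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Note that throttling inside an operator slows the whole chain via backpressure, so the effective source rate is capped without any connector support.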

Re: Question about the iterate operation in the Flink 1.19 documentation

2024-05-20 Post by Xuyang
Hi,

The Iterate API was deprecated in 1.19 and is no longer supported; see [1][2] for details. The FLIP [1] describes an alternative approach [3].




[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-357%3A+Deprecate+Iteration+API+of+DataStream

[2] https://issues.apache.org/jira/browse/FLINK-33144

[3] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=184615300




--

Best!
Xuyang





On 2024-05-20 22:39:37, "" wrote:
>Dear Flink development team,
>
>Hello!
>
>I am currently learning how to implement iterative algorithms, such as single-source shortest paths on a graph, with Apache Flink's DataStream API. In the Flink 1.18 documentation I noticed an introduction to the iterate operation; see: https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/overview/#iterations
>
>However, the Flink 1.19 documentation no longer mentions the iterate operation, which confuses me. Does this mean iterate is no longer supported in the latest version? If so, how should I perform iterative computation on a data stream?
>
>Thank you very much for your time and help; I look forward to your reply.
>
>Thanks!
>
>李智诚


Question about the iterate operation in the Flink 1.19 documentation

2024-05-20 Post by www
Dear Flink development team,

Hello!

I am currently learning how to implement iterative algorithms, such as single-source shortest paths on a graph, with Apache Flink's DataStream API. In the Flink 1.18 documentation I noticed an introduction to the iterate operation; see: https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/overview/#iterations

However, the Flink 1.19 documentation no longer mentions the iterate operation, which confuses me. Does this mean iterate is no longer supported in the latest version? If so, how should I perform iterative computation on a data stream?

Thank you very much for your time and help; I look forward to your reply.

Thanks!

李智诚

Re: Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-19 Post by Jingsong Li
CC to the Paimon community.

Best,
Jingsong

On Mon, May 20, 2024 at 9:55 AM Jingsong Li  wrote:
>
> Amazing, congrats!
>
> Best,
> Jingsong
>
> On Sat, May 18, 2024 at 3:10 PM 大卫415 <2446566...@qq.com.invalid> wrote:
> >
> > Unsubscribe
> >
> >
> >
> >
> >
> >
> >
> > Original Email
> >
> > Sender: "gongzhongqiang" <gongzhongqi...@apache.org>
> >
> > Sent Time: 2024/5/17 23:10
> >
> > To: "Qingsheng Ren" <re...@apache.org>
> >
> > Cc: "dev" <d...@flink.apache.org>; "user" <u...@flink.apache.org>;
> > "user-zh" <user-zh@flink.apache.org>; "Apache Announce List" <annou...@apache.org>
> >
> > Subject: Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released
> >
> >
> > Congratulations !
> > Thanks for all contributors.
> >
> >
> > Best,
> >
> > Zhongqiang Gong
> >
> > Qingsheng Ren wrote on Fri, May 17, 2024 at 17:33:
> >
> >  The Apache Flink community is very happy to announce the release of
> >  Apache Flink CDC 3.1.0.
> > 
> >  Apache Flink CDC is a distributed data integration tool for real time
> >  data and batch data, bringing the simplicity and elegance of data
> >  integration via YAML to describe the data movement and transformation
> >  in a data pipeline.
> > 
> >  Please check out the release blog post for an overview of the release:
> > 
> >  
> > https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> > 
> >  The release is available for download at:
> >  https://flink.apache.org/downloads.html
> > 
> >  Maven artifacts for Flink CDC can be found at:
> >  https://search.maven.org/search?q=g:org.apache.flink%20cdc
> > 
> >  The full release notes are available in Jira:
> > 
> >  
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> > 
> >  We would like to thank all contributors of the Apache Flink community
> >  who made this release possible!
> > 
> >  Regards,
> >  Qingsheng Ren
> > 


Re: Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-19 Post by Jingsong Li
Amazing, congrats!

Best,
Jingsong

On Sat, May 18, 2024 at 3:10 PM 大卫415 <2446566...@qq.com.invalid> wrote:
>
> Unsubscribe
>
>
>
>
>
>
>
> Original Email
>
> Sender: "gongzhongqiang" <gongzhongqi...@apache.org>
>
> Sent Time: 2024/5/17 23:10
>
> To: "Qingsheng Ren" <re...@apache.org>
>
> Cc: "dev" <d...@flink.apache.org>; "user" <u...@flink.apache.org>;
> "user-zh" <user-zh@flink.apache.org>; "Apache Announce List" <annou...@apache.org>
>
> Subject: Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released
>
>
> Congratulations !
> Thanks for all contributors.
>
>
> Best,
>
> Zhongqiang Gong
>
> Qingsheng Ren wrote on Fri, May 17, 2024 at 17:33:
>
>  The Apache Flink community is very happy to announce the release of
>  Apache Flink CDC 3.1.0.
> 
>  Apache Flink CDC is a distributed data integration tool for real time
>  data and batch data, bringing the simplicity and elegance of data
>  integration via YAML to describe the data movement and transformation
>  in a data pipeline.
> 
>  Please check out the release blog post for an overview of the release:
> 
>  
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> 
>  The release is available for download at:
>  https://flink.apache.org/downloads.html
> 
>  Maven artifacts for Flink CDC can be found at:
>  https://search.maven.org/search?q=g:org.apache.flink%20cdc
> 
>  The full release notes are available in Jira:
> 
>  
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> 
>  We would like to thank all contributors of the Apache Flink community
>  who made this release possible!
> 
>  Regards,
>  Qingsheng Ren
> 


Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Post by gongzhongqiang
Congratulations !
Thanks for all contributors.


Best,

Zhongqiang Gong

Qingsheng Ren wrote on Fri, May 17, 2024 at 17:33:

> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
>
> Please check out the release blog post for an overview of the release:
>
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
> Regards,
> Qingsheng Ren
>


Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Post by Hang Ruan
Congratulations!

Thanks for the great work.

Best,
Hang

Qingsheng Ren wrote on Fri, May 17, 2024 at 17:33:

> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
>
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
>
> Please check out the release blog post for an overview of the release:
>
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
>
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
>
> Regards,
> Qingsheng Ren
>


Re: [ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Post by Leonard Xu
Congratulations !

Thanks Qingsheng for the great work and all contributors involved !!

Best,
Leonard


> On May 17, 2024 at 5:32 PM, Qingsheng Ren wrote:
> 
> The Apache Flink community is very happy to announce the release of
> Apache Flink CDC 3.1.0.
> 
> Apache Flink CDC is a distributed data integration tool for real time
> data and batch data, bringing the simplicity and elegance of data
> integration via YAML to describe the data movement and transformation
> in a data pipeline.
> 
> Please check out the release blog post for an overview of the release:
> https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> Maven artifacts for Flink CDC can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20cdc
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387
> 
> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> 
> Regards,
> Qingsheng Ren



[ANNOUNCE] Apache Flink CDC 3.1.0 released

2024-05-17 Post by Qingsheng Ren
The Apache Flink community is very happy to announce the release of
Apache Flink CDC 3.1.0.

Apache Flink CDC is a distributed data integration tool for real time
data and batch data, bringing the simplicity and elegance of data
integration via YAML to describe the data movement and transformation
in a data pipeline.

Please check out the release blog post for an overview of the release:
https://flink.apache.org/2024/05/17/apache-flink-cdc-3.1.0-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink CDC can be found at:
https://search.maven.org/search?q=g:org.apache.flink%20cdc

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12354387

We would like to thank all contributors of the Apache Flink community
who made this release possible!

Regards,
Qingsheng Ren


Re: Flink 1.18.1: state recovery after restart

2024-05-16 Post by Yanfei Lei
This looks like the same problem as FLINK-34063 / FLINK-33863; you could try upgrading to 1.18.2.
[1] https://issues.apache.org/jira/browse/FLINK-33863
[2] https://issues.apache.org/jira/browse/FLINK-34063
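For reference, the linked issues involve compressed operator state failing to restore, which matches the `SnappyFramedInputStream: invalid stream header` in the trace below. Besides upgrading, one hedged check is that snapshot compression is configured identically in the run that wrote the checkpoint and the restoring run; the relevant flink-conf.yaml key (default false) is:

```
# must match the setting in effect when the checkpoint/savepoint was taken
execution.checkpointing.snapshot-compression: true
```

This is only a mitigation to verify, not a confirmed fix for the linked bugs.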

陈叶超 wrote on Thu, May 16, 2024 at 16:38:
>
> After upgrading to Flink 1.18.1, restoring state on job restart fails with the following error:
> 2024-04-09 13:03:48
> java.lang.Exception: Exception while creating StreamOperatorStateContext.
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:258)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:256)
> at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:753)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:728)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:693)
> at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
> at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:922)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
> at java.lang.Thread.run(Thread.java:750)
> Caused by: org.apache.flink.util.FlinkException: Could not restore operator 
> state backend for 
> RowDataStoreWriteOperator_8d96fc510e75de3baf03ef7367db7d42_(2/2) from any of 
> the 1 provided restore options.
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.operatorStateBackend(StreamTaskStateInitializerImpl.java:289)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:176)
> ... 11 more
> Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed 
> when trying to restore operator state backend
> at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:88)
> at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createOperatorStateBackend(EmbeddedRocksDBStateBackend.java:533)
> at 
> org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createOperatorStateBackend(RocksDBStateBackend.java:380)
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$operatorStateBackend$0(StreamTaskStateInitializerImpl.java:280)
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
> ... 13 more
> Caused by: java.io.IOException: invalid stream header
> at 
> org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:235)
> at 
> org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:145)
> at 
> org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:129)
> at 
> org.apache.flink.runtime.state.SnappyStreamCompressionDecorator.decorateWithCompression(SnappyStreamCompressionDecorator.java:53)
> at 
> org.apache.flink.runtime.state.StreamCompressionDecorator.decorateWithCompression(StreamCompressionDecorator.java:60)
> at 
> org.apache.flink.runtime.state.CompressibleFSDataInputStream.<init>(CompressibleFSDataInputStream.java:39)
> at 
> org.apache.flink.runtime.state.OperatorStateRestoreOperation.restore(OperatorStateRestoreOperation.java:185)
> at 
> org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:85)
> ... 18 more
>


-- 
Best,
Yanfei


Get access to unmatched events in Apache Flink CEP

2024-05-16 Post by Anton Sidorov
Hello!

I have a Flink job with a CEP pattern.

Pattern example:

// Strict Contiguity
// a b+ c d e
Pattern.begin("a", AfterMatchSkipStrategy.skipPastLastEvent()).where(...)
.next("b").where(...).oneOrMore()
.next("c").where(...)
.next("d").where(...)
.next("e").where(...);

My input stream delivers events out of order:

a b d c e

On the output I don't get any match, but I would like access to the events
that did not match.

Can I access the intermediate NFA state of the CEP pattern, or is there some
other way to view unmatched events?

Example project with CEP pattern on github
<https://github.com/A-Kinski/apache-flink-cep/tree/main>, and my question
on SO
<https://stackoverflow.com/questions/78483004/get-access-to-unmatching-events-in-apache-flink-cep>

Thanks in advance


Flink 1.18.1: state recovery after restart

2024-05-16 Post by 陈叶超
After upgrading to Flink 1.18.1, restoring state on job restart fails with the following error:
2024-04-09 13:03:48
java.lang.Exception: Exception while creating StreamOperatorStateContext.
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:258)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:256)
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:753)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:728)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:693)
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:922)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.flink.util.FlinkException: Could not restore operator 
state backend for 
RowDataStoreWriteOperator_8d96fc510e75de3baf03ef7367db7d42_(2/2) from any of 
the 1 provided restore options.
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:160)
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.operatorStateBackend(StreamTaskStateInitializerImpl.java:289)
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:176)
... 11 more
Caused by: org.apache.flink.runtime.state.BackendBuildingException: Failed when 
trying to restore operator state backend
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:88)
at 
org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createOperatorStateBackend(EmbeddedRocksDBStateBackend.java:533)
at 
org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createOperatorStateBackend(RocksDBStateBackend.java:380)
at 
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$operatorStateBackend$0(StreamTaskStateInitializerImpl.java:280)
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:168)
at 
org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:135)
... 13 more
Caused by: java.io.IOException: invalid stream header
at 
org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:235)
at 
org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:145)
at 
org.xerial.snappy.SnappyFramedInputStream.<init>(SnappyFramedInputStream.java:129)
at 
org.apache.flink.runtime.state.SnappyStreamCompressionDecorator.decorateWithCompression(SnappyStreamCompressionDecorator.java:53)
at 
org.apache.flink.runtime.state.StreamCompressionDecorator.decorateWithCompression(StreamCompressionDecorator.java:60)
at 
org.apache.flink.runtime.state.CompressibleFSDataInputStream.<init>(CompressibleFSDataInputStream.java:39)
at 
org.apache.flink.runtime.state.OperatorStateRestoreOperation.restore(OperatorStateRestoreOperation.java:185)
at 
org.apache.flink.runtime.state.DefaultOperatorStateBackendBuilder.build(DefaultOperatorStateBackendBuilder.java:85)
... 18 more



Re:Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-15 Post by Xuyang
Hi,

> Can we use Chinese now?

I see you sent this to the Chinese-language support mailing list, so yes.

> Just edit the Factory file in the gateway jar under the opt directory to register the connector

Do you mean that the previous error was something like "cannot find a jdbc
connector", and that it was fixed by adding the jdbc connector's Factory
implementation class to the Factory (SPI) file under META-INF/services inside
the gateway jar?

If so, that is a bit strange, because flink-connector-jdbc's own SPI file
already lists the relevant classes [1]; in theory, placing the jar in the lib
directory should be enough for SPI discovery.

[1] 
https://github.com/apache/flink-connector-jdbc/blob/bde28e6a92ffa75ae45bc8df6be55d299ff995a2/flink-connector-jdbc/src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory#L16
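For context, the SPI file in question is a plain text list of factory classes; a sketch of what flink-connector-jdbc ships (the class name is taken from the linked file):

```
# META-INF/services/org.apache.flink.table.factories.Factory
org.apache.flink.connector.jdbc.table.JdbcDynamicTableFactory
```

If two jars each ship this file, a shaded build must merge them (e.g. with Maven Shade's ServicesResourceTransformer); otherwise one list silently overwrites the other and the connector becomes undiscoverable.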




--

Best!
Xuyang





On 2024-05-15 15:51:49, abc15...@163.com wrote:
>Can we use Chinese now? Just edit the Factory file in the gateway jar under the opt directory to register the connector.
>
>
>> On May 15, 2024 at 15:36, Xuyang wrote:
>> 
>> Hi, it looks like your earlier problem was that the jdbc driver could not be found. Could you briefly describe how you solved it? "Registering the number of connections" is a bit hard to understand.
>> 
>> If there really is such a problem and this workaround solves it, you could open an improvement Jira issue [1] to help the community track and improve it. Thanks!
>> 
>> [1] https://issues.apache.org/jira/projects/FLINK/summary
>> 
>> 
>> 
>> 
>> --
>> 
>>Best!
>>Xuyang
>> 
>> 
>> 
>> 
>> 
>>> On 2024-05-10 12:26:22, abc15...@163.com wrote:
>>> I've solved it. You need to register the number of connections in the jar 
>>> of gateway. But this is inconvenient, and I still hope to improve it.
>>> Sent from my iPhone
>>> 
>>>>> On May 10, 2024 at 11:56, Xuyang wrote:
>>>> 
>>>> Hi, can you print the classloader and verify if the jdbc connector exists 
>>>> in it?
>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> 
>>>>   Best!
>>>>   Xuyang
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> At 2024-05-09 17:48:33, "McClone"  wrote:
>>>>> I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>>>>> it cannot find the jdbc connector, but sql-client works fine.


Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-15 Post by abc15606
Can we use Chinese now? Just edit the Factory file in the gateway jar under the opt directory to register the connector.


> On May 15, 2024 at 15:36, Xuyang wrote:
> 
> Hi, it looks like your earlier problem was that the jdbc driver could not be found. Could you briefly describe how you solved it? "Registering the number of connections" is a bit hard to understand.
> 
> If there really is such a problem and this workaround solves it, you could open an improvement Jira issue [1] to help the community track and improve it. Thanks!
> 
> [1] https://issues.apache.org/jira/projects/FLINK/summary
> 
> 
> 
> 
> --
> 
>Best!
>Xuyang
> 
> 
> 
> 
> 
>> On 2024-05-10 12:26:22, abc15...@163.com wrote:
>> I've solved it. You need to register the number of connections in the jar of 
>> gateway. But this is inconvenient, and I still hope to improve it.
>> Sent from my iPhone
>> 
>>>> On May 10, 2024 at 11:56, Xuyang wrote:
>>> 
>>> Hi, can you print the classloader and verify if the jdbc connector exists 
>>> in it?
>>> 
>>> 
>>> 
>>> 
>>> --
>>> 
>>>   Best!
>>>   Xuyang
>>> 
>>> 
>>> 
>>> 
>>> 
>>> At 2024-05-09 17:48:33, "McClone"  wrote:
>>>> I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>>>> it cannot find the jdbc connector, but sql-client works fine.



Re: How can I contribute a Flink Hologres connector?

2024-05-15 Post by Xuyang
Hi,

From a contribution standpoint alone, I think supporting a flink hologres connector is fine. Hologres is a fairly popular database right now, so there is certainly demand, and the aliyun GitHub organization already provides an open-source flink hologres connector [1].

However, for the commercial ververica-connector-hologres package from aliyun and related companies: if you want to open-source work based on it directly, from my point of view it is best to confirm the following first, otherwise there may be hidden legal risks:

1. Whether the jar's provider (aliyun and related companies) is aware of this and willing to have it open-sourced; directly publishing something commercial would not be good.

2. Whether the licenses inside the jar permit open-sourcing, rather than being commercial licenses.

If you really want to open-source it, I recommend contributing on top of the flink hologres connector in the open-source GitHub repository [1] (for example, I see it currently supports up to Flink 1.17 at most; you could try contributing support for 1.18, 1.19, and so on).

[1] https://github.com/aliyun/alibabacloud-hologres-connectors




--

Best!
Xuyang





On 2024-05-14 11:24:37, "casel.chen" wrote:
>We use the commercial Alibaba Cloud Hologres database, and we have our own in-house Flink real-time computing platform. To build a real-time warehouse on Hologres, we developed a hologres connector based on open-source Apache Flink 1.17.1, the ververica-connector-hologres package [1] from the Alibaba Cloud maven repository, and the open-source holo client [2], fixing some jar dependency issues along the way. It has been running in production for a while without problems, and we would now like to contribute it to the community.
>
>Questions:
>1. Is contributing a Flink Hologres connector compliant?
>2. If it is, which project repository should the PR go to?
>3. Or should we link to our own GitHub repository, as on https://flink-packages.org/categories/connectors ? If so, how do we register on flink-packages.org?
>
>[1] 
>https://repo1.maven.org/maven2/com/alibaba/ververica/ververica-connector-hologres/1.17-vvr-8.0.4-1/
>[2] 
>https://github.com/aliyun/alibabacloud-hologres-connectors/tree/master/holo-client


Re:Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-15 Post by Xuyang
Hi, it looks like your earlier problem was that the jdbc driver could not be found. Could you briefly describe how you solved it? "Registering the number of connections" is a bit hard to understand.

If there really is such a problem and this workaround solves it, you could open an improvement Jira issue [1] to help the community track and improve it. Thanks!

[1] https://issues.apache.org/jira/projects/FLINK/summary




--

Best!
Xuyang





On 2024-05-10 12:26:22, abc15...@163.com wrote:
>I've solved it. You need to register the number of connections in the jar of 
>gateway. But this is inconvenient, and I still hope to improve it.
>Sent from my iPhone
>
>> On May 10, 2024 at 11:56, Xuyang wrote:
>> 
>> Hi, can you print the classloader and verify if the jdbc connector exists 
>> in it?
>> 
>> 
>> 
>> 
>> --
>> 
>>Best!
>>    Xuyang
>> 
>> 
>> 
>> 
>> 
>> At 2024-05-09 17:48:33, "McClone"  wrote:
>>> I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>>> it cannot find the jdbc connector, but sql-client works fine.


How can I contribute a Flink Hologres connector?

2024-05-13 Post by casel.chen
We use the commercial Alibaba Cloud Hologres database, and we have our own in-house Flink real-time computing platform. To build a real-time warehouse on Hologres, we developed a hologres connector based on open-source Apache Flink 1.17.1, the ververica-connector-hologres package [1] from the Alibaba Cloud maven repository, and the open-source holo client [2], fixing some jar dependency issues along the way. It has been running in production for a while without problems, and we would now like to contribute it to the community.


Questions:
1. Is contributing a Flink Hologres connector compliant?
2. If it is, which project repository should the PR go to?
3. Or should we link to our own GitHub repository, as on https://flink-packages.org/categories/connectors ? If so, how do we register on flink-packages.org?


[1] 
https://repo1.maven.org/maven2/com/alibaba/ververica/ververica-connector-hologres/1.17-vvr-8.0.4-1/
[2] 
https://github.com/aliyun/alibabacloud-hologres-connectors/tree/master/holo-client

Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-13 Post by kellygeorg...@163.com
Unsubscribe



 Replied Message 
| From | abc15...@163.com |
| Date | 05/10/2024 12:26 |
| To | user-zh@flink.apache.org |
| Cc | |
| Subject | Re: use flink 1.19 JDBC Driver can not find jdbc connector |
I've solved it. You need to register the number of connections in the jar of 
gateway. But this is inconvenient, and I still hope to improve it.
Sent from my iPhone

> On May 10, 2024 at 11:56, Xuyang wrote:
>
> Hi, can you print the classloader and verify if the jdbc connector exists in 
> it?
>
>
>
>
> --
>
>Best!
>Xuyang
>
>
>
>
>
> At 2024-05-09 17:48:33, "McClone"  wrote:
>> I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>> it cannot find the jdbc connector, but sql-client works fine.


Re: use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-09 Post by abc15606
I've solved it. You need to register the number of connections in the jar of 
gateway. But this is inconvenient, and I still hope to improve it.
Sent from my iPhone

> On May 10, 2024 at 11:56, Xuyang wrote:
> 
> Hi, can you print the classloader and verify if the jdbc connector exists in 
> it?
> 
> 
> 
> 
> --
> 
>Best!
>Xuyang
> 
> 
> 
> 
> 
> At 2024-05-09 17:48:33, "McClone"  wrote:
>> I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>> it cannot find the jdbc connector, but sql-client works fine.



Re:use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-09 Post by Xuyang
Hi, can you print the classloader and verify if the jdbc connector exists in it?




--

Best!
Xuyang





At 2024-05-09 17:48:33, "McClone"  wrote:
>I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>it cannot find the jdbc connector, but sql-client works fine.


Are there any companies that offer maintenance and support services for open-source Flink?

2024-05-09 Post by LIU Xiao
As the subject says.


use flink 1.19 JDBC Driver can not find jdbc connector

2024-05-09 Post by McClone
I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
it cannot find the jdbc connector, but sql-client works fine.

Re: Flink sql retract to append

2024-04-30 Post by Zijun Zhao
If you deduplicate in ascending processing-time order, the result will never produce retractions, because later rows cannot have a smaller time than the current one. You can try that form of deduplication.

On Tue, Apr 30, 2024 at 3:35 PM 焦童  wrote:

> Thanks for the suggestion, but Top-1 also produces retraction messages.
>
> > On Apr 30, 2024 at 15:27, ha.fen...@aisino.com wrote:
> >
> > You can refer to this:
> >
> https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/table/sql/queries/deduplication/
> > I am not sure whether version 1.11 supports it.
> >
> > From: 焦童
> > Date: 2024-04-30 11:25
> > To: user-zh
> > Subject: Flink sql retract to append
> > Hello,
> > I am using Flink 1.11 SQL to deduplicate data (via group by), but this
> > produces a retract stream, and the downstream store only supports append.
> > With the DataStream API I can deduplicate using state; how can I deduplicate in SQL without producing a retract stream? Thanks, everyone.
>
>
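A minimal sketch of the deduplication the reply refers to (table and column names are placeholders; `proc_time` is a processing-time attribute). Keeping the first row per key in ascending processing-time order lets the planner emit an append-only stream instead of retractions:

```sql
SELECT id, val
FROM (
  SELECT id, val,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY proc_time ASC) AS rn
  FROM source_table
) t
WHERE rn = 1;
```

Ordering by `DESC` (keep-last) or by an event-time attribute may reintroduce updates, which is why the reply insists on ascending processing time.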


Re: Flink sql retract to append

2024-04-30 Post by 焦童
Thanks for the suggestion, but Top-1 also produces retraction messages.

> On Apr 30, 2024 at 15:27, ha.fen...@aisino.com wrote:
> 
> You can refer to this:
> https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/table/sql/queries/deduplication/
> I am not sure whether version 1.11 supports it.
> 
> From: 焦童
> Date: 2024-04-30 11:25
> To: user-zh
> Subject: Flink sql retract to append
> Hello,
> I am using Flink 1.11 SQL to deduplicate data (via group by), but this
> produces a retract stream, and the downstream store only supports append.
> With the DataStream API I can deduplicate using state; how can I deduplicate in SQL without producing a retract stream? Thanks, everyone.



Flink sql retract to append

2024-04-29 Post by 焦童
Hello,
I am using Flink 1.11 SQL to deduplicate data (via group by), but this produces
a retract stream, and the downstream store does not support retract streams,
only append. With the DataStream API I can deduplicate using state, but how can
I deduplicate in SQL without producing a retract stream? Thanks, everyone.

As of Flink 1.18, is there a way to set uids in the Table API?

2024-04-24 Post by Guanlin Zhang
Hi Team,

Our business uses Flink MySQL CDC to OpenSearch with the Table API: INSERT INTO t1 SELECT * 
FROM t2.

Since we may add extra operators while the job is running, is there a way to keep the state of the previous source and sink 
operators after restoring from a snapshot? I see that the DataStream API supports setting a uid. Does the Table API have an equivalent? I found the Flink 
jira: https://issues.apache.org/jira/browse/FLINK-28861 
which allows setting table.exec.uid.generation=PLAN_ONLY. Under the default configuration, will restoring from a snapshot after adding a transformation 
operator or making other changes keep the previous state?
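For reference, a sketch of the option mentioned above (from FLINK-28861; verify it exists in your Flink version before relying on it):

```sql
-- Generate operator uids from the compiled plan only, so a restored
-- plan keeps stable uids across job modifications
SET 'table.exec.uid.generation' = 'PLAN_ONLY';
```

Whether state survives adding an operator still depends on the compiled plan staying stable for the untouched operators.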




Re: What are the caveats of applying Flink's unified stream/batch execution to real-time warehouse data reconciliation?

2024-04-18 Post by Yunfeng Zhou
Stream mode and batch mode differ in watermarks and the semantics of some operators, but I don't
see any difference for the Join and Window operators, so those should be supported in batch mode. For a detailed comparison of the two modes, see this document:

https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/execution_mode/
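As a pointer, switching the same Flink SQL to batch execution for the reconciliation run is a one-line, session-level setting (sketch):

```sql
SET 'execution.runtime-mode' = 'batch';
```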

On Thu, Apr 18, 2024 at 9:44 AM casel.chen  wrote:
>
> Has anyone tried this in practice? Could you give some advice? Thanks!
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> On 2024-04-15 11:15:34, "casel.chen" wrote:
> >I have recently been researching data-quality assurance for a Flink real-time warehouse. We need to periodically (every 10/20/30 minutes) run batch jobs to reconcile the data the real-time warehouse produces. The traditional approach is batch Spark jobs, such as the data-quality module of Apache DolphinScheduler.
> >The biggest drawback of that approach is having to rewrite the Flink SQL business logic in Spark SQL, which makes it hard to guarantee the two stay consistent. So I am considering Flink's unified stream/batch capability to reuse the Flink SQL, only switching the source from CDC or Kafka to a Hologres or StarRocks table, creating a new batch result table, and finally comparing the real-time and batch result tables over the same time range. A few questions:
> >1. Can the watermark, process_time, and event_time fields defined in the original streaming Flink SQL tables be reused in batch mode?
> >2. Can dual-stream joins such as interval join and temporal join be used in batch mode?
> >3. Can the window functions of the streaming job be reused in batch mode?
> >4. What else should we pay attention to?


What are the caveats of applying Flink's unified stream/batch execution to real-time warehouse data reconciliation?

2024-04-14 Post by casel.chen
I have recently been researching data-quality assurance for a Flink real-time warehouse. We need to periodically (every 10/20/30 minutes) run batch jobs to reconcile the data the real-time warehouse produces. The traditional approach is batch Spark jobs, such as the data-quality module of Apache DolphinScheduler.
The biggest drawback of that approach is having to rewrite the Flink SQL business logic in Spark SQL, which makes it hard to guarantee the two stay consistent. So I am considering Flink's unified stream/batch capability to reuse the Flink SQL, only switching the source from CDC or Kafka to a Hologres or StarRocks table, creating a new batch result table, and finally comparing the real-time and batch result tables over the same time range. A few questions:
1. Can the watermark, process_time, and event_time fields defined in the original streaming Flink SQL tables be reused in batch mode?
2. Can dual-stream joins such as interval join and temporal join be used in batch mode?
3. Can the window functions of the streaming job be reused in batch mode?
4. What else should we pay attention to?

Re:Unable to use Table API in AWS Managed Flink 1.18

2024-04-10 Post by Xuyang
Hi, Perez.
Flink uses SPI to find the jdbc connector on the classpath, and at startup the directory '${FLINK_ROOT}/lib' is added 
to the classpath. That is why the exception is thrown on AWS. IMO there are two 
ways to solve this:


1. Upload the connector jar to AWS so that the classloader can see it. For how to upload connector jars, check 
the relevant AWS documentation.
2. Package the jdbc connector jar into your job jar and submit it again.
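A sketch of option 2 for a Maven build (the version is an assumption; match it to your Flink version). The shade plugin's ServicesResourceTransformer matters because the connector is discovered via SPI files that must be merged into the fat jar:

```xml
<!-- pom.xml fragment: bundle the JDBC connector into the job (fat) jar -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-jdbc</artifactId>
  <version>3.1.2-1.18</version>
</dependency>

<!-- inside the maven-shade-plugin configuration: merge META-INF/services files -->
<transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
```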




--

Best!
Xuyang




At 2024-04-10 17:32:19, "Enrique Alberto Perez Delgado" 
 wrote:

Hi all,


I am using AWS Managed Flink 1.18, where I am getting this error when trying to 
submit my job:


```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option: 'connector'='jdbc'
at 
org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
at 
org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
... 32 more
Caused by: org.apache.flink.table.api.ValidationException: Could 
not find any factory for identifier 'jdbc' that implements 
'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
```


I used to get this error when testing locally until I added the 
`flink-connector-jdbc-3.1.2-1.18`.jar to `/opt/flink/lib` in my local docker 
image, which I thought would be provided by AWS. Apparently, it isn’t. Has 
anyone encountered this error before?


I highly appreciate any help you could give me,


Best regards, 


Enrique Perez
Data Engineer
HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
Phone:  +4917625622422











HelloFresh SE, Berlin (Sitz der Gesellschaft) | Vorstände: Dominik S. Richter 
(Vorsitzender), Thomas W. Griesel, Christian Gärtner, Edward Boyes | 
Vorsitzender des Aufsichtsrats: John H. Rittenhouse | Eingetragen beim 
Amtsgericht Charlottenburg, HRB 182382 B | USt-Id Nr.: DE 302210417

CONFIDENTIALITY NOTICE: This message (including any attachments) is 
confidential and may be privileged. It may be read, copied and used only by the 
intended recipient. If you have received it in error please contact the sender 
(by return e-mail) immediately and delete this message. Any unauthorized use or 
dissemination of this message in whole or in parts is strictly prohibited.

Unable to use Table API in AWS Managed Flink 1.18

2024-04-10 文章 Enrique Alberto Perez Delgado
Hi all,

I am using AWS Managed Flink 1.18, where I am getting this error when trying to 
submit my job:

```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option: 'connector'='jdbc'
at 
org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
at 
org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
... 32 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'jdbc' that implements 
'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
```

I used to get this error when testing locally until I added the 
`flink-connector-jdbc-3.1.2-1.18`.jar to `/opt/flink/lib` in my local docker 
image, which I thought would be provided by AWS. Apparently, it isn’t. Has 
anyone encountered this error before?
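A quick local sanity check for this kind of "Cannot discover a connector" failure is to verify that the connector jar is actually present on the runtime classpath. A minimal sketch, assuming the usual `flink-connector-jdbc-<version>.jar` naming scheme and the default `/opt/flink/lib` location (both assumptions, since managed environments may differ):

```python
import re
from pathlib import Path

def find_connector_jars(lib_dir, connector="jdbc"):
    """Return jar file names under lib_dir matching the usual
    flink-connector-<name>-<version>.jar naming scheme."""
    pattern = re.compile(rf"flink-connector-{re.escape(connector)}-.*\.jar$")
    # Path.glob on a missing directory simply yields nothing
    return sorted(p.name for p in Path(lib_dir).glob("*.jar")
                  if pattern.match(p.name))

if not find_connector_jars("/opt/flink/lib"):  # assumed lib location
    print("no flink-connector-jdbc jar found; bundle the connector "
          "into the job's fat jar instead of relying on the runtime")
```

On a managed service where `/opt/flink/lib` cannot be modified, shading the connector into the application jar is the usual workaround.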

I highly appreciate any help you could give me,

Best regards, 

Enrique Perez
Data Engineer
HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
Phone:  +4917625622422





-- 




 


HelloFresh SE, Berlin (Sitz der Gesellschaft) | Vorstände: Dominik S. 
Richter (Vorsitzender), Thomas W. Griesel, Christian Gärtner, Edward Boyes 
| Vorsitzender des Aufsichtsrats: John H. Rittenhouse | Eingetragen beim 
Amtsgericht Charlottenburg, HRB 182382 B | USt-Id Nr.: DE 302210417

*CONFIDENTIALITY NOTICE:* This message (including any attachments) is 
confidential and may be privileged. It may be read, copied and used only by 
the intended recipient. If you have received it in error please contact the 
sender (by return e-mail) immediately and delete this message. Any 
unauthorized use or dissemination of this message in whole or in parts is 
strictly prohibited.




Re: Completed Flink jobs disappear from the Web UI after a while

2024-04-09 文章 gongzhongqiang
Hi,

If you want to keep completed jobs around long-term, the History Server is recommended:
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#history-server

Best,

Zhongqiang Gong

ha.fen...@aisino.com  于2024年4月9日周二 10:39写道:

> In the Web UI, completed jobs can be seen under Completed Jobs, but when I look again a while later the data is gone. Is there a configuration that deletes them automatically?
>
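The behavior described here comes from the JobManager's in-memory job store, which drops completed jobs once `jobstore.expiration-time` (3600 seconds by default) has passed. A toy illustration of that eviction rule, not Flink's actual implementation:

```python
import time

DEFAULT_EXPIRATION_SECONDS = 3600  # jobstore.expiration-time default (1 hour)

def visible_jobs(completed_jobs, now=None,
                 expiration=DEFAULT_EXPIRATION_SECONDS):
    """completed_jobs maps job id -> completion time in seconds.
    Returns the ids the Completed Jobs view would still list."""
    if now is None:
        now = time.time()
    return [job_id for job_id, finished_at in completed_jobs.items()
            if now - finished_at <= expiration]
```

Raising `jobstore.expiration-time` only delays eviction; the History Server is the option that survives JobManager restarts.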


Re: Completed Flink jobs disappear from the Web UI after a while

2024-04-08 文章 spoon_lz
There is an expiration-time setting for this:
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#jobstore-expiration-time



spoon_lz
spoon...@126.com


 Original message 
From: ha.fen...@aisino.com
Date: 2024-04-09 10:38
To: user-zh
Subject: Completed Flink jobs disappear from the Web UI after a while
In the Web UI, completed jobs can be seen under Completed Jobs, but when I look again a while later the data is gone. Is there a configuration that deletes them automatically?


Re: Flink CDC metrics question

2024-04-07 文章 Shawn Huang
Hi, Flink CDC currently does not provide a metric for the number of unconsumed binlog records. You can use the currentFetchEventTimeLag
metric (the lag between the event time of the fetched binlog records and the current time) to judge the current consumption status.

[1]
https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/metrics/MySqlSourceReaderMetrics.java

Best,
Shawn Huang


casel.chen  于2024年4月8日周一 12:01写道:

> Does Flink CDC expose any monitoring metrics?
> I would like to monitor the number of binlog records not yet consumed by a real-time Flink CDC job, similar to Kafka topic consumer-lag monitoring.
> I want this monitoring to keep a slow-consuming Flink CDC job from being lapped by binlog retention (and how can the maximum binlog record count be obtained?)
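Because `currentFetchEventTimeLag` is a regular task metric, it can also be polled through the JobManager REST API (`GET /jobs/<job-id>/vertices/<vertex-id>/metrics?get=<name>`). A rough sketch of such a poller; the endpoint shape follows the Flink REST API, while the base URL, job id and vertex id below are placeholders:

```python
import json
from urllib.request import urlopen

def extract_metric(payload, name):
    """The metrics endpoint answers with a JSON list such as
    [{"id": "currentFetchEventTimeLag", "value": "1234"}, ...]."""
    for entry in json.loads(payload):
        if entry["id"] == name:
            return float(entry["value"])
    return None  # metric not reported (yet)

def fetch_lag(base_url, job_id, vertex_id):
    # e.g. base_url = "http://jobmanager:8081" (placeholder)
    url = (f"{base_url}/jobs/{job_id}/vertices/{vertex_id}"
           f"/metrics?get=currentFetchEventTimeLag")
    with urlopen(url) as resp:
        return extract_metric(resp.read(), "currentFetchEventTimeLag")
```

Feeding this lag into an alerting system gives an early warning before the job falls behind the binlog retention window.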


Flink CDC metrics question

2024-04-07 文章 casel.chen
Does Flink CDC expose any monitoring metrics?
I would like to monitor the number of binlog records not yet consumed by a real-time Flink CDC job, similar to Kafka topic consumer-lag monitoring.
I want this monitoring to keep a slow-consuming Flink CDC job from being lapped by binlog retention (and how can the maximum binlog record count be obtained?)

Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.8.0 released

2024-03-25 文章 Rui Fan
Congratulations! Thanks Max for the release and all involved for the great
work!

A gentle reminder to users: the Maven artifacts have just been released and
will take some time to propagate to Maven Central.

Best,
Rui

On Mon, Mar 25, 2024 at 6:35 PM Maximilian Michels  wrote:

> The Apache Flink community is very happy to announce the release of
> the Apache Flink Kubernetes Operator version 1.8.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache
> Flink applications on Kubernetes through all aspects of their
> lifecycle.
>
> Release highlights:
> - Flink Autotuning automatically adjusts TaskManager memory
> - Flink Autoscaling metrics and decision accuracy improved
> - Improve standalone Flink Autoscaling
> - Savepoint trigger nonce for savepoint-based restarts
> - Operator stability improvements for cluster shutdown
>
> Blog post:
> https://flink.apache.org/2024/03/21/apache-flink-kubernetes-operator-1.8.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink Kubernetes Operator can be found at:
>
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>
> Official Docker image for Flink Kubernetes Operator can be found at:
> https://hub.docker.com/r/apache/flink-kubernetes-operator
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12353866=12315522
>
> We would like to thank the Apache Flink community and its contributors
> who made this release possible!
>
> Cheers,
> Max
>


[ANNOUNCE] Apache Flink Kubernetes Operator 1.8.0 released

2024-03-25 文章 Maximilian Michels
The Apache Flink community is very happy to announce the release of
the Apache Flink Kubernetes Operator version 1.8.0.

The Flink Kubernetes Operator allows users to manage their Apache
Flink applications on Kubernetes through all aspects of their
lifecycle.

Release highlights:
- Flink Autotuning automatically adjusts TaskManager memory
- Flink Autoscaling metrics and decision accuracy improved
- Improve standalone Flink Autoscaling
- Savepoint trigger nonce for savepoint-based restarts
- Operator stability improvements for cluster shutdown

Blog post: 
https://flink.apache.org/2024/03/21/apache-flink-kubernetes-operator-1.8.0-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink Kubernetes Operator can be found at:
https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator

Official Docker image for Flink Kubernetes Operator can be found at:
https://hub.docker.com/r/apache/flink-kubernetes-operator

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12353866=12315522

We would like to thank the Apache Flink community and its contributors
who made this release possible!

Cheers,
Max


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-21 文章 gongzhongqiang
Congratulations! Thanks for the great work!


Best,
Zhongqiang Gong

Leonard Xu  于2024年3月20日周三 21:36写道:

> Hi devs and users,
>
> We are thrilled to announce that the donation of Flink CDC as a
> sub-project of Apache Flink has completed. We invite you to explore the new
> resources available:
>
> - GitHub Repository: https://github.com/apache/flink-cdc
> - Flink CDC Documentation:
> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>
> After Flink community accepted this donation[1], we have completed
> software copyright signing, code repo migration, code cleanup, website
> migration, CI migration and github issues migration etc.
> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng
> Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
> contributions and help during this process!
>
>
> For all previous contributors: The contribution process has slightly
> changed to align with the main Flink project. To report bugs or suggest new
> features, please open tickets on
> Apache Jira (https://issues.apache.org/jira).  Note that we will no
> longer accept GitHub issues for these purposes.
>
>
> Welcome to explore the new repository and documentation. Your feedback and
> contributions are invaluable as we continue to improve Flink CDC.
>
> Thanks everyone for your support and happy exploring Flink CDC!
>
> Best,
> Leonard
> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Zakelly Lan
Congratulations!


Best,
Zakelly

On Thu, Mar 21, 2024 at 12:05 PM weijie guo 
wrote:

> Congratulations! Well done.
>
>
> Best regards,
>
> Weijie
>
>
> Feng Jin  于2024年3月21日周四 11:40写道:
>
>> Congratulations!
>>
>>
>> Best,
>> Feng
>>
>>
>> On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:
>>
>> > Congratulations!
>> >
>> > Best,
>> > Ron
>> >
>> > Jark Wu  于2024年3月21日周四 10:46写道:
>> >
>> > > Congratulations and welcome!
>> > >
>> > > Best,
>> > > Jark
>> > >
>> > > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>> > >
>> > > > Congratulations!
>> > > >
>> > > > Best,
>> > > > Rui
>> > > >
>> > > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
>> > > wrote:
>> > > >
>> > > > > Congratulations!
>> > > > >
>> > > > > Best,
>> > > > > Hang
>> > > > >
>> > > > > Lincoln Lee  于2024年3月21日周四 09:54写道:
>> > > > >
>> > > > >>
>> > > > >> Congrats, thanks for the great work!
>> > > > >>
>> > > > >>
>> > > > >> Best,
>> > > > >> Lincoln Lee
>> > > > >>
>> > > > >>
>> > > > >> Peter Huang  于2024年3月20日周三 22:48写道:
>> > > > >>
>> > > > >>> Congratulations
>> > > > >>>
>> > > > >>>
>> > > > >>> Best Regards
>> > > > >>> Peter Huang
>> > > > >>>
>> > > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang > >
>> > > > wrote:
>> > > > >>>
>> > > > >>>>
>> > > > >>>> Congratulations
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> Best,
>> > > > >>>> Huajie Wang
>> > > > >>>>
>> > > > >>>>
>> > > > >>>>
>> > > > >>>> Leonard Xu  于2024年3月20日周三 21:36写道:
>> > > > >>>>
>> > > > >>>>> Hi devs and users,
>> > > > >>>>>
>> > > > >>>>> We are thrilled to announce that the donation of Flink CDC as
>> a
>> > > > >>>>> sub-project of Apache Flink has completed. We invite you to
>> > explore
>> > > > the new
>> > > > >>>>> resources available:
>> > > > >>>>>
>> > > > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
>> > > > >>>>> - Flink CDC Documentation:
>> > > > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>> > > > >>>>>
>> > > > >>>>> After Flink community accepted this donation[1], we have
>> > completed
>> > > > >>>>> software copyright signing, code repo migration, code cleanup,
>> > > > website
>> > > > >>>>> migration, CI migration and github issues migration etc.
>> > > > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
>> > > > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
>> > > > contributors
>> > > > >>>>> for their contributions and help during this process!
>> > > > >>>>>
>> > > > >>>>>
>> > > > >>>>> For all previous contributors: The contribution process has
>> > > slightly
>> > > > >>>>> changed to align with the main Flink project. To report bugs
>> or
>> > > > suggest new
>> > > > >>>>> features, please open tickets
>> > > > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we
>> will
>> > > no
>> > > > >>>>> longer accept GitHub issues for these purposes.
>> > > > >>>>>
>> > > > >>>>>
>> > > > >>>>> Welcome to explore the new repository and documentation. Your
>> > > > feedback
>> > > > >>>>> and contributions are invaluable as we continue to improve
>> Flink
>> > > CDC.
>> > > > >>>>>
>> > > > >>>>> Thanks everyone for your support and happy exploring Flink
>> CDC!
>> > > > >>>>>
>> > > > >>>>> Best,
>> > > > >>>>> Leonard
>> > > > >>>>> [1]
>> > > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>> > > > >>>>>
>> > > > >>>>>
>> > > >
>> > >
>> >
>>
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 weijie guo
Congratulations! Well done.


Best regards,

Weijie


Feng Jin  于2024年3月21日周四 11:40写道:

> Congratulations!
>
>
> Best,
> Feng
>
>
> On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:
>
> > Congratulations!
> >
> > Best,
> > Ron
> >
> > Jark Wu  于2024年3月21日周四 10:46写道:
> >
> > > Congratulations and welcome!
> > >
> > > Best,
> > > Jark
> > >
> > > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Rui
> > > >
> > > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> > > wrote:
> > > >
> > > > > Congratulations!
> > > > >
> > > > > Best,
> > > > > Hang
> > > > >
> > > > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > > > >
> > > > >>
> > > > >> Congrats, thanks for the great work!
> > > > >>
> > > > >>
> > > > >> Best,
> > > > >> Lincoln Lee
> > > > >>
> > > > >>
> > > > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > > > >>
> > > > >>> Congratulations
> > > > >>>
> > > > >>>
> > > > >>> Best Regards
> > > > >>> Peter Huang
> > > > >>>
> > > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > > > wrote:
> > > > >>>
> > > > >>>>
> > > > >>>> Congratulations
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>> Best,
> > > > >>>> Huajie Wang
> > > > >>>>
> > > > >>>>
> > > > >>>>
> > > > >>>> Leonard Xu  于2024年3月20日周三 21:36写道:
> > > > >>>>
> > > > >>>>> Hi devs and users,
> > > > >>>>>
> > > > >>>>> We are thrilled to announce that the donation of Flink CDC as a
> > > > >>>>> sub-project of Apache Flink has completed. We invite you to
> > explore
> > > > the new
> > > > >>>>> resources available:
> > > > >>>>>
> > > > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
> > > > >>>>> - Flink CDC Documentation:
> > > > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > > > >>>>>
> > > > >>>>> After Flink community accepted this donation[1], we have
> > completed
> > > > >>>>> software copyright signing, code repo migration, code cleanup,
> > > > website
> > > > >>>>> migration, CI migration and github issues migration etc.
> > > > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > > > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > > > contributors
> > > > >>>>> for their contributions and help during this process!
> > > > >>>>>
> > > > >>>>>
> > > > >>>>> For all previous contributors: The contribution process has
> > > slightly
> > > > >>>>> changed to align with the main Flink project. To report bugs or
> > > > suggest new
> > > > >>>>> features, please open tickets
> > > > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we
> will
> > > no
> > > > >>>>> longer accept GitHub issues for these purposes.
> > > > >>>>>
> > > > >>>>>
> > > > >>>>> Welcome to explore the new repository and documentation. Your
> > > > feedback
> > > > >>>>> and contributions are invaluable as we continue to improve
> Flink
> > > CDC.
> > > > >>>>>
> > > > >>>>> Thanks everyone for your support and happy exploring Flink CDC!
> > > > >>>>>
> > > > >>>>> Best,
> > > > >>>>> Leonard
> > > > >>>>> [1]
> > > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > > > >>>>>
> > > > >>>>>
> > > >
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Feng Jin
Congratulations!


Best,
Feng


On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:

> Congratulations!
>
> Best,
> Ron
>
> Jark Wu  于2024年3月21日周四 10:46写道:
>
> > Congratulations and welcome!
> >
> > Best,
> > Jark
> >
> > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Rui
> > >
> > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> > wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Hang
> > > >
> > > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > > >
> > > >>
> > > >> Congrats, thanks for the great work!
> > > >>
> > > >>
> > > >> Best,
> > > >> Lincoln Lee
> > > >>
> > > >>
> > > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > > >>
> > > >>> Congratulations
> > > >>>
> > > >>>
> > > >>> Best Regards
> > > >>> Peter Huang
> > > >>>
> > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > > wrote:
> > > >>>
> > > >>>>
> > > >>>> Congratulations
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> Best,
> > > >>>> Huajie Wang
> > > >>>>
> > > >>>>
> > > >>>>
> > > >>>> Leonard Xu  于2024年3月20日周三 21:36写道:
> > > >>>>
> > > >>>>> Hi devs and users,
> > > >>>>>
> > > >>>>> We are thrilled to announce that the donation of Flink CDC as a
> > > >>>>> sub-project of Apache Flink has completed. We invite you to
> explore
> > > the new
> > > >>>>> resources available:
> > > >>>>>
> > > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
> > > >>>>> - Flink CDC Documentation:
> > > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > > >>>>>
> > > >>>>> After Flink community accepted this donation[1], we have
> completed
> > > >>>>> software copyright signing, code repo migration, code cleanup,
> > > website
> > > >>>>> migration, CI migration and github issues migration etc.
> > > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > > contributors
> > > >>>>> for their contributions and help during this process!
> > > >>>>>
> > > >>>>>
> > > >>>>> For all previous contributors: The contribution process has
> > slightly
> > > >>>>> changed to align with the main Flink project. To report bugs or
> > > suggest new
> > > >>>>> features, please open tickets
> > > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will
> > no
> > > >>>>> longer accept GitHub issues for these purposes.
> > > >>>>>
> > > >>>>>
> > > >>>>> Welcome to explore the new repository and documentation. Your
> > > feedback
> > > >>>>> and contributions are invaluable as we continue to improve Flink
> > CDC.
> > > >>>>>
> > > >>>>> Thanks everyone for your support and happy exploring Flink CDC!
> > > >>>>>
> > > >>>>> Best,
> > > >>>>> Leonard
> > > >>>>> [1]
> > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > > >>>>>
> > > >>>>>
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Ron liu
Congratulations!

Best,
Ron

Jark Wu  于2024年3月21日周四 10:46写道:

> Congratulations and welcome!
>
> Best,
> Jark
>
> On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Congratulations!
> >
> > Best,
> > Rui
> >
> > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Hang
> > >
> > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > >
> > >>
> > >> Congrats, thanks for the great work!
> > >>
> > >>
> > >> Best,
> > >> Lincoln Lee
> > >>
> > >>
> > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > >>
> > >>> Congratulations
> > >>>
> > >>>
> > >>> Best Regards
> > >>> Peter Huang
> > >>>
> > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > wrote:
> > >>>
> > >>>>
> > >>>> Congratulations
> > >>>>
> > >>>>
> > >>>>
> > >>>> Best,
> > >>>> Huajie Wang
> > >>>>
> > >>>>
> > >>>>
> > >>>> Leonard Xu  于2024年3月20日周三 21:36写道:
> > >>>>
> > >>>>> Hi devs and users,
> > >>>>>
> > >>>>> We are thrilled to announce that the donation of Flink CDC as a
> > >>>>> sub-project of Apache Flink has completed. We invite you to explore
> > the new
> > >>>>> resources available:
> > >>>>>
> > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
> > >>>>> - Flink CDC Documentation:
> > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > >>>>>
> > >>>>> After Flink community accepted this donation[1], we have completed
> > >>>>> software copyright signing, code repo migration, code cleanup,
> > website
> > >>>>> migration, CI migration and github issues migration etc.
> > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > contributors
> > >>>>> for their contributions and help during this process!
> > >>>>>
> > >>>>>
> > >>>>> For all previous contributors: The contribution process has
> slightly
> > >>>>> changed to align with the main Flink project. To report bugs or
> > suggest new
> > >>>>> features, please open tickets
> > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will
> no
> > >>>>> longer accept GitHub issues for these purposes.
> > >>>>>
> > >>>>>
> > >>>>> Welcome to explore the new repository and documentation. Your
> > feedback
> > >>>>> and contributions are invaluable as we continue to improve Flink
> CDC.
> > >>>>>
> > >>>>> Thanks everyone for your support and happy exploring Flink CDC!
> > >>>>>
> > >>>>> Best,
> > >>>>> Leonard
> > >>>>> [1]
> https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > >>>>>
> > >>>>>
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 shuai xu
Congratulations!


Best!
Xushuai

> 2024年3月21日 10:54,Yanquan Lv  写道:
> 
> Congratulations, and looking forward to future versions!
> 
> Jark Wu  于2024年3月21日周四 10:47写道:
> 
>> Congratulations and welcome!
>> 
>> Best,
>> Jark
>> 
>> On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>> 
>>> Congratulations!
>>> 
>>> Best,
>>> Rui
>>> 
>>> On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
>> wrote:
>>> 
>>>> Congratulations!
>>>> 
>>>> Best,
>>>> Hang
>>>> 
>>>> Lincoln Lee  于2024年3月21日周四 09:54写道:
>>>> 
>>>>> 
>>>>> Congrats, thanks for the great work!
>>>>> 
>>>>> 
>>>>> Best,
>>>>> Lincoln Lee
>>>>> 
>>>>> 
>>>>> Peter Huang  于2024年3月20日周三 22:48写道:
>>>>> 
>>>>>> Congratulations
>>>>>> 
>>>>>> 
>>>>>> Best Regards
>>>>>> Peter Huang
>>>>>> 
>>>>>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
>>> wrote:
>>>>>> 
>>>>>>> 
>>>>>>> Congratulations
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Best,
>>>>>>> Huajie Wang
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Leonard Xu  于2024年3月20日周三 21:36写道:
>>>>>>> 
>>>>>>>> Hi devs and users,
>>>>>>>> 
>>>>>>>> We are thrilled to announce that the donation of Flink CDC as a
>>>>>>>> sub-project of Apache Flink has completed. We invite you to explore
>>> the new
>>>>>>>> resources available:
>>>>>>>> 
>>>>>>>> - GitHub Repository: https://github.com/apache/flink-cdc
>>>>>>>> - Flink CDC Documentation:
>>>>>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>>>>>>> 
>>>>>>>> After Flink community accepted this donation[1], we have completed
>>>>>>>> software copyright signing, code repo migration, code cleanup,
>>> website
>>>>>>>> migration, CI migration and github issues migration etc.
>>>>>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
>>>>>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
>>> contributors
>>>>>>>> for their contributions and help during this process!
>>>>>>>> 
>>>>>>>> 
>>>>>>>> For all previous contributors: The contribution process has
>> slightly
>>>>>>>> changed to align with the main Flink project. To report bugs or
>>> suggest new
>>>>>>>> features, please open tickets
>>>>>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will
>> no
>>>>>>>> longer accept GitHub issues for these purposes.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> Welcome to explore the new repository and documentation. Your
>>> feedback
>>>>>>>> and contributions are invaluable as we continue to improve Flink
>> CDC.
>>>>>>>> 
>>>>>>>> Thanks everyone for your support and happy exploring Flink CDC!
>>>>>>>> 
>>>>>>>> Best,
>>>>>>>> Leonard
>>>>>>>> [1]
>> https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>>>>>>> 
>>>>>>>> 
>>> 
>> 



Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Yanquan Lv
Congratulations, and looking forward to future versions!

Jark Wu  于2024年3月21日周四 10:47写道:

> Congratulations and welcome!
>
> Best,
> Jark
>
> On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Congratulations!
> >
> > Best,
> > Rui
> >
> > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Hang
> > >
> > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > >
> > >>
> > >> Congrats, thanks for the great work!
> > >>
> > >>
> > >> Best,
> > >> Lincoln Lee
> > >>
> > >>
> > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > >>
> > >>> Congratulations
> > >>>
> > >>>
> > >>> Best Regards
> > >>> Peter Huang
> > >>>
> > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > wrote:
> > >>>
> > >>>>
> > >>>> Congratulations
> > >>>>
> > >>>>
> > >>>>
> > >>>> Best,
> > >>>> Huajie Wang
> > >>>>
> > >>>>
> > >>>>
> > >>>> Leonard Xu  于2024年3月20日周三 21:36写道:
> > >>>>
> > >>>>> Hi devs and users,
> > >>>>>
> > >>>>> We are thrilled to announce that the donation of Flink CDC as a
> > >>>>> sub-project of Apache Flink has completed. We invite you to explore
> > the new
> > >>>>> resources available:
> > >>>>>
> > >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
> > >>>>> - Flink CDC Documentation:
> > >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > >>>>>
> > >>>>> After Flink community accepted this donation[1], we have completed
> > >>>>> software copyright signing, code repo migration, code cleanup,
> > website
> > >>>>> migration, CI migration and github issues migration etc.
> > >>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > contributors
> > >>>>> for their contributions and help during this process!
> > >>>>>
> > >>>>>
> > >>>>> For all previous contributors: The contribution process has
> slightly
> > >>>>> changed to align with the main Flink project. To report bugs or
> > suggest new
> > >>>>> features, please open tickets
> > >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will
> no
> > >>>>> longer accept GitHub issues for these purposes.
> > >>>>>
> > >>>>>
> > >>>>> Welcome to explore the new repository and documentation. Your
> > feedback
> > >>>>> and contributions are invaluable as we continue to improve Flink
> CDC.
> > >>>>>
> > >>>>> Thanks everyone for your support and happy exploring Flink CDC!
> > >>>>>
> > >>>>> Best,
> > >>>>> Leonard
> > >>>>> [1]
> https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > >>>>>
> > >>>>>
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Jark Wu
Congratulations and welcome!

Best,
Jark

On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:

> Congratulations!
>
> Best,
> Rui
>
> On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan  wrote:
>
> > Congratulations!
> >
> > Best,
> > Hang
> >
> > Lincoln Lee  于2024年3月21日周四 09:54写道:
> >
> >>
> >> Congrats, thanks for the great work!
> >>
> >>
> >> Best,
> >> Lincoln Lee
> >>
> >>
> >> Peter Huang  于2024年3月20日周三 22:48写道:
> >>
> >>> Congratulations
> >>>
> >>>
> >>> Best Regards
> >>> Peter Huang
> >>>
> >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> wrote:
> >>>
> >>>>
> >>>> Congratulations
> >>>>
> >>>>
> >>>>
> >>>> Best,
> >>>> Huajie Wang
> >>>>
> >>>>
> >>>>
> >>>> Leonard Xu  于2024年3月20日周三 21:36写道:
> >>>>
> >>>>> Hi devs and users,
> >>>>>
> >>>>> We are thrilled to announce that the donation of Flink CDC as a
> >>>>> sub-project of Apache Flink has completed. We invite you to explore
> the new
> >>>>> resources available:
> >>>>>
> >>>>> - GitHub Repository: https://github.com/apache/flink-cdc
> >>>>> - Flink CDC Documentation:
> >>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
> >>>>>
> >>>>> After Flink community accepted this donation[1], we have completed
> >>>>> software copyright signing, code repo migration, code cleanup,
> website
> >>>>> migration, CI migration and github issues migration etc.
> >>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> >>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> contributors
> >>>>> for their contributions and help during this process!
> >>>>>
> >>>>>
> >>>>> For all previous contributors: The contribution process has slightly
> >>>>> changed to align with the main Flink project. To report bugs or
> suggest new
> >>>>> features, please open tickets
> >>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will no
> >>>>> longer accept GitHub issues for these purposes.
> >>>>>
> >>>>>
> >>>>> Welcome to explore the new repository and documentation. Your
> feedback
> >>>>> and contributions are invaluable as we continue to improve Flink CDC.
> >>>>>
> >>>>> Thanks everyone for your support and happy exploring Flink CDC!
> >>>>>
> >>>>> Best,
> >>>>> Leonard
> >>>>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> >>>>>
> >>>>>
>


Re:Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Xuyang
Cheers!




--

Best!
Xuyang

在 2024-03-21 10:28:45,"Rui Fan" <1996fan...@gmail.com> 写道:
>Congratulations!
>
>Best,
>Rui
>
>On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan  wrote:
>
>> Congratulations!
>>
>> Best,
>> Hang
>>
>> Lincoln Lee  于2024年3月21日周四 09:54写道:
>>
>>>
>>> Congrats, thanks for the great work!
>>>
>>>
>>> Best,
>>> Lincoln Lee
>>>
>>>
>>> Peter Huang  于2024年3月20日周三 22:48写道:
>>>
>>>> Congratulations
>>>>
>>>>
>>>> Best Regards
>>>> Peter Huang
>>>>
>>>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>>>>
>>>>>
>>>>> Congratulations
>>>>>
>>>>>
>>>>>
>>>>> Best,
>>>>> Huajie Wang
>>>>>
>>>>>
>>>>>
>>>>> Leonard Xu  于2024年3月20日周三 21:36写道:
>>>>>
>>>>>> Hi devs and users,
>>>>>>
>>>>>> We are thrilled to announce that the donation of Flink CDC as a
>>>>>> sub-project of Apache Flink has completed. We invite you to explore the 
>>>>>> new
>>>>>> resources available:
>>>>>>
>>>>>> - GitHub Repository: https://github.com/apache/flink-cdc
>>>>>> - Flink CDC Documentation:
>>>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>>>>>
>>>>>> After Flink community accepted this donation[1], we have completed
>>>>>> software copyright signing, code repo migration, code cleanup, website
>>>>>> migration, CI migration and github issues migration etc.
>>>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
>>>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other 
>>>>>> contributors
>>>>>> for their contributions and help during this process!
>>>>>>
>>>>>>
>>>>>> For all previous contributors: The contribution process has slightly
>>>>>> changed to align with the main Flink project. To report bugs or suggest 
>>>>>> new
>>>>>> features, please open tickets
>>>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will no
>>>>>> longer accept GitHub issues for these purposes.
>>>>>>
>>>>>>
>>>>>> Welcome to explore the new repository and documentation. Your feedback
>>>>>> and contributions are invaluable as we continue to improve Flink CDC.
>>>>>>
>>>>>> Thanks everyone for your support and happy exploring Flink CDC!
>>>>>>
>>>>>> Best,
>>>>>> Leonard
>>>>>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>>>>>
>>>>>>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Rui Fan
Congratulations!

Best,
Rui

On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan  wrote:

> Congratulations!
>
> Best,
> Hang
>
> Lincoln Lee  于2024年3月21日周四 09:54写道:
>
>>
>> Congrats, thanks for the great work!
>>
>>
>> Best,
>> Lincoln Lee
>>
>>
>> Peter Huang  于2024年3月20日周三 22:48写道:
>>
>>> Congratulations
>>>
>>>
>>> Best Regards
>>> Peter Huang
>>>
>>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>>>
>>>>
>>>> Congratulations
>>>>
>>>>
>>>>
>>>> Best,
>>>> Huajie Wang
>>>>
>>>>
>>>>
>>>> Leonard Xu  于2024年3月20日周三 21:36写道:
>>>>
>>>>> Hi devs and users,
>>>>>
>>>>> We are thrilled to announce that the donation of Flink CDC as a
>>>>> sub-project of Apache Flink has completed. We invite you to explore the 
>>>>> new
>>>>> resources available:
>>>>>
>>>>> - GitHub Repository: https://github.com/apache/flink-cdc
>>>>> - Flink CDC Documentation:
>>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>>>>
>>>>> After Flink community accepted this donation[1], we have completed
>>>>> software copyright signing, code repo migration, code cleanup, website
>>>>> migration, CI migration and github issues migration etc.
>>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
>>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors
>>>>> for their contributions and help during this process!
>>>>>
>>>>>
>>>>> For all previous contributors: The contribution process has slightly
>>>>> changed to align with the main Flink project. To report bugs or suggest 
>>>>> new
>>>>> features, please open tickets
>>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will no
>>>>> longer accept GitHub issues for these purposes.
>>>>>
>>>>>
>>>>> Welcome to explore the new repository and documentation. Your feedback
>>>>> and contributions are invaluable as we continue to improve Flink CDC.
>>>>>
>>>>> Thanks everyone for your support and happy exploring Flink CDC!
>>>>>
>>>>> Best,
>>>>> Leonard
>>>>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>>>>
>>>>>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Hang Ruan
Congratulations!

Best,
Hang

On Thu, Mar 21, 2024 at 09:54, Lincoln Lee  wrote:

>
> Congrats, thanks for the great work!
>
>
> Best,
> Lincoln Lee
>
>
> On Wed, Mar 20, 2024 at 22:48, Peter Huang  wrote:
>
>> Congratulations
>>
>>
>> Best Regards
>> Peter Huang
>>
>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>>
>>>
>>> Congratulations
>>>
>>>
>>>
>>> Best,
>>> Huajie Wang
>>>
>>>
>>>
>>> On Wed, Mar 20, 2024 at 21:36, Leonard Xu  wrote:
>>>
>>>> Hi devs and users,
>>>>
>>>> We are thrilled to announce that the donation of Flink CDC as a
>>>> sub-project of Apache Flink has completed. We invite you to explore the new
>>>> resources available:
>>>>
>>>> - GitHub Repository: https://github.com/apache/flink-cdc
>>>> - Flink CDC Documentation:
>>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>>>
>>>> After Flink community accepted this donation[1], we have completed
>>>> software copyright signing, code repo migration, code cleanup, website
>>>> migration, CI migration and github issues migration etc.
>>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
>>>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors
>>>> for their contributions and help during this process!
>>>>
>>>>
>>>> For all previous contributors: The contribution process has slightly
>>>> changed to align with the main Flink project. To report bugs or suggest new
>>>> features, please open tickets in
>>>> Apache Jira (https://issues.apache.org/jira).  Note that we will no
>>>> longer accept GitHub issues for these purposes.
>>>>
>>>>
>>>> Welcome to explore the new repository and documentation. Your feedback
>>>> and contributions are invaluable as we continue to improve Flink CDC.
>>>>
>>>> Thanks everyone for your support and happy exploring Flink CDC!
>>>>
>>>> Best,
>>>> Leonard
>>>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>>>
>>>>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Lincoln Lee
Congrats, thanks for the great work!


Best,
Lincoln Lee


On Wed, Mar 20, 2024 at 22:48, Peter Huang  wrote:

> Congratulations
>
>
> Best Regards
> Peter Huang
>
> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>
>>
>> Congratulations
>>
>>
>>
>> Best,
>> Huajie Wang
>>
>>
>>
>> On Wed, Mar 20, 2024 at 21:36, Leonard Xu  wrote:
>>
>>> Hi devs and users,
>>>
>>> We are thrilled to announce that the donation of Flink CDC as a
>>> sub-project of Apache Flink has completed. We invite you to explore the new
>>> resources available:
>>>
>>> - GitHub Repository: https://github.com/apache/flink-cdc
>>> - Flink CDC Documentation:
>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>>
>>> After Flink community accepted this donation[1], we have completed
>>> software copyright signing, code repo migration, code cleanup, website
>>> migration, CI migration and github issues migration etc.
>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng
>>> Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
>>> contributions and help during this process!
>>>
>>>
>>> For all previous contributors: The contribution process has slightly
>>> changed to align with the main Flink project. To report bugs or suggest new
>>> features, please open tickets in
>>> Apache Jira (https://issues.apache.org/jira).  Note that we will no
>>> longer accept GitHub issues for these purposes.
>>>
>>>
>>> Welcome to explore the new repository and documentation. Your feedback
>>> and contributions are invaluable as we continue to improve Flink CDC.
>>>
>>> Thanks everyone for your support and happy exploring Flink CDC!
>>>
>>> Best,
>>> Leonard
>>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>>
>>>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Huajie Wang
Congratulations



Best,
Huajie Wang



On Wed, Mar 20, 2024 at 21:36, Leonard Xu  wrote:

> Hi devs and users,
>
> We are thrilled to announce that the donation of Flink CDC as a
> sub-project of Apache Flink has completed. We invite you to explore the new
> resources available:
>
> - GitHub Repository: https://github.com/apache/flink-cdc
> - Flink CDC Documentation:
> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>
> After Flink community accepted this donation[1], we have completed
> software copyright signing, code repo migration, code cleanup, website
> migration, CI migration and github issues migration etc.
> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng
> Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
> contributions and help during this process!
>
>
> For all previous contributors: The contribution process has slightly
> changed to align with the main Flink project. To report bugs or suggest new
> features, please open tickets in
> Apache Jira (https://issues.apache.org/jira).  Note that we will no
> longer accept GitHub issues for these purposes.
>
>
> Welcome to explore the new repository and documentation. Your feedback and
> contributions are invaluable as we continue to improve Flink CDC.
>
> Thanks everyone for your support and happy exploring Flink CDC!
>
> Best,
> Leonard
> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>
>


[ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Leonard Xu
Hi devs and users,

We are thrilled to announce that the donation of Flink CDC as a sub-project of 
Apache Flink has completed. We invite you to explore the new resources 
available:

- GitHub Repository: https://github.com/apache/flink-cdc
- Flink CDC Documentation: 
https://nightlies.apache.org/flink/flink-cdc-docs-stable

After Flink community accepted this donation[1], we have completed software 
copyright signing, code repo migration, code cleanup, website migration, CI 
migration and github issues migration etc. 
Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng Ren, 
Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their 
contributions and help during this process!


For all previous contributors: The contribution process has slightly changed to 
align with the main Flink project. To report bugs or suggest new features, 
please open tickets in 
Apache Jira (https://issues.apache.org/jira).  Note that we will no longer 
accept GitHub issues for these purposes.


Welcome to explore the new repository and documentation. Your feedback and 
contributions are invaluable as we continue to improve Flink CDC.

Thanks everyone for your support and happy exploring Flink CDC!

Best,
Leonard
[1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob



Re: flink operator HA job intermittently fails with 'unable to update ConfigMapLock'

2024-03-20 by Yang Wang
This is usually caused by a problem on the API server side that makes a single ConfigMap renew-lease-annotation operation fail; Flink retries it by default.

If you find that this SocketTimeoutException is actually causing the job to fail over, you can increase the following two parameters:
high-availability.kubernetes.leader-election.lease-duration: 60s
high-availability.kubernetes.leader-election.renew-deadline: 60s
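For context, these two options go into flink-conf.yaml alongside the rest of the Kubernetes HA setup. A minimal sketch is shown below; the HA service type and storage path are illustrative placeholders, and only the two leader-election timeouts come from the suggestion above:

```yaml
# Kubernetes HA sketch: only the two leader-election timeouts are from the advice above
high-availability.type: kubernetes                     # placeholder: enable the Kubernetes HA services
high-availability.storageDir: s3://my-bucket/flink/ha  # placeholder: any durable storage path
high-availability.kubernetes.leader-election.lease-duration: 60s
high-availability.kubernetes.leader-election.renew-deadline: 60s
```

Larger values make leader election more tolerant of slow API-server responses, at the cost of slower failover when a JobManager actually dies.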


Best,
Yang

On Tue, Mar 12, 2024 at 11:38 AM kellygeorg...@163.com <
kellygeorg...@163.com> wrote:

> Could any expert offer some guidance??? Waiting online
>
>
>
>  Original message 
> | From | kellygeorg...@163.com |
> | Date | 2024-03-11 20:29 |
> | To | user-zh |
> | Cc | |
> | Subject | flink operator HA job intermittently fails with 'unable to update ConfigMapLock' |
> The JobManager error is shown below; what could be the cause?
> Exception occurred while renewing lock: Unable to update ConfigMapLock
>
> Caused by: io.fabric8.kubernetes.client.KubernetesClientException:
> Operation: [replace] for kind: [ConfigMap] with name: [flink task
> xx- configmap] in namespace: [default]
>
>
> Caused by: java.net.SocketTimeoutException: timeout
>
>
>
>
>
>
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yu Li
Congrats and thanks all for the efforts!

Best Regards,
Yu

On Tue, 19 Mar 2024 at 11:51, gongzhongqiang  wrote:
>
> Congrats! Thanks to everyone involved!
>
> Best,
> Zhongqiang Gong
>
> On Mon, Mar 18, 2024 at 16:27, Lincoln Lee  wrote:
>>
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>>
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the improvements
>> for this bugfix release:
>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>>
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>>
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>>
>>
>> Best,
>> Yun, Jing, Martijn and Lincoln


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by gongzhongqiang
Congrats! Thanks to everyone involved!

Best,
Zhongqiang Gong

On Mon, Mar 18, 2024 at 16:27, Lincoln Lee  wrote:

> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements
> for this bugfix release:
>
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
>
> Best,
> Yun, Jing, Martijn and Lincoln
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Weihua Hu
Congratulations

Best,
Weihua


On Tue, Mar 19, 2024 at 10:56 AM Rodrigo Meneses  wrote:

> Congratulations
>
> On Mon, Mar 18, 2024 at 7:43 PM Yu Chen  wrote:
>
> > Congratulations!
> > Thanks to release managers and everyone involved!
> >
> > Best,
> > Yu Chen
> >
> >
> > > On Mar 19, 2024, at 01:01, Jeyhun Karimov  wrote:
> > >
> > > Congrats!
> > > Thanks to release managers and everyone involved.
> > >
> > > Regards,
> > > Jeyhun
> > >
> > > On Mon, Mar 18, 2024 at 9:25 AM Lincoln Lee 
> > wrote:
> > >
> > >> The Apache Flink community is very happy to announce the release of
> > Apache
> > >> Flink 1.19.0, which is the first release for the Apache Flink 1.19
> > series.
> > >>
> > >> Apache Flink® is an open-source stream processing framework for
> > >> distributed, high-performing, always-available, and accurate data
> > streaming
> > >> applications.
> > >>
> > >> The release is available for download at:
> > >> https://flink.apache.org/downloads.html
> > >>
> > >> Please check out the release blog post for an overview of the
> > improvements
> > >> for this bugfix release:
> > >>
> > >>
> >
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > >>
> > >> The full release notes are available in Jira:
> > >>
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353282
> > >>
> > >> We would like to thank all contributors of the Apache Flink community
> > who
> > >> made this release possible!
> > >>
> > >>
> > >> Best,
> > >> Yun, Jing, Martijn and Lincoln
> > >>
> >
> >
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yu Chen
Congratulations!
Thanks to release managers and everyone involved!

Best,
Yu Chen
 

> On Mar 19, 2024, at 01:01, Jeyhun Karimov  wrote:
> 
> Congrats!
> Thanks to release managers and everyone involved.
> 
> Regards,
> Jeyhun
> 
> On Mon, Mar 18, 2024 at 9:25 AM Lincoln Lee  wrote:
> 
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>> 
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>> 
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>> 
>> Please check out the release blog post for an overview of the improvements
>> for this bugfix release:
>> 
>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>> 
>> The full release notes are available in Jira:
>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353282
>> 
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>> 
>> 
>> Best,
>> Yun, Jing, Martijn and Lincoln
>> 



Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Ron liu
Congratulations

Best,
Ron

On Mon, Mar 18, 2024 at 20:01, Yanfei Lei  wrote:

> Congrats, thanks for the great work!
>
> On Mon, Mar 18, 2024 at 19:30, Sergey Nuyanzin  wrote:
> >
> > Congratulations, thanks release managers and everyone involved for the
> great work!
> >
> > On Mon, Mar 18, 2024 at 12:15 PM Benchao Li 
> wrote:
> >>
> >> Congratulations! And thanks to all release managers and everyone
> >> involved in this release!
> >>
> >> On Mon, Mar 18, 2024 at 18:11, Yubin Li  wrote:
> >> >
> >> > Congratulations!
> >> >
> >> > Thanks to release managers and everyone involved.
> >> >
> >> > On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu 
> wrote:
> >> > >
> >> > > Congratulations!
> >> > > Thanks release managers and all involved!
> >> > >
> >> > > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan 
> wrote:
> >> > >
> >> > > > Congratulations!
> >> > > >
> >> > > > Best,
> >> > > > Hang
> >> > > >
> >> > > > On Mon, Mar 18, 2024 at 17:18, Paul Lam  wrote:
> >> > > >
> >> > > > > Congrats! Thanks to everyone involved!
> >> > > > >
> >> > > > > Best,
> >> > > > > Paul Lam
> >> > > > >
> >> > > > > > On Mar 18, 2024, at 16:37, Samrat Deb  wrote:
> >> > > > > >
> >> > > > > > Congratulations !
> >> > > > > >
> >> > > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li <
> jingsongl...@gmail.com>
> >> > > > > wrote:
> >> > > > > >
> >> > > > > >> Congratulations!
> >> > > > > >>
> >> > > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <
> 1996fan...@gmail.com> wrote:
> >> > > > > >>>
> >> > > > > >>> Congratulations, thanks for the great work!
> >> > > > > >>>
> >> > > > > >>> Best,
> >> > > > > >>> Rui
> >> > > > > >>>
> >> > > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee <
> lincoln.8...@gmail.com>
> >> > > > > >> wrote:
> >> > > > > >>>>
> >> > > > > >>>> The Apache Flink community is very happy to announce the
> release of
> >> > > > > >> Apache Flink 1.19.0, which is the first release for the
> Apache Flink
> >> > > > > 1.19
> >> > > > > >> series.
> >> > > > > >>>>
> >> > > > > >>>> Apache Flink® is an open-source stream processing
> framework for
> >> > > > > >> distributed, high-performing, always-available, and accurate
> data
> >> > > > > streaming
> >> > > > > >> applications.
> >> > > > > >>>>
> >> > > > > >>>> The release is available for download at:
> >> > > > > >>>> https://flink.apache.org/downloads.html
> >> > > > > >>>>
> >> > > > > >>>> Please check out the release blog post for an overview of
> the
> >> > > > > >> improvements for this bugfix release:
> >> > > > > >>>>
> >> > > > > >>
> >> > > > >
> >> > > >
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> >> > > > > >>>>
> >> > > > > >>>> The full release notes are available in Jira:
> >> > > > > >>>>
> >> > > > > >>
> >> > > > >
> >> > > >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> >> > > > > >>>>
> >> > > > > >>>> We would like to thank all contributors of the Apache Flink
> >> > > > community
> >> > > > > >> who made this release possible!
> >> > > > > >>>>
> >> > > > > >>>>
> >> > > > > >>>> Best,
> >> > > > > >>>> Yun, Jing, Martijn and Lincoln
> >> > > > > >>
> >> > > > >
> >> > > > >
> >> > > >
> >> > >
> >> > >
> >> > > --
> >> > > Best,
> >> > > Hangxiang.
> >>
> >>
> >>
> >> --
> >>
> >> Best,
> >> Benchao Li
> >
> >
> >
> > --
> > Best regards,
> > Sergey
>
>
>
> --
> Best,
> Yanfei
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yanfei Lei
Congrats, thanks for the great work!

Sergey Nuyanzin  于2024年3月18日周一 19:30写道:
>
> Congratulations, thanks release managers and everyone involved for the great 
> work!
>
> On Mon, Mar 18, 2024 at 12:15 PM Benchao Li  wrote:
>>
>> Congratulations! And thanks to all release managers and everyone
>> involved in this release!
>>
>> On Mon, Mar 18, 2024 at 18:11, Yubin Li  wrote:
>> >
>> > Congratulations!
>> >
>> > Thanks to release managers and everyone involved.
>> >
>> > On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu  wrote:
>> > >
>> > > Congratulations!
>> > > Thanks release managers and all involved!
>> > >
>> > > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan  wrote:
>> > >
>> > > > Congratulations!
>> > > >
>> > > > Best,
>> > > > Hang
>> > > >
>> > > > On Mon, Mar 18, 2024 at 17:18, Paul Lam  wrote:
>> > > >
>> > > > > Congrats! Thanks to everyone involved!
>> > > > >
>> > > > > Best,
>> > > > > Paul Lam
>> > > > >
>> > > > > > On Mar 18, 2024, at 16:37, Samrat Deb  wrote:
>> > > > > >
>> > > > > > Congratulations !
>> > > > > >
>> > > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li 
>> > > > > > 
>> > > > > wrote:
>> > > > > >
>> > > > > >> Congratulations!
>> > > > > >>
>> > > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> 
>> > > > > >> wrote:
>> > > > > >>>
>> > > > > >>> Congratulations, thanks for the great work!
>> > > > > >>>
>> > > > > >>> Best,
>> > > > > >>> Rui
>> > > > > >>>
>> > > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
>> > > > > >>> 
>> > > > > >> wrote:
>> > > > > >>>>
>> > > > > >>>> The Apache Flink community is very happy to announce the 
>> > > > > >>>> release of
>> > > > > >> Apache Flink 1.19.0, which is the first release for the Apache
>> > > > > >> Flink
>> > > > > 1.19
>> > > > > >> series.
>> > > > > >>>>
>> > > > > >>>> Apache Flink® is an open-source stream processing framework for
>> > > > > >> distributed, high-performing, always-available, and accurate data
>> > > > > streaming
>> > > > > >> applications.
>> > > > > >>>>
>> > > > > >>>> The release is available for download at:
>> > > > > >>>> https://flink.apache.org/downloads.html
>> > > > > >>>>
>> > > > > >>>> Please check out the release blog post for an overview of the
>> > > > > >> improvements for this bugfix release:
>> > > > > >>>>
>> > > > > >>
>> > > > >
>> > > > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>> > > > > >>>>
>> > > > > >>>> The full release notes are available in Jira:
>> > > > > >>>>
>> > > > > >>
>> > > > >
>> > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>> > > > > >>>>
>> > > > > >>>> We would like to thank all contributors of the Apache Flink
>> > > > community
>> > > > > >> who made this release possible!
>> > > > > >>>>
>> > > > > >>>>
>> > > > > >>>> Best,
>> > > > > >>>> Yun, Jing, Martijn and Lincoln
>> > > > > >>
>> > > > >
>> > > > >
>> > > >
>> > >
>> > >
>> > > --
>> > > Best,
>> > > Hangxiang.
>>
>>
>>
>> --
>>
>> Best,
>> Benchao Li
>
>
>
> --
> Best regards,
> Sergey



-- 
Best,
Yanfei


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Sergey Nuyanzin
Congratulations, thanks release managers and everyone involved for the
great work!

On Mon, Mar 18, 2024 at 12:15 PM Benchao Li  wrote:

> Congratulations! And thanks to all release managers and everyone
> involved in this release!
>
> On Mon, Mar 18, 2024 at 18:11, Yubin Li  wrote:
> >
> > Congratulations!
> >
> > Thanks to release managers and everyone involved.
> >
> > On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu 
> wrote:
> > >
> > > Congratulations!
> > > Thanks release managers and all involved!
> > >
> > > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan 
> wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Hang
> > > >
> > > > > On Mon, Mar 18, 2024 at 17:18, Paul Lam  wrote:
> > > >
> > > > > Congrats! Thanks to everyone involved!
> > > > >
> > > > > Best,
> > > > > Paul Lam
> > > > >
> > > > > > > On Mar 18, 2024, at 16:37, Samrat Deb  wrote:
> > > > > >
> > > > > > Congratulations !
> > > > > >
> > > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li <
> jingsongl...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > >> Congratulations!
> > > > > >>
> > > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com>
> wrote:
> > > > > >>>
> > > > > >>> Congratulations, thanks for the great work!
> > > > > >>>
> > > > > >>> Best,
> > > > > >>> Rui
> > > > > >>>
> > > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee <
> lincoln.8...@gmail.com>
> > > > > >> wrote:
> > > > > >>>>
> > > > > >>>> The Apache Flink community is very happy to announce the
> release of
> > > > > >> Apache Flink 1.19.0, which is the first release for the Apache
> Flink
> > > > > 1.19
> > > > > >> series.
> > > > > >>>>
> > > > > >>>> Apache Flink® is an open-source stream processing framework
> for
> > > > > >> distributed, high-performing, always-available, and accurate
> data
> > > > > streaming
> > > > > >> applications.
> > > > > >>>>
> > > > > >>>> The release is available for download at:
> > > > > >>>> https://flink.apache.org/downloads.html
> > > > > >>>>
> > > > > >>>> Please check out the release blog post for an overview of the
> > > > > >> improvements for this bugfix release:
> > > > > >>>>
> > > > > >>
> > > > >
> > > >
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > > > > >>>>
> > > > > >>>> The full release notes are available in Jira:
> > > > > >>>>
> > > > > >>
> > > > >
> > > >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> > > > > >>>>
> > > > > >>>> We would like to thank all contributors of the Apache Flink
> > > > community
> > > > > >> who made this release possible!
> > > > > >>>>
> > > > > >>>>
> > > > > >>>> Best,
> > > > > >>>> Yun, Jing, Martijn and Lincoln
> > > > > >>
> > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > Best,
> > > Hangxiang.
>
>
>
> --
>
> Best,
> Benchao Li
>


-- 
Best regards,
Sergey


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Benchao Li
Congratulations! And thanks to all release managers and everyone
involved in this release!

On Mon, Mar 18, 2024 at 18:11, Yubin Li  wrote:
>
> Congratulations!
>
> Thanks to release managers and everyone involved.
>
> On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu  wrote:
> >
> > Congratulations!
> > Thanks release managers and all involved!
> >
> > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan  wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Hang
> > >
> > > On Mon, Mar 18, 2024 at 17:18, Paul Lam  wrote:
> > >
> > > > Congrats! Thanks to everyone involved!
> > > >
> > > > Best,
> > > > Paul Lam
> > > >
> > > > > On Mar 18, 2024, at 16:37, Samrat Deb  wrote:
> > > > >
> > > > > Congratulations !
> > > > >
> > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li 
> > > > wrote:
> > > > >
> > > > >> Congratulations!
> > > > >>
> > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> > > > >>>
> > > > >>> Congratulations, thanks for the great work!
> > > > >>>
> > > > >>> Best,
> > > > >>> Rui
> > > > >>>
> > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> > > > >> wrote:
> > > > >>>>
> > > > >>>> The Apache Flink community is very happy to announce the release of
> > > > >> Apache Flink 1.19.0, which is the first release for the Apache Flink
> > > > 1.19
> > > > >> series.
> > > > >>>>
> > > > >>>> Apache Flink® is an open-source stream processing framework for
> > > > >> distributed, high-performing, always-available, and accurate data
> > > > streaming
> > > > >> applications.
> > > > >>>>
> > > > >>>> The release is available for download at:
> > > > >>>> https://flink.apache.org/downloads.html
> > > > >>>>
> > > > >>>> Please check out the release blog post for an overview of the
> > > > >> improvements for this bugfix release:
> > > > >>>>
> > > > >>
> > > >
> > > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > > > >>>>
> > > > >>>> The full release notes are available in Jira:
> > > > >>>>
> > > > >>
> > > >
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> > > > >>>>
> > > > >>>> We would like to thank all contributors of the Apache Flink
> > > community
> > > > >> who made this release possible!
> > > > >>>>
> > > > >>>>
> > > > >>>> Best,
> > > > >>>> Yun, Jing, Martijn and Lincoln
> > > > >>
> > > >
> > > >
> > >
> >
> >
> > --
> > Best,
> > Hangxiang.



-- 

Best,
Benchao Li


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yubin Li
Congratulations!

Thanks to release managers and everyone involved.

On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu  wrote:
>
> Congratulations!
> Thanks release managers and all involved!
>
> On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan  wrote:
>
> > Congratulations!
> >
> > Best,
> > Hang
> >
> > On Mon, Mar 18, 2024 at 17:18, Paul Lam  wrote:
> >
> > > Congrats! Thanks to everyone involved!
> > >
> > > Best,
> > > Paul Lam
> > >
> > > > On Mar 18, 2024, at 16:37, Samrat Deb  wrote:
> > > >
> > > > Congratulations !
> > > >
> > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li 
> > > wrote:
> > > >
> > > >> Congratulations!
> > > >>
> > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> > > >>>
> > > >>> Congratulations, thanks for the great work!
> > > >>>
> > > >>> Best,
> > > >>> Rui
> > > >>>
> > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> > > >> wrote:
> > > >>>>
> > > >>>> The Apache Flink community is very happy to announce the release of
> > > >> Apache Flink 1.19.0, which is the first release for the Apache Flink
> > > 1.19
> > > >> series.
> > > >>>>
> > > >>>> Apache Flink® is an open-source stream processing framework for
> > > >> distributed, high-performing, always-available, and accurate data
> > > streaming
> > > >> applications.
> > > >>>>
> > > >>>> The release is available for download at:
> > > >>>> https://flink.apache.org/downloads.html
> > > >>>>
> > > >>>> Please check out the release blog post for an overview of the
> > > >> improvements for this bugfix release:
> > > >>>>
> > > >>
> > >
> > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > > >>>>
> > > >>>> The full release notes are available in Jira:
> > > >>>>
> > > >>
> > >
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> > > >>>>
> > > >>>> We would like to thank all contributors of the Apache Flink
> > community
> > > >> who made this release possible!
> > > >>>>
> > > >>>>
> > > >>>> Best,
> > > >>>> Yun, Jing, Martijn and Lincoln
> > > >>
> > >
> > >
> >
>
>
> --
> Best,
> Hangxiang.


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Zakelly Lan
Congratulations!

Thanks Lincoln, Yun, Martijn and Jing for driving this release.
Thanks everyone involved.


Best,
Zakelly

On Mon, Mar 18, 2024 at 5:05 PM weijie guo 
wrote:

> Congratulations!
>
> Thanks release managers and all the contributors involved.
>
> Best regards,
>
> Weijie
>
>
> On Mon, Mar 18, 2024 at 16:45, Leonard Xu  wrote:
>
>> Congratulations, thanks release managers and all involved for the great
>> work!
>>
>>
>> Best,
>> Leonard
>>
>> > On Mar 18, 2024, at 4:32 PM, Jingsong Li  wrote:
>> >
>> > Congratulations!
>> >
>> > On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>> >>
>> >> Congratulations, thanks for the great work!
>> >>
>> >> Best,
>> >> Rui
>> >>
>> >> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
>> wrote:
>> >>>
>> >>> The Apache Flink community is very happy to announce the release of
>> Apache Flink 1.19.0, which is the first release for the Apache Flink 1.19
>> series.
>> >>>
>> >>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>> >>>
>> >>> The release is available for download at:
>> >>> https://flink.apache.org/downloads.html
>> >>>
>> >>> Please check out the release blog post for an overview of the
>> improvements for this bugfix release:
>> >>>
>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>> >>>
>> >>> The full release notes are available in Jira:
>> >>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>> >>>
>> >>> We would like to thank all contributors of the Apache Flink community
>> who made this release possible!
>> >>>
>> >>>
>> >>> Best,
>> >>> Yun, Jing, Martijn and Lincoln
>>
>>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by weijie guo
Congratulations!

Thanks release managers and all the contributors involved.

Best regards,

Weijie


On Mon, Mar 18, 2024 at 16:45, Leonard Xu  wrote:

> Congratulations, thanks release managers and all involved for the great
> work!
>
>
> Best,
> Leonard
>
> > On Mar 18, 2024, at 4:32 PM, Jingsong Li  wrote:
> >
> > Congratulations!
> >
> > On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> >>
> >> Congratulations, thanks for the great work!
> >>
> >> Best,
> >> Rui
> >>
> >> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> wrote:
> >>>
> >>> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.19.0, which is the first release for the Apache Flink 1.19
> series.
> >>>
> >>> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> >>>
> >>> The release is available for download at:
> >>> https://flink.apache.org/downloads.html
> >>>
> >>> Please check out the release blog post for an overview of the
> improvements for this bugfix release:
> >>>
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> >>>
> >>> The full release notes are available in Jira:
> >>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> >>>
> >>> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> >>>
> >>>
> >>> Best,
> >>> Yun, Jing, Martijn and Lincoln
>
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Leonard Xu
Congratulations, thanks release managers and all involved for the great work!


Best,
Leonard

> On Mar 18, 2024, at 4:32 PM, Jingsong Li  wrote:
> 
> Congratulations!
> 
> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>> 
>> Congratulations, thanks for the great work!
>> 
>> Best,
>> Rui
>> 
>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee  wrote:
>>> 
>>> The Apache Flink community is very happy to announce the release of Apache 
>>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>>> 
>>> Apache Flink® is an open-source stream processing framework for 
>>> distributed, high-performing, always-available, and accurate data streaming 
>>> applications.
>>> 
>>> The release is available for download at:
>>> https://flink.apache.org/downloads.html
>>> 
>>> Please check out the release blog post for an overview of the improvements 
>>> for this bugfix release:
>>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>>> 
>>> The full release notes are available in Jira:
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>>> 
>>> We would like to thank all contributors of the Apache Flink community who 
>>> made this release possible!
>>> 
>>> 
>>> Best,
>>> Yun, Jing, Martijn and Lincoln



Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 文章 Jark Wu
Congrats! Thanks to Lincoln, Jing, Yun and Martijn for driving this release.
Thanks to all who were involved in this release!

Best,
Jark


On Mon, 18 Mar 2024 at 16:31, Rui Fan <1996fan...@gmail.com> wrote:

> Congratulations, thanks for the great work!
>
> Best,
> Rui
>
> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> wrote:
>
> > The Apache Flink community is very happy to announce the release of Apache
> > Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 文章 Jingsong Li
Congratulations!

On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> Congratulations, thanks for the great work!
>
> Best,
> Rui
>
> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee  wrote:
>>
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 文章 Rui Fan
Congratulations, thanks for the great work!

Best,
Rui

On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee  wrote:

> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.


[ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 文章 Lincoln Lee
The Apache Flink community is very happy to announce the release of Apache
Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
in this release:
https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353282

We would like to thank all contributors of the Apache Flink community who
made this release possible!


Best,
Yun, Jing, Martijn and Lincoln


Re: 急 [FLINK-34170] 何时能够修复?

2024-03-14 文章 Benchao Li
FLINK-34170 is only a UI display issue; it does not affect the actual execution.

The problem of the pushed-down lookup filter not taking effect in the JDBC
connector has been fixed in FLINK-33365; the latest JDBC connector release
already ships that fix, give it a try~
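For reference, on Flink 1.17 the JDBC connector is an externalized artifact that is versioned independently of Flink itself; a pom fragment might look like the following sketch (the version shown is an assumption for illustration, pick the latest release suffixed for your Flink minor from Maven Central):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-jdbc</artifactId>
    <!-- example version only; use the newest release matching your Flink minor -->
    <version>3.1.2-1.17</version>
</dependency>
```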

casel.chen  于2024年3月15日周五 10:39写道:
>
> We recently hit the same problem described in FLINK-34170 while developing a
> Flink SQL job on Flink 1.17.1 that does a temporal join on a dimension table
> with a composite primary key. In which version is this major issue expected
> to be fixed? Thanks!
>
>
> select xxx from kafka_table as kt
> left join phoenix_table FOR SYSTEM_TIME AS OF phoenix_table.proctime as pt
> on kt.trans_id = pt.trans_id and pt.trans_date =
> DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyyMMdd');
>
>
> The primary key of the phoenix table is the composite key trans_id +
> trans_date. At runtime Flink scans the phoenix table with only the trans_id
> field and then filters the scan result by the trans_date value afterwards.
>
>
> https://issues.apache.org/jira/browse/FLINK-34170



-- 

Best,
Benchao Li


急 [FLINK-34170] 何时能够修复?

2024-03-14 文章 casel.chen
We recently hit the same problem described in FLINK-34170 while developing a
Flink SQL job on Flink 1.17.1 that does a temporal join on a dimension table
with a composite primary key. In which version is this major issue expected to
be fixed? Thanks!


select xxx from kafka_table as kt
left join phoenix_table FOR SYSTEM_TIME AS OF phoenix_table.proctime as pt
on kt.trans_id = pt.trans_id and pt.trans_date =
DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyyMMdd');


The primary key of the phoenix table is the composite key trans_id +
trans_date. At runtime Flink scans the phoenix table with only the trans_id
field and then filters the scan result by the trans_date value afterwards.


https://issues.apache.org/jira/browse/FLINK-34170

flink k8s operator chk config interval bug.inoperative

2024-03-14 文章 kcz
kcz
573693...@qq.com





Re: flink写kafka时,并行度和分区数的设置问题

2024-03-13 文章 Zhanghao Chen
Hi,

Which partition a record goes to depends on the partitioner of the Kafka sink
[1]. By default Kafka's own Default Partitioner is used, which applies a
so-called sticky partitioning strategy: records with a key are routed by
hashing the key, while records without a key are written to one randomly
chosen partition that the producer "sticks" to for a while to improve
batching; once the batch is flushed it switches to another random partition.
This balances batching efficiency against even distribution. See [2] for
details.

So with the default configuration there is no round-robin traversal that would
hurt batching as you feared; before you reach Kafka's per-partition write
bottleneck, simply increasing the sink parallelism is usually enough to raise
throughput. In some special cases, e.g. very high parallelism with very low
per-subtask QPS so that one batching window contains only one or two records,
batching degrades and you may hit the Kafka write bottleneck; there, lowering
the parallelism may actually raise throughput by improving batching and, with
compression enabled, reducing the traffic written to Kafka.

[1] 
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka/#sink-partitioning
[2] https://www.cnblogs.com/huxi2b/p/12540092.html



From: chenyu_opensource 
Sent: Wednesday, March 13, 2024 15:25
To: user-zh@flink.apache.org 
Subject: flink写kafka时,并行度和分区数的设置问题

Hi:
When Flink writes data to Kafka (Kafka as the sink) and the topic has fewer
partitions (60 configured) than the configured parallelism (300), do the tasks
write to these partitions round-robin, and does that hurt write efficiency (is
there traversal overhead)? If we then increase the partition count (to 200, or
straight to 300), will write efficiency improve noticeably?

Is there relevant source code to look at?
Looking forward to your reply. Best regards and thanks!

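As a side note, the sticky behavior described in the reply above can be sketched in a few lines of plain Java. This is a simplified illustration of the strategy only, not Kafka's actual DefaultPartitioner code, and all class and method names here are made up for the example:

```java
import java.util.concurrent.ThreadLocalRandom;

/** Simplified sketch of Kafka's "sticky" partitioning for producer records. */
class StickyPartitionerSketch {
    private final int numPartitions;
    private int stickyPartition = -1; // current partition for keyless records

    StickyPartitionerSketch(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    /** Keyed records are routed by key hash; keyless records stick to one partition. */
    int partition(String key) {
        if (key != null) {
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }
        if (stickyPartition < 0) {
            onNewBatch(); // first keyless record: pick a random partition to stick to
        }
        return stickyPartition;
    }

    /** Called when the current batch is flushed: switch to a new random partition. */
    void onNewBatch() {
        stickyPartition = ThreadLocalRandom.current().nextInt(numPartitions);
    }
}
```

Within one batch every keyless record lands in the same partition (good batching); across batches the writes spread over all partitions (long-run balance), so there is no per-record traversal of the 60 partitions.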




Re: flink集群如何将日志直接写入elasticsearch中?

2024-03-13 文章 Jiabao Sun
A relatively simple approach is to run a filebeat process that tails
jobmanager.log and taskmanager.log.

Best,
Jiabao

kellygeorg...@163.com  于2024年3月13日周三 15:30写道:

> Is there a convenient, quick solution for this?
>
>
>
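A minimal filebeat configuration for that approach could look like the following sketch (paths, index name and the elasticsearch address are placeholders to adapt to your environment):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /opt/flink/log/flink-*-jobmanager-*.log
      - /opt/flink/log/flink-*-taskmanager-*.log
    # join multi-line stack traces into one event
    multiline.pattern: '^\d{4}-\d{2}-\d{2}'
    multiline.negate: true
    multiline.match: after

setup.template.name: "flink-logs"
setup.template.pattern: "flink-logs-*"

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  index: "flink-logs-%{+yyyy.MM.dd}"
```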


flink集群如何将日志直接写入elasticsearch中?

2024-03-13 文章 kellygeorg...@163.com
Is there a convenient, quick solution for this?




flink写kafka时,并行度和分区数的设置问题

2024-03-13 文章 chenyu_opensource
Hi:
When Flink writes data to Kafka (Kafka as the sink) and the topic has fewer
partitions (60 configured) than the configured parallelism (300), do the tasks
write to these partitions round-robin, and does that hurt write efficiency (is
there traversal overhead)? If we then increase the partition count (to 200, or
straight to 300), will write efficiency improve noticeably?

Is there relevant source code to look at?
Looking forward to your reply. Best regards and thanks!





回复:flink operator 高可用任务偶发性报错unable to update ConfigMapLock

2024-03-11 文章 kellygeorg...@163.com
Could any expert shed some light on this??? Waiting online for an answer.



 回复的原邮件 
| 发件人 | kellygeorg...@163.com |
| 日期 | 2024年03月11日 20:29 |
| 收件人 | user-zh |
| 抄送至 | |
| 主题 | flink operator 高可用任务偶发性报错unable to update ConfigMapLock |
The jobmanager error is shown below; what could be the cause?
Exception occurred while renewing lock: Unable to update ConfigMapLock

Caused by: io.fabric8.kubernetes.client.KubernetesClientException:
Operation: [replace] for kind: [ConfigMap] with name: [flink task
xx- configmap] in namespace: [default]


Caused by: java.net.SocketTimeoutException: timeout








flink operator 高可用任务偶发性报错unable to update ConfigMapLock

2024-03-11 文章 kellygeorg...@163.com
The jobmanager error is shown below; what could be the cause?
Exception occurred while renewing lock: Unable to update ConfigMapLock

Caused by: io.fabric8.kubernetes.client.KubernetesClientException:
Operation: [replace] for kind: [ConfigMap] with name: [flink task
xx- configmap] in namespace: [default]


Caused by: java.net.SocketTimeoutException: timeout








Re: 回复: Flink DataStream 作业如何获取到作业血缘?

2024-03-08 文章 Zhanghao Chen
It is actually feasible. You can modify the source of StreamExecutionEnvironment
directly so that a customized listener of yours is registered on every job by
default, and then expose that information in some way. In FLIP-314 [1] we plan
to provide such an interface natively in Flink so that you can register your
own listener to obtain lineage information; it has not been released yet, so
you can roll your own for now.

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-314:+Support+Customized+Job+Lineage+Listener
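The shape of such a hook is roughly the following. This is a self-contained sketch: the two small interfaces below only mimic the contract of Flink's org.apache.flink.core.execution.JobListener / JobClient (normally registered via StreamExecutionEnvironment#registerJobListener), so the idea can be shown without a Flink dependency; MiniEnv and all names here are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

/** Stand-in for Flink's JobClient: exposes the id of a submitted job. */
interface JobClient {
    String getJobId();
}

/** Stand-in for Flink's JobListener: notified when a job is submitted. */
interface JobListener {
    void onJobSubmitted(JobClient client, Throwable failure);
}

/** A listener that records every submitted job id, e.g. to report lineage. */
class LineageJobListener implements JobListener {
    final List<String> submittedJobIds = new ArrayList<>();

    @Override
    public void onJobSubmitted(JobClient client, Throwable failure) {
        if (failure == null && client != null) {
            submittedJobIds.add(client.getJobId());
        }
    }
}

/** Minimal environment mimicking registerJobListener plus job submission. */
class MiniEnv {
    private final List<JobListener> listeners = new ArrayList<>();

    void registerJobListener(JobListener listener) {
        listeners.add(listener);
    }

    /** Simulated submission: notify all listeners with the new job's client. */
    JobClient submit(String jobId) {
        JobClient client = () -> jobId;
        for (JobListener l : listeners) {
            l.onJobSubmitted(client, null);
        }
        return client;
    }
}
```

In a real setup the platform-side patch would register such a listener inside StreamExecutionEnvironment, so every submitted job reports its job id together with the extracted transformation information.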

From: 阿华田 
Sent: Friday, March 8, 2024 18:47
To: user-zh@flink.apache.org 
Subject: 回复: Flink DataStream 作业如何获取到作业血缘?

We want to modify the source code so that for any job submitted to our
real-time platform, the lineage information is obtained when the DAG is
initialized; registering on the StreamExecutionEnvironment can only be written
inside the job itself and does not meet that requirement.




| |
阿华田
|
|
a15733178...@163.com
|
签名由网易邮箱大师定制


在2024年03月8日 18:23,Zhanghao Chen 写道:
You can look at how OpenLineage integrates with Flink [1]: it registers a
JobListener on the StreamExecutionEnvironment (through which you can get the
JobClient and hence the job id), and then extracts the transformation
information from the execution environment for processing [2].

[1] https://openlineage.io/docs/integrations/flink/
[2] 
https://github.com/OpenLineage/OpenLineage/blob/main/integration/flink/app/src/main/java/io/openlineage/flink/OpenLineageFlinkJobListener.java


Best,
Zhanghao Chen


回复: Flink DataStream 作业如何获取到作业血缘?

2024-03-08 文章 阿华田
We want to modify the source code so that for any job submitted to our
real-time platform, the lineage information is obtained when the DAG is
initialized; registering on the StreamExecutionEnvironment can only be written
inside the job itself and does not meet that requirement.




| |
阿华田
|
|
a15733178...@163.com
|
签名由网易邮箱大师定制


在2024年03月8日 18:23,Zhanghao Chen 写道:
You can look at how OpenLineage integrates with Flink [1]: it registers a
JobListener on the StreamExecutionEnvironment (through which you can get the
JobClient and hence the job id), and then extracts the transformation
information from the execution environment for processing [2].

[1] https://openlineage.io/docs/integrations/flink/
[2] 
https://github.com/OpenLineage/OpenLineage/blob/main/integration/flink/app/src/main/java/io/openlineage/flink/OpenLineageFlinkJobListener.java


Best,
Zhanghao Chen



Re: 回复: Flink DataStream 作业如何获取到作业血缘?

2024-03-08 文章 Zhanghao Chen
You can look at how OpenLineage integrates with Flink [1]: it registers a
JobListener on the StreamExecutionEnvironment (through which you can get the
JobClient and hence the job id), and then extracts the transformation
information from the execution environment for processing [2].

[1] https://openlineage.io/docs/integrations/flink/
[2] 
https://github.com/OpenLineage/OpenLineage/blob/main/integration/flink/app/src/main/java/io/openlineage/flink/OpenLineageFlinkJobListener.java


Best,
Zhanghao Chen

From: 阿华田 
Sent: Friday, March 8, 2024 16:48
To: user-zh@flink.apache.org 
Subject: 回复: Flink DataStream 作业如何获取到作业血缘?



Regarding "the transformation info can be obtained from the JobGraph": can it
really be read from the JobGraph directly? We currently use reflection on
SourceTransformation and SinkTransformation to get the connection info, but we
cannot get the Flink job id there. Can the JobGraph give us both the
source/sink connection info and the Flink job id?
| |
阿华田
|
|
a15733178...@163.com
|
签名由网易邮箱大师定制


在2024年03月8日 16:18,Zhanghao Chen 写道:
The JobGraph has a field that is exactly the jobid.

Best,
Zhanghao Chen



回复: Flink DataStream 作业如何获取到作业血缘?

2024-03-08 文章 阿华田


Regarding "the transformation info can be obtained from the JobGraph": can it
really be read from the JobGraph directly? We currently use reflection on
SourceTransformation and SinkTransformation to get the connection info, but we
cannot get the Flink job id there. Can the JobGraph give us both the
source/sink connection info and the Flink job id?
| |
阿华田
|
|
a15733178...@163.com
|
签名由网易邮箱大师定制


在2024年03月8日 16:18,Zhanghao Chen 写道:
The JobGraph has a field that is exactly the jobid.

Best,
Zhanghao Chen



Re: 回复: Flink DataStream 作业如何获取到作业血缘?

2024-03-08 文章 Zhanghao Chen
The JobGraph has a field that is exactly the jobid.

Best,
Zhanghao Chen

From: 阿华田 
Sent: Friday, March 8, 2024 14:14
To: user-zh@flink.apache.org 
Subject: 回复: Flink DataStream 作业如何获取到作业血缘?

After obtaining the Source or DorisSink info, how do we know which flink job
it belongs to? It seems the flink job id cannot be obtained.


| |
阿华田
|
|
a15733178...@163.com
|
签名由网易邮箱大师定制


在2024年02月26日 20:04,Feng Jin 写道:
From the JobGraph you can obtain the transformation information, including the
concrete Source or Doris Sink, and then extract the properties inside via
reflection.

You can refer to the OpenLineage implementation [1].


1.
https://github.com/OpenLineage/OpenLineage/blob/main/integration/flink/shared/src/main/java/io/openlineage/flink/visitor/wrapper/FlinkKafkaConsumerWrapper.java


Best,
Feng




回复: Flink DataStream 作业如何获取到作业血缘?

2024-03-07 文章 阿华田
After obtaining the Source or DorisSink info, how do we know which flink job
it belongs to? It seems the flink job id cannot be obtained.


| |
阿华田
|
|
a15733178...@163.com
|
签名由网易邮箱大师定制


在2024年02月26日 20:04,Feng Jin 写道:
From the JobGraph you can obtain the transformation information, including the
concrete Source or Doris Sink, and then extract the properties inside via
reflection.

You can refer to the OpenLineage implementation [1].


1.
https://github.com/OpenLineage/OpenLineage/blob/main/integration/flink/shared/src/main/java/io/openlineage/flink/visitor/wrapper/FlinkKafkaConsumerWrapper.java


Best,
Feng




Re:Re: flink sql关联维表在lookup执行计划中的关联条件问题

2024-03-07 文章 iasiuide
Hi, we are using versions 1.13.2 and 1.15.4. Looking at the flink ui, both
versions produce the same dimension-table join conditions in the lookup
execution plan for the sql fragment below.


在 2024-03-08 11:08:51,"Yu Chen"  写道:
>Hi iasiuide,
>Could you share the flink version and the jdbc connector version you are
>using? As far as I know, the jdbc connector fixed the problem of lookup join
>conditions being lost in FLINK-33365 [1].
>
>[1] https://issues.apache.org/jira/browse/FLINK-33365
>
>Best~
>
>> 2024年3月8日 11:02,iasiuide  写道:
>> 
>> 
>> 
>> 
>> The image may not load; here is the sql fragment from it
>> ..
>> END AS trans_type,
>> 
>>  a.div_fee_amt,
>> 
>>  a.ts
>> 
>>FROM
>> 
>>  ods_ymfz_prod_sys_divide_order a
>> 
>>  LEFT JOIN dim_ymfz_prod_sys_trans_log FOR SYSTEM_TIME AS OF a.proc_time 
>> AS b ON a.bg_rel_trans_id = b.bg_rel_trans_id
>> 
>>  AND b.trans_date = DATE_FORMAT (CURRENT_TIMESTAMP, 'yyyyMMdd')
>> 
>>  LEFT JOIN dim_ptfz_ymfz_merchant_info FOR SYSTEM_TIME AS OF a.proc_time 
>> AS c ON b.member_id = c.pk_id
>> 
>>  AND c.data_source = 'merch'
>> 
>>  LEFT JOIN dim_ptfz_ymfz_merchant_info FOR SYSTEM_TIME AS OF a.proc_time 
>> AS d ON c.agent_id = d.pk_id
>> 
>>  AND (
>> 
>>d.data_source = 'ex_agent'
>> 
>>OR d.data_source = 'agent'
>> 
>>  ) 
>> 
>>  LEFT JOIN dim_ptfz_ymfz_merchant_info FOR SYSTEM_TIME AS OF a.proc_time 
>> AS d1 ON d.fagent_id = d1.pk_id
>> 
>>  AND d1.data_source = 'agent'
>> 
>>WHERE 
>> 
>>  a.order_state = '2' 
>> 
>>  AND a.divide_fee_amt > 0
>> 
>>  ) dat
>> 
>> WHERE
>> 
>>  trans_date = DATE_FORMAT (CURRENT_TIMESTAMP, 'yyyy-MM-dd')
>> 
>>  AND CHAR_LENGTH(member_id) > 1;
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 在 2024-03-08 10:54:19,"iasiuide"  写道:
>> 
>> 
>> 
>> 
>> 
Re: flink sql关联维表在lookup执行计划中的关联条件问题

2024-03-07 文章 Yu Chen
Hi iasiuide,
Could you share the flink version and the jdbc connector version you are
using? As far as I know, the jdbc connector fixed the problem of lookup join
conditions being lost in FLINK-33365 [1].

[1] https://issues.apache.org/jira/browse/FLINK-33365

Best~

> 2024年3月8日 11:02,iasiuide  写道:

flink sql关联维表在lookup执行计划中的关联条件问题

2024-03-07 文章 iasiuide




In the sql fragment below:
ods_ymfz_prod_sys_divide_order  is the kafka source table
dim_ymfz_prod_sys_trans_log   is a mysql dimension table
dim_ptfz_ymfz_merchant_info   is a mysql dimension table



The execution plan fragment from the flink web ui is as follows:

 [1]:TableSourceScan(table=[[default_catalog, default_database, 
ods_ymfz_prod_sys_divide_order, watermark=[-(CASE(IS NULL(create_time), 
1970-01-01 00:00:00:TIMESTAMP(3), CAST(create_time AS TIMESTAMP(3))), 
5000:INTERVAL SECOND)]]], fields=[row_kind, id, sys_date, bg_rel_trans_id, 
order_state, create_time, update_time, divide_fee_amt, divide_fee_flag])
+- [2]:Calc(select=[sys_date, bg_rel_trans_id, create_time, IF(SEARCH(row_kind, 
Sarg[_UTF-16LE'-D', _UTF-16LE'-U']), (-1 * divide_fee_amt), divide_fee_amt) AS 
div_fee_amt, Reinterpret(CASE(create_time IS NULL, 1970-01-01 00:00:00, 
CAST(create_time AS TIMESTAMP(3)))) AS ts], where=[((order_state = '2') AND 
(divide_fee_amt > 0) AND (sys_date = DATE_FORMAT(CAST(CURRENT_TIMESTAMP() AS 
TIMESTAMP(9)), 'yyyy-MM-dd')))])
   +- 
[3]:LookupJoin(table=[default_catalog.default_database.dim_ymfz_prod_sys_trans_log],
 joinType=[LeftOuterJoin], async=[false], 
lookup=[bg_rel_trans_id=bg_rel_trans_id], where=[(trans_date = 
DATE_FORMAT(CAST(CURRENT_TIMESTAMP() AS TIMESTAMP(9)), 'yyyyMMdd'))], 
select=[sys_date, bg_rel_trans_id, create_time, div_fee_amt, ts, 
bg_rel_trans_id, pay_type, member_id, mer_name])
  +- [4]:Calc(select=[sys_date, create_time, div_fee_amt, ts, pay_type, 
member_id, mer_name], where=[(CHAR_LENGTH(member_id) > 1)])
 +- 
[5]:LookupJoin(table=[default_catalog.default_database.dim_ptfz_ymfz_merchant_info],
 joinType=[LeftOuterJoin], async=[false], lookup=[data_source=_UTF-16LE'merch', 
pk_id=member_id], where=[(data_source = 'merch')], select=[sys_date, 
create_time, div_fee_amt, ts, pay_type, member_id, mer_name, pk_id, agent_id, 
bagent_id])
+- [6]:Calc(select=[sys_date, create_time, div_fee_amt, ts, 
pay_type, member_id, mer_name, agent_id, bagent_id])
   +- 
[7]:LookupJoin(table=[default_catalog.default_database.dim_ptfz_ymfz_merchant_info],
 joinType=[LeftOuterJoin], async=[false], lookup=[pk_id=agent_id], 
where=[SEARCH(data_source, Sarg[_UTF-16LE'agent', _UTF-16LE'ex_agent'])], 
select=[sys_date, create_time, div_fee_amt, ts, pay_type, member_id, mer_name, 
agent_id, bagent_id, pk_id, bagent_id, fagent_id])
  +- [8]:Calc(select=[sys_date, create_time, div_fee_amt, ts, 
pay_type, member_id, mer_name, bagent_id, bagent_id0, fagent_id AS fagent_id0])
 +- 
[9]:LookupJoin(table=[default_catalog.default_database.dim_ptfz_ymfz_merchant_info],
 joinType=[LeftOuterJoin], async=[false], lookup=[data_source=_UTF-16LE'agent', 
pk_id=fagent_id0], where=[(data_source = 'agent')], select=[sys_date, 
create_time, div_fee_amt, ts, pay_type, member_id, mer_name, bagent_id, 
bagent_id0, fagent_id0, pk_id, agent_name, bagent_name])
  


Why is the condition AND b.trans_date = DATE_FORMAT(CURRENT_TIMESTAMP,
'yyyyMMdd') on the first dimension table dim_ymfz_prod_sys_trans_log not part
of the lookup key in the execution plan ==> lookup=[bg_rel_trans_id=bg_rel_trans_id],
while for the second dimension table dim_ptfz_ymfz_merchant_info both
conditions ON b.member_id = c.pk_id AND c.data_source = 'merch' become lookup
keys ==> lookup=[data_source=_UTF-16LE'merch', pk_id=member_id],
and for the third dimension table dim_ptfz_ymfz_merchant_info, in ON
c.agent_id = d.pk_id AND (d.data_source = 'ex_agent' OR d.data_source =
'agent'), the data_source condition is not a lookup key ==>
lookup=[pk_id=agent_id]?
So some dimension-table join conditions become lookup conditions and some do
not? Is there a rule behind this? It matters for choosing the index columns of
the dimension tables.







Re: Re:RE: RE: flink cdc动态加表不生效

2024-03-07 文章 Hongshun Wang
Hi, casel chan,
The community has implemented dynamic table addition on top of the incremental
snapshot framework (https://github.com/apache/flink-cdc/pull/3024
), and it is expected to be exposed for mongodb and postgres in 3.1. It is not
exposed for Oracle and Sqlserver yet; you can refer to those two connectors in
the community repo, turn the option on, and test and adapt it yourself.
Best,
Hongshun


Re: flink sql作业如何统计端到端延迟

2024-03-04 文章 Shawn Huang
Flink has a built-in end-to-end latency metric; see the documentation [1] and
check whether it helps.

[1]
https://nightlies.apache.org/flink/flink-docs-release-1.18/zh/docs/ops/metrics/#end-to-end-latency-tracking

Best,
Shawn Huang

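For completeness, latency tracking is off by default and is switched on in flink-conf.yaml roughly as below (values are illustrative; note the markers measure in-flight latency inside the Flink job, not the external mysql-to-doris delay, and add some overhead):

```yaml
# emit a latency marker from each source every 30s (0 disables tracking)
metrics.latency.interval: 30000
# aggregate latency per operator (alternatives: single, subtask)
metrics.latency.granularity: operator
```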

casel.chen  于2024年2月21日周三 15:31写道:

> A flink sql job consumes canal json messages (coming from mysql) in kafka,
> does some complex processing and writes the result to doris. How can we
> measure the end-to-end latency of the records in the doris table? The mysql
> table has an update_time column recording when the business record was
> updated. On the doris side we can add an ingest_time column to the table
> schema, so the end-to-end latency can be computed as ingest_time -
> update_time, but that only works offline. Is there a way to measure it in
> real time for convenient real-time monitoring?
>
> I looked at SinkFunction.invoke: its Context parameter does expose the
> current processing time and the event time, but most sinks buffer mini-batches
> before writing, so the difference of those two timestamps does not reflect the
> real latency of the write to the database. Is there an accurate way to get the
> latency?


Re: 根据flink job web url可以获取到JobGraph信息么?

2024-03-03 文章 Zhanghao Chen
To add to Yanquan's answer: what /jobs/:jobid/plan returns is actually the
JobGraph information in JSON form (generated by the JsonPlanGenerator class
and containing most of the commonly used jobgraph fields), which should meet
your needs.

From: casel.chen 
Sent: Saturday, March 2, 2024 14:17
To: user-zh@flink.apache.org 
Subject: 根据flink job web url可以获取到JobGraph信息么?

Can the JobGraph information of a running flink job be obtained through the
web url it exposes?


Re: 根据flink job web url可以获取到JobGraph信息么?

2024-03-01 文章 Yanquan Lv
https://nightlies.apache.org/flink/flink-docs-master/docs/ops/rest_api/#jobs-jobid-plan
Via /jobs/:jobid/plan you can get the ExecutionGraph information; I am not
sure whether it contains everything you need.

casel.chen  于2024年3月2日周六 14:19写道:

> Can the JobGraph information of a running flink job be obtained through the
> web url it exposes?


根据flink job web url可以获取到JobGraph信息么?

2024-03-01 文章 casel.chen
Can the JobGraph information of a running flink job be obtained through the
web url it exposes?

Re: flink cdc底层的debezium是如何注册schema到confluent schema registry的?

2024-02-29 文章 Hang Ruan
Hi, casel.chen.

This part is not really touched by the CDC project: CDC relies on the debezium
engine to read out the change data directly, and does not write to Kafka the
way standalone debezium does.
Consider asking the Debezium community about this; the Debezium developers
should be more familiar with it.

Best,
Hang

casel.chen  于2024年2月29日周四 18:11写道:

> I searched the debezium source code but could not find any place that calls
> the SchemaRegistryClient.register method. How does it register schemas to the
> confluent schema registry?


flink cdc底层的debezium是如何注册schema到confluent schema registry的?

2024-02-29 文章 casel.chen
I searched the debezium source code but could not find any place that calls
the SchemaRegistryClient.register method. How does it register schemas to the
confluent schema registry?

Re: flink重启机制

2024-02-27 文章 Yanquan Lv
The images did not come through. Container scheduling is controlled by yarn,
and yarn prefers nodes that are up; in principle a container should not be
scheduled onto a decommissioned node. Have you confirmed this through the yarn
web ui or `yarn node -list`?

chenyu_opensource  于2024年2月27日周二 18:30写道:

> Hi, a flink job submitted to yarn failed because one node went offline, as
> shown below:
>
> After the retries exceeded the limit, the job failed, as in the figure below:
>
> I would like to ask: with flink's restart mechanism, will the tasks not be
> rescheduled onto containers on new nodes? Why did they stay on the same node
> and cause the whole job to fail? Is this scheduling controlled by yarn or by
> flink's own code? Please also point me to the relevant code if any, thanks.
>
> Looking forward to your reply, thanks!
>

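Related to "the retries exceeded the limit": how many restarts are attempted and how they are paced is governed by the restart strategy, configurable in flink-conf.yaml, for example (illustrative values):

```yaml
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 10
restart-strategy.fixed-delay.delay: 30 s
```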

flink重启机制

2024-02-27 文章 chenyu_opensource
Hi, a flink job submitted to yarn failed because one node went offline, as
shown below:


After the retries exceeded the limit, the job failed, as in the figure below:


I would like to ask: with flink's restart mechanism, will the tasks not be
rescheduled onto containers on new nodes? Why did they stay on the same node
and cause the whole job to fail? Is this scheduling controlled by yarn or by
flink's own code? Please also point me to the relevant code if any, thanks.


Looking forward to your reply, thanks!

Re: Flink DataStream 作业如何获取到作业血缘?

2024-02-26 Post by Feng Jin
From the JobGraph you can obtain the transformation information, locate the concrete Source or Doris
Sink, and then use reflection to extract the properties inside them.

You can refer to the OpenLineage[1] implementation.


1.
https://github.com/OpenLineage/OpenLineage/blob/main/integration/flink/shared/src/main/java/io/openlineage/flink/visitor/wrapper/FlinkKafkaConsumerWrapper.java


Best,
Feng
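If full reflection-based extraction in the OpenLineage style is more than you need at first, one much cruder starting point is to pattern-match the vertex descriptions exposed by the REST plan. The patterns below are illustrative guesses rather than a stable contract, and descriptions alone will not give you connection strings, database names, or table names; for those, reflection on the Transformations is still required:

```python
import re

# Illustrative patterns only: vertex descriptions are free-form text and
# can change between Flink versions and connector releases.
CONNECTOR_HINTS = {
    "mysql-cdc": re.compile(r"mysql", re.IGNORECASE),
    "doris": re.compile(r"doris", re.IGNORECASE),
    "kafka": re.compile(r"kafka", re.IGNORECASE),
}


def guess_connectors(node_descriptions):
    """Return connector names hinted at by the plan's vertex descriptions."""
    found = set()
    for desc in node_descriptions:
        for name, pattern in CONNECTOR_HINTS.items():
            if pattern.search(desc):
                found.add(name)
    return found


if __name__ == "__main__":
    descs = ["Source: MySQL-CDC Source", "Sink: Doris Sink"]
    print(sorted(guess_connectors(descs)))  # -> ['doris', 'mysql-cdc']
```

This only classifies the endpoints; it is a quick sanity check, not a lineage solution.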


On Mon, Feb 26, 2024 at 6:20 PM casel.chen  wrote:

> A Flink DataStream job consumes from MySQL CDC, processes the data, and writes to Apache
> Doris. Is there a way (from the JobGraph/StreamGraph) to obtain the source/sink
> connector information, including connection strings, database names, table names, etc.?


How can a Flink DataStream job obtain its lineage?

2024-02-26 Post by casel.chen
A Flink DataStream job consumes from MySQL CDC, processes the data, and writes to Apache
Doris. Is there a way (from the JobGraph/StreamGraph) to obtain the source/sink connector information, including connection strings, database names, table names, etc.?

Re: A question about the Flink Prometheus connector

2024-02-23 Post by Feng Jin
My understanding is that, following the design in the FLIP, you can implement an initial SinkFunction
that writes to Prometheus based on the Prometheus Remote-Write API v1.0
<https://prometheus.io/docs/concepts/remote_write_spec/>.


Best,
Feng
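Separate from the Remote-Write design the FLIP describes, one stopgap while no official connector exists is to push metrics in the plain text exposition format to a Pushgateway over HTTP. A minimal sketch follows; the gateway URL, job label, and metric names are placeholders, and real code would need error handling, labels, and batching:

```python
import urllib.request


def to_exposition_format(metrics: dict) -> bytes:
    """Render a {name: value} dict as Prometheus text exposition format."""
    lines = [f"{name} {value}" for name, value in sorted(metrics.items())]
    return ("\n".join(lines) + "\n").encode("utf-8")


def push_to_gateway(gateway_url: str, job: str, metrics: dict) -> None:
    """PUT the rendered metrics to the Pushgateway's /metrics/job/<job> endpoint."""
    req = urllib.request.Request(
        f"{gateway_url}/metrics/job/{job}",
        data=to_exposition_format(metrics),
        method="PUT",
    )
    urllib.request.urlopen(req).close()


if __name__ == "__main__":
    body = to_exposition_format({"orders_total": 42, "latency_ms": 3.5})
    print(body.decode())
    # A live push would be:
    #   push_to_gateway("http://localhost:9091", "flink_metrics", {...})
```

In a Flink job this kind of push would sit inside a custom SinkFunction's invoke method, as Feng suggests.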

On Fri, Feb 23, 2024 at 7:36 PM 17610775726 <17610775...@163.com> wrote:

> Hi,
> see the official docs:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/metric_reporters/#prometheuspushgateway
>
>
> Best
> JasonLee
>
>
>  Original message 
> | From | casel.chen |
> | Date | Feb 23, 2024 17:35 |
> | To | user-zh@flink.apache.org |
> | Subject | A question about the Flink Prometheus connector |
> Scenario: using Flink to generate metrics in real time and write them to Prometheus for monitoring and alerting.
> I found the https://github.com/apache/flink-connector-prometheus project online, but it is empty.
> I also found FLIP-312, which is about a Flink Prometheus connector:
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-312%3A+Prometheus+Sink+Connector
> Has the Flink community released an official Prometheus connector?
> If I need to write to Prometheus in real time today, what is the recommended approach? Thanks!


Reply: A question about the Flink Prometheus connector

2024-02-23 Post by 17610775726
Hi,
see the official docs: https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/deployment/metric_reporters/#prometheuspushgateway


Best
JasonLee
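The PushGateway reporter linked above is wired up purely through configuration; a sketch of the relevant flink-conf.yaml entries (the host, port, and job name are placeholders, and option names can differ between Flink versions, so verify them against the docs page linked above):

```yaml
metrics.reporter.promgateway.factory.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporterFactory
metrics.reporter.promgateway.hostUrl: http://localhost:9091
metrics.reporter.promgateway.jobName: flink-metrics
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: false
metrics.reporter.promgateway.interval: 60 SECONDS
```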


 Original message 
| From | casel.chen |
| Date | Feb 23, 2024 17:35 |
| To | user-zh@flink.apache.org |
| Subject | A question about the Flink Prometheus connector |
Scenario: using Flink to generate metrics in real time and write them to Prometheus for monitoring and alerting.
I found the https://github.com/apache/flink-connector-prometheus project online, but it is empty.
I also found FLIP-312, which is about a Flink Prometheus
connector: https://cwiki.apache.org/confluence/display/FLINK/FLIP-312%3A+Prometheus+Sink+Connector
Has the Flink community released an official Prometheus connector?
If I need to write to Prometheus in real time today, what is the recommended approach? Thanks!

A question about the Flink Prometheus connector

2024-02-23 Post by casel.chen
Scenario: using Flink to generate metrics in real time and write them to Prometheus for monitoring and alerting.
I found the https://github.com/apache/flink-connector-prometheus project online, but it is empty.
I also found FLIP-312, which is about a Flink Prometheus
connector: https://cwiki.apache.org/confluence/display/FLINK/FLIP-312%3A+Prometheus+Sink+Connector
Has the Flink community released an official Prometheus connector?
If I need to write to Prometheus in real time today, what is the recommended approach? Thanks!
