Re: Unsubscribe

2024-05-11 Post by Hang Ruan
To unsubscribe from the user-zh@flink.apache.org mailing list, send an email
with any content to user-zh-unsubscr...@flink.apache.org. You can refer to
[1][2] for managing your mailing list subscriptions.

Best,
Hang

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2] https://flink.apache.org/community.html#mailing-lists

爱看书不识字 wrote on Sat, May 11, 2024 at 10:06:

> Unsubscribe


Re: use flink 1.19 JDBC Driver cannot find jdbc connector

2024-05-09 Post by abc15606
I've solved it. You need to register the connector in the gateway's jar, but
this is inconvenient, and I still hope it can be improved.
Sent from my iPhone

> On May 10, 2024, at 11:56, Xuyang wrote:
> 
> Hi, can you print the classloader and verify if the jdbc connector exists in 
> it?
> 
> 
> 
> 
> --
> 
>Best!
>Xuyang
> 
> 
> 
> 
> 
> At 2024-05-09 17:48:33, "McClone" wrote:
>> I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>> the jdbc connector cannot be found, but sql-client works normally.



Re: use flink 1.19 JDBC Driver cannot find jdbc connector

2024-05-09 Post by Xuyang
Hi, can you print the classloader and verify if the jdbc connector exists in it?




--

Best!
Xuyang





At 2024-05-09 17:48:33, "McClone" wrote:
>I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
>the jdbc connector cannot be found, but sql-client works normally.
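Xuyang's classloader check can be sketched without any Flink dependency: the probe below reports whether a given class is visible to a classloader. The Flink connector factory class name mentioned in the comment is an assumption for illustration; the authoritative name is whatever the flink-connector-jdbc jar lists in its META-INF/services entries.

```java
// Minimal sketch of the classloader check suggested above (plain JDK, no
// Flink dependency). Probing for a class without initializing it tells you
// whether the jar providing it is actually on that classloader's classpath.
public class ClassLoaderProbe {

    // Returns true if className can be resolved by the given classloader.
    static boolean isVisible(String className, ClassLoader cl) {
        try {
            Class.forName(className, /* initialize = */ false, cl);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ClassLoader cl = ClassLoaderProbe.class.getClassLoader();
        // A JDK class is always visible; a connector factory class such as
        // "org.apache.flink.connector.jdbc.table.JdbcDynamicTableFactory"
        // (name assumed here for illustration) would be visible only if
        // flink-connector-jdbc is on the classpath of the probing process --
        // for the JDBC Driver case that is the SQL Gateway, not the client.
        System.out.println(isVisible("java.sql.Driver", cl));
    }
}
```

If such a probe prints false inside the gateway for the connector factory class, the jar in flink/lib is not reaching the gateway's classloader, which would match the behavior reported in this thread.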


Are there any companies that provide maintenance and support services for open-source Flink?

2024-05-09 Post by LIU Xiao
As the subject says.


Re: Unsubscribe

2024-05-09 Post by Yunfeng Zhou
Hi,

To unsubscribe, please send an email with any content to
user-zh-unsubscr...@flink.apache.org.

Best,
yunfeng

On Thu, May 9, 2024 at 5:58 PM xpfei0811  wrote:
>
> Unsubscribe
>
>  ---- Original message ----
> | From | wangfengyang |
> | Date | 2024-04-23 18:10 |
> | To | user-zh |
> | Subject | Unsubscribe |
> Unsubscribe


Re: Unsubscribe

2024-05-09 Post by xpfei0811

---- Original message ----
| From | wangfengyang |
| Date | 2024-04-23 18:10 |
| To | user-zh |
| Subject | Unsubscribe |


Re: Unsubscribe

2024-05-09 Post by xpfei0811
Unsubscribe

---- Original message ----
| From | jh...@163.com |
| Date | 2024-04-20 22:01 |
| To | user-zh |
| Subject | Re: Unsubscribe |


jhg22
jh...@163.com

From: 冮雪程
Date: 2024-04-19 18:01
To: user-zh@flink.apache.org
Subject: Unsubscribe




冮雪程
gxc_bigd...@163.com


---- Original message ----
| From | jh...@163.com |
| Date | 2024-04-18 16:17 |
| To | user-zh |
| Subject | Re: Re: Unsubscribe |
Unsubscribe



jh...@163.com

From: 我的意中人是个盖世英雄
Date: 2024-04-18 16:03
To: user-zh
Subject: Re: Unsubscribe
Unsubscribe



---- Original message ----
From: "willluzheng"

use flink 1.19 JDBC Driver cannot find jdbc connector

2024-05-09 Post by McClone
I put flink-connector-jdbc into flink\lib. Using the Flink 1.19 JDBC Driver,
the jdbc connector cannot be found, but sql-client works normally.

Re: Re: java.io.IOException: Could not load the native RocksDB library

2024-05-06 Post by Yanfei Lei
Perhaps you can refer to [1] and re-verify the loading problem.

BTW, it currently looks like some dependent libraries cannot be found.
librocksdbjni-win64.dll was built with VS2022, so you could also try
installing VS2022 locally and retrying.

[1] https://github.com/facebook/rocksdb/issues/2531#issuecomment-313209314

ha.fen...@aisino.com wrote on Tue, May 7, 2024 at 10:22:
>
> IDEA, on Windows 10.
> java.io.IOException: Could not load the native RocksDB library
> at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.ensureRocksDBIsLoaded(EmbeddedRocksDBStateBackend.java:940)
>  ~[flink-statebackend-rocksdb-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.ensureRocksDBIsLoaded(EmbeddedRocksDBStateBackend.java:870)
>  ~[flink-statebackend-rocksdb-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:400)
>  ~[flink-statebackend-rocksdb-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.createKeyedStateBackend(EmbeddedRocksDBStateBackend.java:90)
>  ~[flink-statebackend-rocksdb-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.lambda$keyedStatedBackend$3(StreamTaskStateInitializerImpl.java:393)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:173)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:137)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:399)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:180)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:266)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreStateAndGates(StreamTask.java:799)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$restoreInternal$3(StreamTask.java:753)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:753)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:712)
>  ~[flink-streaming-java-1.19.0.jar:1.19.0]
> at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
>  [flink-runtime-1.19.0.jar:1.19.0]
> at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927) 
> [flink-runtime-1.19.0.jar:1.19.0]
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:751) 
> [flink-runtime-1.19.0.jar:1.19.0]
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) 
> [flink-runtime-1.19.0.jar:1.19.0]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
> Caused by: java.lang.UnsatisfiedLinkError: 
> C:\Users\Administrator\AppData\Local\Temp\minicluster_3997ce9addcd45323f4b8d2891c63181\tm_0\tmp\rocksdb-lib-b92bf66b523726cc074235a82f4c40f1\librocksdbjni-win64.dll:
>  Can't find dependent libraries
> at java.lang.ClassLoader$NativeLibrary.load(Native Method) ~[?:1.8.0_60]
> at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1938) ~[?:1.8.0_60]
> at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1821) ~[?:1.8.0_60]
> at java.lang.Runtime.load0(Runtime.java:809) ~[?:1.8.0_60]
> at java.lang.System.load(System.java:1086) ~[?:1.8.0_60]
> at 
> org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:102)
>  ~[frocksdbjni-6.20.3-ververica-2.0.jar:?]
> at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:82) 
> ~[frocksdbjni-6.20.3-ververica-2.0.jar:?]
> at 
> org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend.ensureRocksDBIsLoaded(EmbeddedRocksDBStateBackend.java:914)
>  ~[flink-statebackend-rocksdb-1.19.0.jar:1.19.0]
> ... 20 more
> 10:18:44,556 WARN  org.apache.flink.runtime.taskmanager.Task  
>   [] - TumblingEventTimeWindows -> Sink: Print to Std. Out (2/4)#0 
> (2500c455c9c458780199da504300da05_90bea66de1c231edf33913ecd54406c1_1_0) 
> switched from INITIALIZING to FAILED with failure cause:
> java.lang.Exception: Exception while creating StreamOperatorStateContext.
> 

Re: java.io.IOException: Could not load the native RocksDB library

2024-05-06 Post by Yanfei Lei
What development environment is this? Windows?
Could you share a more detailed error, e.g. which .dll cannot be found?

ha.fen...@aisino.com wrote on Tue, May 7, 2024 at 09:34:
>
> Configuration config = new Configuration();
> config.set(StateBackendOptions.STATE_BACKEND, "rocksdb");
> config.set(CheckpointingOptions.CHECKPOINT_STORAGE, "filesystem");
> config.set(CheckpointingOptions.CHECKPOINTS_DIRECTORY, "file:\\d:\\cdc");
>
> It runs fine with the Flink 1.17 packages in the development environment.
> With the Flink 1.19 packages it reports:
>
> java.io.IOException: Could not load the native RocksDB library



-- 
Best,
Yanfei
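One detail worth double-checking in the quoted configuration, independent of the native-library problem: "file:\\d:\\cdc" is not a well-formed file URI (URIs use forward slashes, e.g. file:///d:/cdc). A small sketch, using only the JDK, of deriving a proper URI from a local path instead of hand-writing it:

```java
import java.io.File;

// Sketch: build a checkpoint-directory URI from a local path with
// File.toURI() rather than concatenating "file:" and backslashes by hand.
// The resulting string can then be passed to the checkpoint storage option
// (CheckpointingOptions.CHECKPOINTS_DIRECTORY in the quoted code).
public class CheckpointUri {

    static String toFileUri(String localPath) {
        return new File(localPath).toURI().toString();
    }

    public static void main(String[] args) {
        // On Windows, toFileUri("d:\\cdc") yields a URI with forward
        // slashes; on POSIX systems an absolute path works the same way.
        System.out.println(toFileUri("/data/cdc"));
    }
}
```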


Re: Flink sql retract to append

2024-04-30 Post by Zijun Zhao
If you deduplicate ordered by processing time ascending, the result will not
produce retractions, because later timestamps can never be smaller than the
current one. You could try deduplication this way.

On Tue, Apr 30, 2024 at 3:35 PM 焦童  wrote:

> Thanks for the suggestion, but Top-1 also produces retraction messages.
>
> > On Apr 30, 2024, at 15:27, ha.fen...@aisino.com wrote:
> >
> > You can refer to this:
> >
> https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/table/sql/queries/deduplication/
> > I'm not sure whether version 1.11 supports it.
> >
> > From: 焦童
> > Date: 2024-04-30 11:25
> > To: user-zh
> > Subject: Flink sql retract to append
> > Hello,
> > I'm using Flink 1.11 SQL for deduplication (via GROUP BY), but this
> produces a retract stream. The downstream storage does not support retract
> streams, only append. In the DataStream API I can deduplicate with state;
> how can I deduplicate in SQL without producing a retract stream? Thanks.
>
>
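The pattern Zijun Zhao describes is the deduplication query from the linked documentation, keeping the first row per key ordered by processing time ascending; in recent Flink versions the planner recognizes this shape and emits an insert-only stream rather than retractions. A sketch of the query (table and column names are hypothetical; whether the 1.11 planner supports this optimization needs to be checked against the 1.11 docs):

```sql
SELECT id, item, price
FROM (
  SELECT *,
         ROW_NUMBER() OVER (
           PARTITION BY id          -- the deduplication key
           ORDER BY proctime ASC    -- keep the FIRST row per key
         ) AS row_num
  FROM source_table
)
WHERE row_num = 1
```

Ordering ascending on processing time and keeping row 1 means a key's chosen row never changes once emitted, so no retraction is needed; ordering descending (keep last) would require retracting previously emitted rows, which matches the Top-1 behavior reported above.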


Re: Re: Submitting a job from IDEA via CliFrontend fails with java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;

2024-04-30 Post by z_mmG
Hello, following your advice I applied the configuration and restarted the
StandaloneSessionClusterEntrypoint and TaskManagerRunner, and the problem is
solved. Thank you.

At 2024-04-30 15:45:18, "Biao Geng" wrote:
>Hi,
>
>This error is usually caused by mismatched JDK versions. It's recommended to
>use the same Java version for building Flink and for running Flink jobs
>(either JDK 8 for both, or JDK 11 for both).
>For the missing sun.misc problem under JDK 11, try unchecking the Use
>'--release' option for cross-compilation option under IDEA's Settings ->
>Build, Execution and Deployment -> Compiler -> Java Compiler.
>
>Best,
>Biao Geng
>
>
>z_mmG <13520871...@163.com> wrote on Tue, Apr 30, 2024 at 15:08:
>
>>
>> The Flink 1.19 source code was compiled with JDK 11.
>> Because it complained that sun.misc was missing, JDK 8 was used to launch it.
>>
>> Connected to the target VM at '127.0.0.1:8339', transport: 'socket'
>>
>> Job has been submitted with JobID 0975ec264edfd11d236dd190e7708d70
>>
>>
>> 
>>
>>  The program finished with the following exception:
>>
>>
>> org.apache.flink.client.program.ProgramInvocationException: The main
>> method caused an error:
>> org.apache.flink.client.program.ProgramInvocationException: Job failed
>> (JobID: 0975ec264edfd11d236dd190e7708d70)
>>
>> at
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:373)
>>
>> at
>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:223)
>>
>> at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:113)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:1026)
>>
>> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:247)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1270)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$10(CliFrontend.java:1367)
>>
>> at
>> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1367)
>>
>> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1335)
>>
>> Caused by: java.util.concurrent.ExecutionException:
>> org.apache.flink.client.program.ProgramInvocationException: Job failed
>> (JobID: 0975ec264edfd11d236dd190e7708d70)
>>
>> at
>> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>>
>> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>>
>> at
>> org.apache.flink.client.program.StreamContextEnvironment.getJobExecutionResult(StreamContextEnvironment.java:170)
>>
>> at
>> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:121)
>>
>> at
>> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2325)
>>
>> at
>> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2303)
>>
>> at org.apache.flink.streaming.examples.ys.WordCount.main(WordCount.java:34)
>>
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>> at java.lang.reflect.Method.invoke(Method.java:498)
>>
>> at
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:356)
>>
>> ... 9 more
>>
>> Caused by: org.apache.flink.client.program.ProgramInvocationException: Job
>> failed (JobID: 0975ec264edfd11d236dd190e7708d70)
>>
>> at
>> org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$getJobExecutionResult$6(ClusterClientJobClientAdapter.java:130)
>>
>> at
>> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>>
>> at
>> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>>
>> at
>> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>>
>> at
>> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>>
>> at
>> org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$6(FutureUtils.java:302)
>>
>> at
>> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>>
>> at
>> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>>
>> at
>> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>>
>> at
>> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>>
>> at
>> org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$35(RestClusterClient.java:901)
>>
>> at
>> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>>
>> at
>> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>>
>> at
>> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>>
>> at
>> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>>
>> at
>> 

Re: Re: Submitting a job from IDEA via CliFrontend fails with java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;

2024-04-30 Post by z_mmG
Hello, following your advice, both compiling and running now use JDK 11 and
the sun.misc error is gone, but submitting the job still fails with the same
error.







D:\software\jdk-11.0.7\bin\java.exe 
-agentlib:jdwp=transport=dt_socket,address=127.0.0.1:11039,suspend=y,server=n 
-Dlog.file=./log/flink-client.log 
-Dlog4j.configuration=./conf/log4j-cli.properties 
-Dlog4j.configurationFile=./conf/log4j-cli.properties 
-Dlogback.configurationFile=./conf/logback.xml 
-javaagent:C:\Users\10575\AppData\Local\JetBrains\IntelliJIdea2023.2\captureAgent\debugger-agent.jar=file:/C:/Users/10575/AppData/Local/Temp/capture.props
 -Dfile.encoding=UTF-8 -classpath 
"D:\flink\ayslib\log4j-slf4j-impl-2.17.1.jar;D:\flink\ayslib\log4j-core-2.17.1.jar;D:\flink\ayslib\log4j-api-2.17.1.jar;D:\flink\ayslib\log4j-1.2-api-2.17.1.jar;D:\flink\ayslib\flink-table-runtime-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-table-planner-loader-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-table-api-java-uber-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-scala_2.12-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-json-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-dist-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-csv-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-connector-files-1.20-SNAPSHOT.jar;D:\flink\ayslib\flink-cep-1.20-SNAPSHOT.jar;D:\flink\flink-clients\target\classes;D:\flink\flink-core\target\classes;D:\flink\flink-core-api\target\classes;D:\flink\flink-annotations\target\classes;D:\flink\flink-metrics\flink-metrics-core\target\classes;D:\software\apache-maven-3.8.6\repo\org\apache\flink\flink-shaded-asm-9\9.5-17.0\flink-shaded-asm-9-9.5-17.0.jar;D:\software\apache-maven-3.8.6\repo\org\apache\flink\flink-shaded-jackson\2.14.2-17.0\flink-shaded-jackson-2.14.2-17.0.jar;D:\software\apache-maven-3.8.6\repo\org\apache\commons\commons-lang3\3.12.0\commons-lang3-3.12.0.jar;D:\software\apache-maven-3.8.6\repo\org\snakeyaml\snakeyaml-engine\2.6\snakeyaml-engine-2.6.jar;D:\software\apache-maven-3.8.6\repo\org\apache\commons\commons-text\1.10.0\commons-text-1.10.0.jar;D:\software\apache-maven-3.8.6\repo\com\esotericsoftware\kryo\kryo\2.24.0\kryo-2.24.0.jar;D:\software\apache-maven-3.8.6\repo\com\esotericsoftware\minlog\minlog\1.2\minlog-1.2.jar;D:\software\apache-maven-3.8.6\repo\commons-collections\commons-collections\3.2.2\commons-collections-3.2.2.jar;D:\software\apache-maven-3.8.6\repo\org\apache\commons\commons-compress\1.26.0\commons-compress-1.26.0.jar;D:\software\apache-maven-3.8.6\repo\org\apache\flink\flink-shaded-guava\31.1-jre-17.0\flink-shaded-guava-31.1-jre-17.0.jar;D:\flink\flink-runtime\target\classes;D:\flink\flink-rpc\flink-rpc-core\target\classes;D:\flink\flink-rpc\flink-rp
c-akka-loader\target\classes;D:\flink\flink-queryable-state\flink-queryable-state-client-java\target\classes;D:\flink\flink-filesystems\flink-hadoop-fs\target\classes;D:\software\apache-maven-3.8.6\repo\commons-io\commons-io\2.15.1\commons-io-2.15.1.jar;D:\software\apache-maven-3.8.6\repo\org\apache\flink\flink-shaded-netty\4.1.91.Final-17.0\flink-shaded-netty-4.1.91.Final-17.0.jar;D:\software\apache-maven-3.8.6\repo\org\apache\flink\flink-shaded-zookeeper-3\3.7.1-17.0\flink-shaded-zookeeper-3-3.7.1-17.0.jar;D:\software\apache-maven-3.8.6\repo\org\javassist\javassist\3.24.0-GA\javassist-3.24.0-GA.jar;D:\software\apache-maven-3.8.6\repo\org\xerial\snappy\snappy-java\1.1.10.4\snappy-java-1.1.10.4.jar;D:\software\apache-maven-3.8.6\repo\tools\profiler\async-profiler\2.9\async-profiler-2.9.jar;D:\software\apache-maven-3.8.6\repo\org\lz4\lz4-java\1.8.0\lz4-java-1.8.0.jar;D:\software\apache-maven-3.8.6\repo\io\airlift\aircompressor\0.21\aircompressor-0.21.jar;D:\flink\flink-optimizer\target\classes;D:\flink\flink-java\target\classes;D:\software\apache-maven-3.8.6\repo\org\apache\commons\commons-math3\3.6.1\commons-math3-3.6.1.jar;D:\software\apache-maven-3.8.6\repo\com\twitter\chill-java\0.7.6\chill-java-0.7.6.jar;D:\software\apache-maven-3.8.6\repo\commons-cli\commons-cli\1.5.0\commons-cli-1.5.0.jar;D:\flink\flink-streaming-java\target\classes;D:\flink\flink-connectors\flink-file-sink-common\target\classes;D:\flink\flink-connectors\flink-connector-datagen\target\classes;D:\flink\flink-datastream\target\classes;D:\flink\flink-datastream-api\target\classes;D:\software\apache-maven-3.8.6\repo\org\apache\flink\flink-shaded-force-shading\17.0\flink-shaded-force-shading-17.0.jar;D:\software\apache-maven-3.8.6\repo\org\slf4j\slf4j-api\1.7.36\slf4j-api-1.7.36.jar;D:\software\apache-maven-3.8.6\repo\com\google\code\findbugs\jsr305\1.3.9\jsr305-1.3.9.jar;D:\software\apache-maven-3.8.6\repo\org\objenesis\objenesis\2.1\objenesis-2.1.jar;D:\software\JetBrains\IntelliJ
 IDEA 2023.2.5\lib\idea_rt.jar" org.apache.flink.client.cli.CliFrontend run -c 
org.apache.flink.streaming.examples.ys.WordCount ./WordCount.jar

Connected to the target VM at '127.0.0.1:11039', transport: 'socket'

WARNING: An illegal reflective access operation has occurred

WARNING: Illegal reflective access by 
org.apache.flink.streaming.runtime.translators.DataStreamV2SinkTransformationTranslator
 (file:/D:/flink/ayslib/flink-dist-1.20-SNAPSHOT.jar) to field 

Re: Submitting a job from IDEA via CliFrontend fails with java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;

2024-04-30 Post by Biao Geng
Hi,

This error is usually caused by mismatched JDK versions. It's recommended to
use the same Java version for building Flink and for running Flink jobs
(either JDK 8 for both, or JDK 11 for both).
For the missing sun.misc problem under JDK 11, try unchecking the Use
'--release' option for cross-compilation option under IDEA's Settings ->
Build, Execution and Deployment -> Compiler -> Java Compiler.

Best,
Biao Geng


z_mmG <13520871...@163.com> wrote on Tue, Apr 30, 2024 at 15:08:

>
> The Flink 1.19 source code was compiled with JDK 11.
> Because it complained that sun.misc was missing, JDK 8 was used to launch it.
>
> Connected to the target VM at '127.0.0.1:8339', transport: 'socket'
>
> Job has been submitted with JobID 0975ec264edfd11d236dd190e7708d70
>
>
> 
>
>  The program finished with the following exception:
>
>
> org.apache.flink.client.program.ProgramInvocationException: The main
> method caused an error:
> org.apache.flink.client.program.ProgramInvocationException: Job failed
> (JobID: 0975ec264edfd11d236dd190e7708d70)
>
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:373)
>
> at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:223)
>
> at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:113)
>
> at
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:1026)
>
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:247)
>
> at
> org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1270)
>
> at
> org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$10(CliFrontend.java:1367)
>
> at
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>
> at
> org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1367)
>
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1335)
>
> Caused by: java.util.concurrent.ExecutionException:
> org.apache.flink.client.program.ProgramInvocationException: Job failed
> (JobID: 0975ec264edfd11d236dd190e7708d70)
>
> at
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>
> at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>
> at
> org.apache.flink.client.program.StreamContextEnvironment.getJobExecutionResult(StreamContextEnvironment.java:170)
>
> at
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:121)
>
> at
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2325)
>
> at
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2303)
>
> at org.apache.flink.streaming.examples.ys.WordCount.main(WordCount.java:34)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:498)
>
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:356)
>
> ... 9 more
>
> Caused by: org.apache.flink.client.program.ProgramInvocationException: Job
> failed (JobID: 0975ec264edfd11d236dd190e7708d70)
>
> at
> org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$getJobExecutionResult$6(ClusterClientJobClientAdapter.java:130)
>
> at
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>
> at
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>
> at
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>
> at
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>
> at
> org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$6(FutureUtils.java:302)
>
> at
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>
> at
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>
> at
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>
> at
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>
> at
> org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$35(RestClusterClient.java:901)
>
> at
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>
> at
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>
> at
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>
> at
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>
> at
> org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$6(FutureUtils.java:302)
>
> at
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>
> at
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>
> at
> 
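The NoSuchMethodError named in the subject line ('java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;') is a classic symptom of the JDK mismatch Biao Geng describes: since JDK 9, ByteBuffer overrides position(int) with a covariant ByteBuffer return type, and code compiled against JDK 9+ records that signature in the bytecode; a JDK 8 runtime only has Buffer.position(int), so the call fails at runtime. A minimal sketch of the defensive pattern used in libraries that must run on both:

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

// Sketch: calling position(int) through the Buffer supertype records the
// JDK 8-compatible signature Buffer.position(int) in the bytecode, so the
// same class file runs on JDK 8 and on JDK 9+. Compiling buf.position(4)
// directly on JDK 9+ (without --release 8) records
// ByteBuffer.position(int) instead, which throws NoSuchMethodError on a
// JDK 8 runtime.
public class ByteBufferCompat {

    static ByteBuffer positionCompat(ByteBuffer buf, int newPosition) {
        ((Buffer) buf).position(newPosition);  // JDK 8-safe signature
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        positionCompat(buf, 4);
        System.out.println(buf.position());
    }
}
```

The IDEA '--release' option mentioned above interacts with both errors in this thread: with --release 8 enabled, public API signatures resolve against the JDK 8 API (avoiding this NoSuchMethodError), but internal APIs such as sun.misc are hidden from compilation, which is the other error the reporter hit.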

Re: Flink sql retract to append

2024-04-30 Post by 焦童
Thanks for the suggestion, but Top-1 also produces retraction messages.

> On Apr 30, 2024, at 15:27, ha.fen...@aisino.com wrote:
>
> You can refer to this:
> https://nightlies.apache.org/flink/flink-docs-release-1.19/zh/docs/dev/table/sql/queries/deduplication/
> I'm not sure whether version 1.11 supports it.
>
> From: 焦童
> Date: 2024-04-30 11:25
> To: user-zh
> Subject: Flink sql retract to append
> Hello,
> I'm using Flink 1.11 SQL for deduplication (via GROUP BY), but this
> produces a retract stream. The downstream storage does not support retract
> streams, only append. In the DataStream API I can deduplicate with state;
> how can I deduplicate in SQL without producing a retract stream? Thanks.



Submitting a job from IDEA via CliFrontend fails with java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;

2024-04-30 Post by z_mmG

The Flink 1.19 source code was compiled with JDK 11.
Because it complained that sun.misc was missing, JDK 8 was used to launch it.

Connected to the target VM at '127.0.0.1:8339', transport: 'socket'

Job has been submitted with JobID 0975ec264edfd11d236dd190e7708d70






 The program finished with the following exception:




org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: org.apache.flink.client.program.ProgramInvocationException: 
Job failed (JobID: 0975ec264edfd11d236dd190e7708d70)

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:373)

at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:223)

at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:113)

at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:1026)

at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:247)

at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1270)

at 
org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$10(CliFrontend.java:1367)

at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)

at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1367)

at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1335)

Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 
0975ec264edfd11d236dd190e7708d70)

at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)

at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)

at 
org.apache.flink.client.program.StreamContextEnvironment.getJobExecutionResult(StreamContextEnvironment.java:170)

at 
org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:121)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2325)

at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2303)

at org.apache.flink.streaming.examples.ys.WordCount.main(WordCount.java:34)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:356)

... 9 more

Caused by: org.apache.flink.client.program.ProgramInvocationException: Job 
failed (JobID: 0975ec264edfd11d236dd190e7708d70)

at 
org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$getJobExecutionResult$6(ClusterClientJobClientAdapter.java:130)

at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)

at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)

at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

at 
org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$6(FutureUtils.java:302)

at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)

at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)

at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

at 
org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$35(RestClusterClient.java:901)

at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)

at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)

at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

at 
org.apache.flink.util.concurrent.FutureUtils.lambda$retryOperationWithDelay$6(FutureUtils.java:302)

at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)

at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)

at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)

at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)

at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)

at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

at java.lang.Thread.run(Thread.java:748)


Flink sql retract to append

2024-04-29 Post by 焦童
Hello,
I'm using Flink 1.11 SQL for deduplication (via GROUP BY), but this produces
a retract stream. The downstream storage does not support retract streams,
only append. In the DataStream API I can deduplicate with state; how can I
deduplicate in SQL without producing a retract stream? Thanks.

Re: As of Flink 1.18, is there a way to set a uid in the Table API?

2024-04-24 Post by Xuyang
Hi, if an operator is added in the middle or the processing logic is changed,
the topology changes, so the uids derived from the topological order also
change, and restoring from state may fail. See [1] for details.

Currently the Table API does not expose the ability to set custom uids. You
could open a feature request in JIRA [2] and then start a discussion on the
dev mailing list.




[1] 
https://github.com/apache/flink/blob/92eef24d4cc531d6474252ef909fc6d431285dd9/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/ExecNodeBase.java#L243C38-L243C62
[2] https://issues.apache.org/jira/projects/FLINK/issues/



--

Best!
Xuyang





At 2024-04-25 01:18:55, "Guanlin Zhang" wrote:
>Hi Team,
>
>Our business uses Flink MySQL CDC to OpenSearch via the Table API, in the
>form INSERT INTO t1 SELECT * FROM t2.
>
>Since we may add extra operators while the job is running, is there a way to
>keep the state of the previous source and sink operators after restoring
>from a snapshot? I see that in the DataStream API this can be done by
>setting uids. Does the Table API have an equivalent? I saw in Flink
>jira:https://issues.apache.org/jira/browse/FLINK-28861 that
>table.exec.uid.generation=PLAN_ONLY can be set. Under the default
>configuration, will the previous state be preserved when restoring from a
>snapshot after adding a transformation operator or making other changes?
>
>
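For reference, the option discussed in this thread is a regular table configuration option. A sketch of the setting (names and values as described in FLINK-28861; verify against the configuration docs of your Flink version):

```
# Table API uid generation (introduced by FLINK-28861):
#   PLAN_ONLY (default) - generate uids only for pipelines restored from a
#                         compiled plan
#   ALWAYS              - always generate uids
#   DISABLED            - never generate uids
table.exec.uid.generation: PLAN_ONLY
```

The same option can also be set programmatically through the TableEnvironment's TableConfig.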


As of Flink 1.18, is there a way to set a uid in the Table API?

2024-04-24 Post by Guanlin Zhang
Hi Team,

Our business uses Flink MySQL CDC to OpenSearch via the Table API, in the
form INSERT INTO t1 SELECT * FROM t2.

Since we may add extra operators while the job is running, is there a way to
keep the state of the previous source and sink operators after restoring
from a snapshot? I see that in the DataStream API this can be done by
setting uids. Does the Table API have an equivalent? I saw in Flink
jira:https://issues.apache.org/jira/browse/FLINK-28861 that
table.exec.uid.generation=PLAN_ONLY can be set. Under the default
configuration, will the previous state be preserved when restoring from a
snapshot after adding a transformation operator or making other changes?




Re: Processing-time tumbling window fires early

2024-04-23 Post by Xuyang
Hi, I see you are using System.currentTimeMillis(). In a distributed setup,
could this be caused by inconsistent machine clocks across the TaskManagers?




--

Best!
Xuyang





At 2024-04-20 19:04:14, "hhq" <424028...@qq.com.INVALID> wrote:
>I'm using a processing-time tumbling window with a size of 60s, but in the
>window's process function I compare the window end time with the system
>time, and occasionally the system time I read is earlier than the window end
>time (the lead is small, only a few milliseconds, but I'm not sure whether
>this comes from Flink's windowing itself or from my code). I haven't found
>the cause; any help is appreciated.
>
>public static void main(String[] args) throws Exception {
>
>StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>DataStreamSource integerDataStreamSource = env.addSource(new 
> IntegerSource());
>
>integerDataStreamSource
>.keyBy(Integer::intValue)
>.window(TumblingProcessingTimeWindows.of(Time.seconds(60)))
>.process(new IntegerProcessFunction())
>.setParallelism(1);
>
>env.execute();
>}
>
>
>public class IntegerProcessFunction extends ProcessWindowFunctionObject, Integer, TimeWindow> {
>private Logger log;
>@Override
>public void open(Configuration parameters) throws Exception {
>super.open(parameters);
>this.log = Logger.getLogger(IntegerProcessFunction.class);
>}
>
>@Override
>public void process(Integer integer, ProcessWindowFunction<Integer, Object, Integer, TimeWindow>.Context context, Iterable<Integer> elements, 
> Collector<Object> out) throws Exception {
>long currentTimeMillis = System.currentTimeMillis();
>long end = context.window().getEnd();
>
>if (currentTimeMillis < end) {
>log.info("bad");
>} else {
>log.info("good");
>}
>}
>}
>

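补充说明:排查这类“窗口结束时间和系统时间相差几毫秒”的问题时,先弄清窗口边界是如何计算的会有帮助。下面是一个最小示意(用普通 Java 复刻 Flink 滚动窗口分配器中 `TimeWindow.getWindowStartWithOffset` 的边界算术;类名为示意用途,并非 Flink 自带的类):

```java
// Simplified re-implementation of how Flink's tumbling window assigner
// computes window boundaries (mirroring TimeWindow.getWindowStartWithOffset).
// This class is an illustrative sketch, not one of Flink's own classes.
public class TumblingWindowBoundaries {

    /** Start of the window containing {@code timestamp}, for a given offset/size in ms. */
    static long windowStart(long timestamp, long offset, long windowSize) {
        // floor the timestamp to the nearest multiple of windowSize (shifted by offset)
        return timestamp - (timestamp - offset + windowSize) % windowSize;
    }

    /** Exclusive end of the window containing {@code timestamp}. */
    static long windowEnd(long timestamp, long offset, long windowSize) {
        return windowStart(timestamp, offset, windowSize) + windowSize;
    }

    public static void main(String[] args) {
        long size = 60_000L; // 60s tumbling window, as in the thread above
        long t = 1_713_600_123_456L;
        System.out.println(windowStart(t, 0, size) + " " + windowEnd(t, 0, size));
        // The processing-time trigger fires when the wall clock reaches the window end;
        // a System.currentTimeMillis() read inside process() can therefore land within
        // a few ms of the end on either side, depending on timer resolution.
    }
}
```

由此也可以看出,`end` 只是按窗口大小对齐后的时间戳,触发后再读取系统时钟,两者相差几毫秒属于定时器精度范围内的正常现象。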

申请退订邮件

2024-04-21 文章 Steven Shi
退订



处理时间的滚动窗口提前触发

2024-04-20 文章 hhq
我使用了一个基于处理时间的滚动窗口,窗口大小设置为60s,但是我在窗口的处理函数中比较窗口的结束时间和系统时间,偶尔会发现获取到的系统时间早于窗口结束时间(这里的提前量不大,只有几毫秒,但是我不清楚,这是flink窗口本身的原因还是我代码的问题)我没有找到原因,请求帮助

public static void main(String[] args) throws Exception {

StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
DataStreamSource<Integer> integerDataStreamSource = env.addSource(new 
IntegerSource());

integerDataStreamSource
.keyBy(Integer::intValue)
.window(TumblingProcessingTimeWindows.of(Time.seconds(60)))
.process(new IntegerProcessFunction())
.setParallelism(1);

env.execute();
}


public class IntegerProcessFunction extends ProcessWindowFunction<Integer, Object, Integer, TimeWindow> {
private Logger log;
@Override
public void open(Configuration parameters) throws Exception {
super.open(parameters);
this.log = Logger.getLogger(IntegerProcessFunction.class);
}

@Override
public void process(Integer integer, ProcessWindowFunction<Integer, Object, Integer, TimeWindow>.Context context, Iterable<Integer> elements, 
Collector<Object> out) throws Exception {
long currentTimeMillis = System.currentTimeMillis();
long end = context.window().getEnd();

if (currentTimeMillis < end) {
log.info("bad");
} else {
log.info("good");
}
}
}



退订

2024-04-19 文章 冮雪程




| |
冮雪程
|
|
gxc_bigd...@163.com
|


 回复的原邮件 
| 发件人 | jh...@163.com |
| 发送日期 | 2024年04月18日 16:17 |
| 收件人 | user-zh |
| 主题 | Re: 回复:退订 |
退订



jh...@163.com

发件人: 我的意中人是个盖世英雄
发送时间: 2024-04-18 16:03
收件人: user-zh
主题: 回复:退订
退订



---原始邮件---
发件人: "willluzheng"

Re: Flink流批一体应用在实时数仓数据核对场景下有哪些注意事项?

2024-04-18 文章 Yunfeng Zhou
流模式和批模式在watermark和一些算子语义等方面上有一些不同,但没看到Join和Window算子上有什么差异,这方面应该在batch
mode下应该是支持的。具体的两种模式的比较可以看一下这个文档

https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/execution_mode/

On Thu, Apr 18, 2024 at 9:44 AM casel.chen  wrote:
>
> 有人尝试这么实践过么?可以给一些建议么?谢谢!
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> 在 2024-04-15 11:15:34,"casel.chen"  写道:
> >我最近在调研Flink实时数仓数据质量保障,需要定期(每10/20/30分钟)跑批核对实时数仓产生的数据,传统方式是通过spark作业跑批,如Apache
> > DolphinScheduler的数据质量模块。
> >但这种方式的最大缺点是需要使用spark sql重写flink 
> >sql业务逻辑,难以确保二者一致性。所以我在考虑能否使用Flink流批一体特性,复用flink 
> >sql,只需要将数据源从cdc或kafka换成hologres或starrocks表,再新建跑批结果表,最后只需要比较相同时间段内实时结果表和跑批结果表的数据即可。不过有几点疑问:
> >1. 原实时flink sql表定义中包含的watermark, process_time和event_time这些字段可以复用在batch 
> >mode下么?
> >2. 实时双流关联例如interval join和temporal join能够用于batch mode下么?
> >3. 实时流作业中的窗口函数能够复用于batch mode下么?
> >4. 其他需要关注的事项有哪些?


Re:Flink流批一体应用在实时数仓数据核对场景下有哪些注意事项?

2024-04-17 文章 casel.chen
有人尝试这么实践过么?可以给一些建议么?谢谢!

















在 2024-04-15 11:15:34,"casel.chen"  写道:
>我最近在调研Flink实时数仓数据质量保障,需要定期(每10/20/30分钟)跑批核对实时数仓产生的数据,传统方式是通过spark作业跑批,如Apache 
>DolphinScheduler的数据质量模块。
>但这种方式的最大缺点是需要使用spark sql重写flink 
>sql业务逻辑,难以确保二者一致性。所以我在考虑能否使用Flink流批一体特性,复用flink 
>sql,只需要将数据源从cdc或kafka换成hologres或starrocks表,再新建跑批结果表,最后只需要比较相同时间段内实时结果表和跑批结果表的数据即可。不过有几点疑问:
>1. 原实时flink sql表定义中包含的watermark, process_time和event_time这些字段可以复用在batch mode下么?
>2. 实时双流关联例如interval join和temporal join能够用于batch mode下么?
>3. 实时流作业中的窗口函数能够复用于batch mode下么?
>4. 其他需要关注的事项有哪些?


Flink流批一体应用在实时数仓数据核对场景下有哪些注意事项?

2024-04-14 文章 casel.chen
我最近在调研Flink实时数仓数据质量保障,需要定期(每10/20/30分钟)跑批核对实时数仓产生的数据,传统方式是通过spark作业跑批,如Apache 
DolphinScheduler的数据质量模块。
但这种方式的最大缺点是需要使用spark sql重写flink sql业务逻辑,难以确保二者一致性。所以我在考虑能否使用Flink流批一体特性,复用flink 
sql,只需要将数据源从cdc或kafka换成hologres或starrocks表,再新建跑批结果表,最后只需要比较相同时间段内实时结果表和跑批结果表的数据即可。不过有几点疑问:
1. 原实时flink sql表定义中包含的watermark, process_time和event_time这些字段可以复用在batch mode下么?
2. 实时双流关联例如interval join和temporal join能够用于batch mode下么?
3. 实时流作业中的窗口函数能够复用于batch mode下么?
4. 其他需要关注的事项有哪些?

回复:退订

2024-04-14 文章 willluzheng
退订
 回复的原邮件 
| 发件人 | jimandlice |
| 发送日期 | 2024年04月13日 19:50 |
| 收件人 | user-zh |
| 主题 | 退订 |
退订




jimandlice
jimandl...@163.com





退订

2024-04-13 文章 jimandlice
退订




jimandlice
jimandl...@163.com





Re: ProcessWindowFunction中使用per-window state

2024-04-12 文章 gongzhongqiang
你好,

可以通过使用  globalState / windowState 获取之前的状态进行增量计算。

下面这个 demo 可以方便理解:

public class ProcessWindowFunctionDemo {

public static void main(String[] args) throws Exception {

final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
// 使用处理时间
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
// 并行度为1
env.setParallelism(1);
// 设置数据源,一共三个元素
DataStream<Tuple2<String, Integer>> dataStream = env.addSource(new
SourceFunction<Tuple2<String, Integer>>() {
@Override
public void run(SourceContext<Tuple2<String, Integer>> ctx)
throws Exception {
int xxxNum = 0;
int yyyNum = 0;
for (int i = 1; i < Integer.MAX_VALUE; i++) {
// 只有XXX和YYY两种name
String name = (0 == i % 2) ? "XXX" : "YYY";
//更新XXX和YYY元素的总数
if (0 == i % 2) {
xxxNum++;
} else {
yyyNum++;
}
// 使用当前时间作为时间戳
long timeStamp = System.currentTimeMillis();
// 将数据和时间戳打印出来,用来验证数据
System.out.println(String.format("source,%s, %s,XXX
total : %d,YYY total : %d\n",
name,
time(timeStamp),
xxxNum,
yyyNum));
// 发射一个元素,并且戴上了时间戳
ctx.collectWithTimestamp(new Tuple2<>(name, 1), timeStamp);
// 每发射一次就延时1秒
Thread.sleep(1000);
}
}

@Override
public void cancel() {
}
});

// 将数据用5秒的滚动窗口做划分,再用ProcessWindowFunction
SingleOutputStreamOperator<String> mainDataStream = dataStream
// 以Tuple2的f0字段作为key,本例中实际上key只有XXX和YYY两种
.keyBy(value -> value.f0)
// 5秒一次的滚动窗口
.timeWindow(Time.seconds(5))
// 统计每个key当前窗口内的元素数量,然后把key、数量、窗口起止时间整理成字符串发送给下游算子
.process(new ProcessWindowFunction<Tuple2<String, Integer>,
String, String, TimeWindow>() {
// 自定义状态
private ValueState<KeyCount> state;
@Override
public void open(Configuration parameters) throws
Exception {
// 初始化状态,name是myState
state = getRuntimeContext().getState(new
ValueStateDescriptor<>("myState", KeyCount.class));
}

public void clear(Context context){
ValueState<KeyCount> contextWindowValueState =
context.windowState().getState(new ValueStateDescriptor<>("myWindowState",
KeyCount.class));
contextWindowValueState.clear();
}

@Override
public void process(String s, Context context,
Iterable<Tuple2<String, Integer>> iterable,
Collector collector) throws Exception {
// 从backend取得当前单词的myState状态
KeyCount current = state.value();
// 如果myState还从未没有赋值过,就在此初始化
if (current == null) {
current = new KeyCount();
current.key = s;
current.count = 0;
}
int count = 0;
// iterable可以访问该key当前窗口内的所有数据,
// 这里简单处理,只统计了元素数量
for (Tuple2<String, Integer> tuple2 : iterable) {
count++;
}
// 更新当前key的元素总数
current.count += count;
// 更新状态到backend
state.update(current);
System.out.println("getRuntimeContext() == context
:" + (getRuntimeContext() == context));
ValueState<KeyCount> contextWindowValueState =
context.windowState().getState(new ValueStateDescriptor<>("myWindowState",
KeyCount.class));
ValueState<KeyCount> contextGlobalValueState =
context.globalState().getState(new ValueStateDescriptor<>("myGlobalState",
KeyCount.class));
KeyCount windowValue =
contextWindowValueState.value();
if (windowValue == null) {
windowValue = new KeyCount();
windowValue.key = s;
windowValue.count = 0;
}
windowValue.count += count;
contextWindowValueState.update(windowValue);

KeyCount globalValue =
contextGlobalValueState.value();
if (globalValue == null) {
globalValue = new KeyCount();
globalValue.key = s;
globalValue.count = 0;
 

Re:Unable to use Table API in AWS Managed Flink 1.18

2024-04-10 文章 Xuyang
Hi, Perez.
Flink uses SPI to find the jdbc connector on the classpath, and at startup the dir '${FLINK_ROOT}/lib' is added
to the classpath. That is why the exception is thrown on AWS. IMO there are two
ways to solve this.


1. Upload the connector jar to AWS so that the classloader can see it. For how to upload
connector jars, you need to check the relevant AWS documentation.
2. Package the jdbc connector jar into your job jar and submit it again.




--

Best!
Xuyang




At 2024-04-10 17:32:19, "Enrique Alberto Perez Delgado" 
 wrote:

Hi all,


I am using AWS Managed Flink 1.18, where I am getting this error when trying to 
submit my job:


```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option: 'connector'='jdbc' at 
org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
 at 
org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
 at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
 ... 32 more Caused by: org.apache.flink.table.api.ValidationException: Could 
not find any factory for identifier 'jdbc' that implements 
'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
```


I used to get this error when testing locally until I added the 
`flink-connector-jdbc-3.1.2-1.18`.jar to `/opt/flink/lib` in my local docker 
image, which I thought would be provided by AWS. Apparently, it isn’t. Has 
anyone encountered this error before?


I highly appreciate any help you could give me,


Best regards, 


Enrique Perez
Data Engineer
HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
Phone:  +4917625622422











| |
HelloFresh SE, Berlin (Sitz der Gesellschaft) | Vorstände: Dominik S. Richter 
(Vorsitzender), Thomas W. Griesel, Christian Gärtner, Edward Boyes | 
Vorsitzender des Aufsichtsrats: John H. Rittenhouse | Eingetragen beim 
Amtsgericht Charlottenburg, HRB 182382 B | USt-Id Nr.: DE 302210417

CONFIDENTIALITY NOTICE: This message (including any attachments) is 
confidential and may be privileged. It may be read, copied and used only by the 
intended recipient. If you have received it in error please contact the sender 
(by return e-mail) immediately and delete this message. Any unauthorized use or 
dissemination of this message in whole or in parts is strictly prohibited.

Unable to use Table API in AWS Managed Flink 1.18

2024-04-10 文章 Enrique Alberto Perez Delgado
Hi all,

I am using AWS Managed Flink 1.18, where I am getting this error when trying to 
submit my job:

```
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option: 'connector'='jdbc'
at 
org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
at 
org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
at 
org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
... 32 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'jdbc' that implements 
'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
```

I used to get this error when testing locally until I added the 
`flink-connector-jdbc-3.1.2-1.18`.jar to `/opt/flink/lib` in my local docker 
image, which I thought would be provided by AWS. Apparently, it isn’t. Has 
anyone encountered this error before?

I highly appreciate any help you could give me,

Best regards, 

Enrique Perez
Data Engineer
HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
Phone:  +4917625622422









Re: flink 已完成job等一段时间会消失

2024-04-09 文章 gongzhongqiang
你好:

如果想长期保留已完成的任务,推荐使用  History Server :
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#history-server

Best,

Zhongqiang Gong

ha.fen...@aisino.com  于2024年4月9日周二 10:39写道:

> 在WEBUI里面,已完成的任务会在completed jobs里面能够看到,过了一会再进去看数据就没有了,是有什么配置自动删除吗?
>


Re: Re: 采集mysql全量的时候出现oom问题

2024-04-09 文章 gongzhongqiang
可以尝试的解决办法:

   - 调大 JM 内存 (如  Shawn Huang 所说)
   - 调整快照期间批读的大小,以降低 state 大小从而减轻 checkpiont 过程中 JM 内存压力


Best,
Zhongqiang Gong

wyk  于2024年4月9日周二 16:56写道:

>
> 是的,分片比较大,有一万七千多个分片
>
> jm内存目前是2g,我调整到4g之后还是会有这么问题,我在想如果我一直调整jm内存,后面增量的时候内存会有所浪费,在flink官网上找到了flink堆内存的相关参数,但是对这个不太了解,不知道具体该怎么调试合适,麻烦帮忙看一下如下图这些参数调整那个合适呢?
>
> flink官网地址为:
> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/memory/mem_setup_jobmanager/
>
>
>
>
> | Component | Configuration options | Description |
> | JVM Heap | jobmanager.memory.heap.size | JVM Heap memory size for job manager. |
> | Off-heap Memory | jobmanager.memory.off-heap.size | Off-heap memory size for job manager. This option covers all off-heap memory usage including direct and native memory allocation. |
> | JVM metaspace | jobmanager.memory.jvm-metaspace.size | Metaspace size of the Flink JVM process |
> | JVM Overhead | jobmanager.memory.jvm-overhead.min / jobmanager.memory.jvm-overhead.max / jobmanager.memory.jvm-overhead.fraction | Native memory reserved for other JVM overhead: e.g. thread stacks, code cache, garbage collection space etc; it is a capped fractionated component of the total process memory |
>
>
>
>
> 在 2024-04-09 11:28:57,"Shawn Huang"  写道:
>
>
> 从报错信息看,是由于JM的堆内存不够,可以尝试把JM内存调大,一种可能的原因是mysql表全量阶段分片较多,导致SourceEnumerator状态较大。
>
> Best,
> Shawn Huang
>
>
> wyk  于2024年4月8日周一 17:46写道:
>
>>
>>
>> 开发者们好:
>> flink版本1.14.5
>> flink-cdc版本 2.2.0
>>
>>  在使用flink-cdc-mysql采集全量的时候,全量阶段会做checkpoint,但是checkpoint的时候会出现oom问题,这个有什么办法吗?
>>具体报错如附件文本以及下图所示:
>>
>>
>>


Re:Re: 采集mysql全量的时候出现oom问题

2024-04-09 文章 wyk



是的,分片比较大,有一万七千多个分片
jm内存目前是2g,我调整到4g之后还是会有这么问题,我在想如果我一直调整jm内存,后面增量的时候内存会有所浪费,在flink官网上找到了flink堆内存的相关参数,但是对这个不太了解,不知道具体该怎么调试合适,麻烦帮忙看一下如下图这些参数调整那个合适呢?


flink官网地址为: 
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/memory/mem_setup_jobmanager/










|   Component   |   Configuration options   |   Description   |
| JVM Heap | jobmanager.memory.heap.size | JVM Heap memory size for job 
manager. |
| Off-heap Memory | jobmanager.memory.off-heap.size | Off-heap memory size for 
job manager. This option covers all off-heap memory usage including direct and 
native memory allocation. |
| JVM metaspace | jobmanager.memory.jvm-metaspace.size | Metaspace size of the 
Flink JVM process |
| JVM Overhead | jobmanager.memory.jvm-overhead.min / jobmanager.memory.jvm-overhead.max / jobmanager.memory.jvm-overhead.fraction | Native memory reserved for other JVM overhead: e.g. thread stacks, code cache, garbage collection space etc; it is a capped fractionated component of the total process memory |









在 2024-04-09 11:28:57,"Shawn Huang"  写道:

从报错信息看,是由于JM的堆内存不够,可以尝试把JM内存调大,一种可能的原因是mysql表全量阶段分片较多,导致SourceEnumerator状态较大。


Best,
Shawn Huang




wyk  于2024年4月8日周一 17:46写道:





开发者们好:
flink版本1.14.5 
flink-cdc版本 2.2.0
   
在使用flink-cdc-mysql采集全量的时候,全量阶段会做checkpoint,但是checkpoint的时候会出现oom问题,这个有什么办法吗?
   具体报错如附件文本以及下图所示:





Re: 采集mysql全量的时候出现oom问题

2024-04-08 文章 Shawn Huang
从报错信息看,是由于JM的堆内存不够,可以尝试把JM内存调大,一种可能的原因是mysql表全量阶段分片较多,导致SourceEnumerator状态较大。

Best,
Shawn Huang


wyk  于2024年4月8日周一 17:46写道:

>
>
> 开发者们好:
> flink版本1.14.5
> flink-cdc版本 2.2.0
>
>  在使用flink-cdc-mysql采集全量的时候,全量阶段会做checkpoint,但是checkpoint的时候会出现oom问题,这个有什么办法吗?
>具体报错如附件文本以及下图所示:
>
>
>


回复:flink 已完成job等一段时间会消失

2024-04-08 文章 spoon_lz
有一个过期时间的配置
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#jobstore-expiration-time



| |
spoon_lz
|
|
spoon...@126.com
|


 回复的原邮件 
| 发件人 | ha.fen...@aisino.com |
| 发送日期 | 2024年04月9日 10:38 |
| 收件人 | user-zh |
| 主题 | flink 已完成job等一段时间会消失 |
在WEBUI里面,已完成的任务会在completed jobs里面能够看到,过了一会再进去看数据就没有了,是有什么配置自动删除吗?


Re: flink cdc metrics 问题

2024-04-07 文章 Shawn Huang
你好,目前flink cdc没有提供未消费binlog数据条数这样的指标,你可以通过 currentFetchEventTimeLag
这个指标(表示消费到的binlog数据中时间与当前时间延迟)来判断当前消费情况。

[1]
https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/metrics/MySqlSourceReaderMetrics.java

Best,
Shawn Huang


casel.chen  于2024年4月8日周一 12:01写道:

> 请问flink cdc对外有暴露一些监控metrics么?
> 我希望能够监控到使用flink cdc的实时作业当前未消费的binlog数据条数,类似于kafka topic消费积压监控。
> 想通过这个监控防止flink cdc实时作业消费慢而被套圈(最大binlog条数如何获取?)


flink cdc metrics 问题

2024-04-07 文章 casel.chen
请问flink cdc对外有暴露一些监控metrics么?
我希望能够监控到使用flink cdc的实时作业当前未消费的binlog数据条数,类似于kafka topic消费积压监控。
想通过这个监控防止flink cdc实时作业消费慢而被套圈(最大binlog条数如何获取?)

Re: 退订

2024-04-07 文章 Biao Geng
Hi,

If you want to unsubscribe to user-zh mailing list, please send an email
with any content to user-zh-unsubscr...@flink.apache.org
.
退订请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org
.


Best,
Biao Geng


995626544 <995626...@qq.com.invalid> 于2024年4月7日周日 16:06写道:

> 退订
>
>
>
>
> 995626544
> 995626...@qq.com
>
>
>
> 


Re: HBase SQL连接器为啥不支持ARRAY/MAP/ROW类型

2024-04-06 文章 Yunfeng Zhou
应该是由于这些复杂集合在HBase中没有一个直接与之对应的数据类型,所以Flink SQL没有直接支持的。

一种思路是把这些数据类型按照某种格式(比如json)转换成字符串/序列化成byte array,把字符串存到HBase中,读取出来的时候也再解析/反序列化。

On Mon, Apr 1, 2024 at 7:38 PM 王广邦  wrote:
>
> HBase SQL 连接器(flink-connector-hbase_2.11) 为啥不支持数据类型:ARRAY、MAP / MULTISET、ROW 
> 不支持?
> https://nightlies.apache.org/flink/flink-docs-release-1.11/zh/dev/table/connectors/hbase.html
> 另外这3种类型的需求处理思路是什么?
>
>
>
>
> 发自我的iPhone
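顺着上面的思路补一个示意:HBase 的单元格只能存 byte[],所以 MAP 类型的值可以在写入前序列化成 JSON 字符串、读出后再反序列化。下面是一个纯 Java 的最小示意(实际作业中通常会用 Jackson 或 Flink 的 JSON format;这里手写的序列化器和类名仅作演示,并非某个连接器的真实 API):

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the workaround described above: an HBase cell only holds byte[],
// so a MAP value is serialized to a JSON string before writing and parsed back
// after reading. Class and method names here are illustrative only.
public class MapToHBaseBytes {

    /** Serialize a flat String->String map to a JSON object string. */
    static String toJson(Map<String, String> map) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : map.entrySet()) {
            if (!first) sb.append(',');
            sb.append('"').append(e.getKey()).append("\":\"").append(e.getValue()).append('"');
            first = false;
        }
        return sb.append('}').toString();
    }

    /** UTF-8 bytes that could be stored in an HBase qualifier. */
    static byte[] toCellBytes(Map<String, String> map) {
        return toJson(map).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("env", "prod");
        tags.put("dc", "bj");
        // prints {"env":"prod","dc":"bj"}
        System.out.println(new String(toCellBytes(tags), StandardCharsets.UTF_8));
    }
}
```

读取侧做相反的解析即可;代价是 HBase 中无法再对 MAP 内部的 key 做过滤,只能整列取回后在 Flink 里解析。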


Re: 配置hadoop依赖问题

2024-04-01 文章 Biao Geng
Hi fengqi,

“Hadoop is not in the
classpath/dependencies.”报错说明org.apache.hadoop.conf.Configuration和org.apache.hadoop.fs.FileSystem这些hdfs所需的类没有找到。

如果你的系统环境中有hadoop的话,通常是用这种方式来设置classpath:
export HADOOP_CLASSPATH=`hadoop classpath`

如果你的提交方式是提交到本地一个standalone的flink集群的话,可以检查下flink生成的日志文件,里面会打印classpath,可以看下是否有Hadoop相关的class。

Best,
Biao Geng


ha.fen...@aisino.com  于2024年4月2日周二 10:24写道:

> 1、在开发环境下,添加的有hadoop-client依赖,checkpoint时可以访问到hdfs的路径
> 2、flink1.19.0,hadoop3.3.1,jar提交到单机flink系统中,提示如下错误
> Caused by: java.lang.RuntimeException:
> org.apache.flink.runtime.client.JobInitializationException: Could not start
> the JobMaster.
> at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
> at
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
> at
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at
> java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> at
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> at
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> Caused by: org.apache.flink.runtime.client.JobInitializationException:
> Could not start the JobMaster.
> at
> org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
> at
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> at
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> at
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException:
> org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint
> storage at checkpoint coordinator side.
> at
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> at
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> at
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
> ... 3 more
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to create
> checkpoint storage at checkpoint coordinator side.
> at
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.(CheckpointCoordinator.java:364)
> at
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.(CheckpointCoordinator.java:273)
> at
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:503)
> at
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:334)
> at
> org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:173)
> at
> org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:381)
> at
> org.apache.flink.runtime.scheduler.SchedulerBase.(SchedulerBase.java:224)
> at
> org.apache.flink.runtime.scheduler.DefaultScheduler.(DefaultScheduler.java:140)
> at
> org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:162)
> at
> org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121)
> at
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:379)
> at org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:356)
> at
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128)
> at
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100)
> at
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
> at
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> ... 3 more
> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
> Could not find a file system implementation for scheme 'hdfs'. The scheme
> is not directly supported by Flink and no Hadoop file system to support
> this scheme could be loaded. For a full list of supported file systems,
> please see
> https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
> at
> 

Re: 退订

2024-04-01 文章 Biao Geng
Hi,

退订请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org
.


Best,
Biao Geng

CloudFunny  于2024年3月31日周日 22:25写道:

>
>


Re: 退订

2024-04-01 文章 Biao Geng
Hi,

退订请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org
.


Best,
Biao Geng

戴少  于2024年4月1日周一 11:09写道:

> 退订
>
> --
>
> Best Regards,
>
>
>
>
>  回复的原邮件 
> | 发件人 | wangfengyang |
> | 发送日期 | 2024年03月22日 17:28 |
> | 收件人 | user-zh  |
> | 主题 | 退订 |
> 退订


Re: 退订

2024-04-01 文章 Biao Geng
Hi,

退订请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org
.


Best,
Biao Geng


杨东树  于2024年3月31日周日 20:23写道:

> 申请退订邮件通知,谢谢!


Re: 申请退订邮件申请,谢谢

2024-04-01 文章 Biao Geng
Hi,

退订请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org
.


Best,
Biao Geng

 于2024年3月31日周日 22:20写道:

> 申请退订邮件申请,谢谢


退订

2024-04-01 文章 薛礼彬
退订

Re: 回复:退订

2024-03-31 文章 Zhanghao Chen
请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org 地址来取消订阅。你可以参考[1] 来管理你的邮件订阅。

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8

Best,
Zhanghao Chen

From: 戴少 
Sent: Monday, April 1, 2024 11:10
To: user-zh 
Cc: user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com 
;
 user-zh-subscribe ; user-zh 

Subject: 回复:退订

退订

--

Best Regards,




 回复的原邮件 
| 发件人 | 李一飞 |
| 发送日期 | 2024年03月14日 00:09 |
| 收件人 | 
user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com,
user-zh-subscribe ,
user-zh  |
| 主题 | 退订 |
退订




Re: 退订

2024-03-31 文章 Zhanghao Chen
请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org 地址来取消订阅。你可以参考[1] 来管理你的邮件订阅。

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8

Best,
Zhanghao Chen

From: zjw 
Sent: Monday, April 1, 2024 11:05
To: user-zh@flink.apache.org 
Subject: 退订




Re: Re:Re: Re: 1.19自定义数据源

2024-03-31 文章 Zhanghao Chen
请发送任意内容的邮件到 user-zh-unsubscr...@flink.apache.org 地址来取消订阅。你可以参考[1] 来管理你的邮件订阅。

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8

Best,
Zhanghao Chen

From: 熊柱 <18428358...@163.com>
Sent: Monday, April 1, 2024 11:14
To: user-zh@flink.apache.org 
Subject: Re:Re: Re: 1.19自定义数据源

退订

















在 2024-03-28 19:56:06,"Zhanghao Chen"  写道:
>如果是用的 DataStream API 的话,也可以看下新增的 DataGen Connector [1] 是否能直接满足你的测试数据生成需求。
>
>
>[1] 
>https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/datastream/datagen/
>
>Best,
>Zhanghao Chen
>
>From: ha.fen...@aisino.com 
>Sent: Thursday, March 28, 2024 15:34
>To: user-zh 
>Subject: Re: Re: 1.19自定义数据源
>
>我想问的就是如果需要实现Source接口,应该怎么写,有没有具体的例子实现一个按照一定速度生成自定义的类?
>
>发件人: gongzhongqiang
>发送时间: 2024-03-28 15:05
>收件人: user-zh
>主题: Re: 1.19自定义数据源
>你好:
>
>当前 flink 1.19 版本只是标识为过时,在未来版本会移除 SourceFunction。所以对于你的应用而言为了支持长期 flink
>版本考虑,可以将这些SourceFunction用Source重新实现。
>
>ha.fen...@aisino.com  于2024年3月28日周四 14:18写道:
>
>>
>> 原来是继承SourceFunction实现一些简单的自动生成数据的方法,在1.19中已经标识为过期,好像是使用Source接口,这个和原来的SourceFunction完全不同,应该怎么样生成测试使用的自定义数据源呢?
>>


Re:Re: Re: 1.19自定义数据源

2024-03-31 文章 熊柱
退订

















在 2024-03-28 19:56:06,"Zhanghao Chen"  写道:
>如果是用的 DataStream API 的话,也可以看下新增的 DataGen Connector [1] 是否能直接满足你的测试数据生成需求。
>
>
>[1] 
>https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/datastream/datagen/
>
>Best,
>Zhanghao Chen
>
>From: ha.fen...@aisino.com 
>Sent: Thursday, March 28, 2024 15:34
>To: user-zh 
>Subject: Re: Re: 1.19自定义数据源
>
>我想问的就是如果需要实现Source接口,应该怎么写,有没有具体的例子实现一个按照一定速度生成自定义的类?
>
>发件人: gongzhongqiang
>发送时间: 2024-03-28 15:05
>收件人: user-zh
>主题: Re: 1.19自定义数据源
>你好:
>
>当前 flink 1.19 版本只是标识为过时,在未来版本会移除 SourceFunction。所以对于你的应用而言为了支持长期 flink
>版本考虑,可以将这些SourceFunction用Source重新实现。
>
>ha.fen...@aisino.com  于2024年3月28日周四 14:18写道:
>
>>
>> 原来是继承SourceFunction实现一些简单的自动生成数据的方法,在1.19中已经标识为过期,好像是使用Source接口,这个和原来的SourceFunction完全不同,应该怎么样生成测试使用的自定义数据源呢?
>>


回复:退订

2024-03-31 文章 戴少
退订

--

Best Regards,
 


 
 回复的原邮件 
| 发件人 | 李一飞 |
| 发送日期 | 2024年03月14日 00:09 |
| 收件人 | 
user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com,
user-zh-subscribe ,
user-zh  |
| 主题 | 退订 |
退订




回复:退订

2024-03-31 文章 戴少
退订

--

Best Regards,



 回复的原邮件 
| 发件人 | wangfengyang |
| 发送日期 | 2024年03月22日 17:28 |
| 收件人 | user-zh  |
| 主题 | 退订 |


退订

2024-03-31 文章 zjw



退订

2024-03-31 文章 CloudFunny



申请退订邮件申请,谢谢

2024-03-31 文章 wangwj03
申请退订邮件申请,谢谢

退订

2024-03-31 文章 杨东树
申请退订邮件通知,谢谢!

Re: Re: 1.19自定义数据源

2024-03-28 文章 Zhanghao Chen
如果是用的 DataStream API 的话,也可以看下新增的 DataGen Connector [1] 是否能直接满足你的测试数据生成需求。


[1] 
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/datastream/datagen/

Best,
Zhanghao Chen

From: ha.fen...@aisino.com 
Sent: Thursday, March 28, 2024 15:34
To: user-zh 
Subject: Re: Re: 1.19自定义数据源

我想问的就是如果需要实现Source接口,应该怎么写,有没有具体的例子实现一个按照一定速度生成自定义的类?

From: gongzhongqiang
Sent: 2024-03-28 15:05
To: user-zh
Subject: Re: 1.19自定义数据源
Hi,

In Flink 1.19, SourceFunction is only marked as deprecated; it will be removed in a future release. So to keep your application working across future Flink
versions, you can re-implement these SourceFunctions with the new Source interface.

ha.fen...@aisino.com  wrote on Thu, Mar 28, 2024 at 14:18:

>
> Previously I extended SourceFunction to implement some simple sources that automatically generate data. In 1.19 it is marked as deprecated, and it seems the Source interface should now be used, which is completely different from the old SourceFunction. How should I implement a custom data source for generating test data?
>
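[Editor's note] A minimal sketch of the DataGen connector suggested above, using the DataStream API. This assumes Flink 1.19 with flink-connector-datagen on the classpath and a Flink runtime to execute against; the record contents, count, and rate are made-up illustration values, not from the thread:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.connector.source.util.ratelimit.RateLimiterStrategy;
import org.apache.flink.connector.datagen.source.DataGeneratorSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DataGenExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Generate one String per index, capped at 10 records per second.
        DataGeneratorSource<String> source = new DataGeneratorSource<>(
                index -> "event-" + index,          // GeneratorFunction<Long, String>
                1000L,                              // total number of records to emit
                RateLimiterStrategy.perSecond(10),  // emission rate
                Types.STRING);

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "datagen")
           .print();

        env.execute("DataGen example");
    }
}
```

RateLimiterStrategy.perSecond(10) caps emission at roughly ten records per second, which covers the common "generate test data at a fixed rate" case without writing a full Source implementation.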


Re: Re: 1.19自定义数据源

2024-03-28 文章 Shawn Huang
Hi, for how to implement the Source interface you can refer to the following resources:

[1] FLIP-27: Refactor Source Interface - Apache Flink - Apache Software
Foundation

[2] How to integrate with Flink efficiently: Connector / Catalog API core design and community progress (qq.com)



Best,
Shawn Huang


liuchao  wrote on Thu, Mar 28, 2024 at 15:39:

> Find an existing operator that implements the Source interface and use it as a reference.
>
>
> 刘超
> liuchao1...@foxmail.com
>
>
>
> 
>
>
>
>
> --Original message--
> From:
>   "user-zh"
> <
> ha.fen...@aisino.com;
> Sent: Thursday, March 28, 2024, 3:34 PM
> To: "user-zh"
> Subject: Re: Re: 1.19自定义数据源
>
>
>
> What I want to ask is: if I need to implement the Source interface, how should I write it? Is there a concrete example of a source that generates custom records at a given rate?
> 
> From: gongzhongqiang
> Sent: 2024-03-28 15:05
> To: user-zh
> Subject: Re: 1.19自定义数据源
> Hi,
> 
> In Flink 1.19, SourceFunction is only marked as deprecated; it will be removed in a future release. So to keep your application working across future Flink
> versions, you can re-implement these SourceFunctions with the new Source interface.
> 
> ha.fen...@aisino.com  
> 
> 
> Previously I extended SourceFunction to implement some simple sources that automatically generate data. In 1.19 it is marked as deprecated, and it seems the Source interface should now be used, which is completely different from the old SourceFunction. How should I implement a custom data source for generating test data?
> 
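[Editor's note] As a rough orientation for the FLIP-27 design referenced in [1] above: a new-style source is split into a Source factory, a SplitEnumerator (runs on the JobManager and assigns splits), and a SourceReader (runs on the TaskManagers and reads the assigned splits). A bare skeleton might look like this; MySplit, MyEnumState, MySourceReader, and MySplitEnumerator are hypothetical names, and the serializer bodies are elided:

```java
// Skeleton only: shows which FLIP-27 interfaces a custom source implements.
public class MySource implements Source<String, MySplit, MyEnumState> {

    @Override
    public Boundedness getBoundedness() {
        // An endless test-data generator would be CONTINUOUS_UNBOUNDED.
        return Boundedness.CONTINUOUS_UNBOUNDED;
    }

    @Override
    public SourceReader<String, MySplit> createReader(SourceReaderContext ctx) {
        return new MySourceReader(ctx);    // reads records from assigned splits
    }

    @Override
    public SplitEnumerator<MySplit, MyEnumState> createEnumerator(
            SplitEnumeratorContext<MySplit> ctx) {
        return new MySplitEnumerator(ctx); // discovers and assigns splits
    }

    @Override
    public SplitEnumerator<MySplit, MyEnumState> restoreEnumerator(
            SplitEnumeratorContext<MySplit> ctx, MyEnumState state) {
        return new MySplitEnumerator(ctx, state); // resume from a checkpoint
    }

    @Override
    public SimpleVersionedSerializer<MySplit> getSplitSerializer() { /* ... */ }

    @Override
    public SimpleVersionedSerializer<MyEnumState> getEnumeratorCheckpointSerializer() { /* ... */ }
}
```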


Re: 1.19自定义数据源

2024-03-28 文章 gongzhongqiang
Hi,

In Flink 1.19, SourceFunction is only marked as deprecated; it will be removed in a future release. So to keep your application working across future Flink
versions, you can re-implement these SourceFunctions with the new Source interface.

ha.fen...@aisino.com  wrote on Thu, Mar 28, 2024 at 14:18:

>
> Previously I extended SourceFunction to implement some simple sources that automatically generate data. In 1.19 it is marked as deprecated, and it seems the Source interface should now be used, which is completely different from the old SourceFunction. How should I implement a custom data source for generating test data?
>
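[Editor's note] To make the migration advice above concrete, here is a sketch of the familiar deprecated pattern next to its closest new-API replacement, assuming Flink 1.19; the names and values are illustrative. For simple generators, DataGeneratorSource usually avoids implementing the full Source interface:

```java
// Old style, deprecated in 1.19: a SourceFunction emitting one record per second.
public class OldCounterSource implements SourceFunction<Long> {
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long i = 0;
        while (running) {
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(i++);
            }
            Thread.sleep(1000);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

// Closest replacement on the new Source API, without hand-writing a Source:
// generate indices at a fixed rate (place this in your job-setup code).
DataGeneratorSource<Long> newStyle = new DataGeneratorSource<>(
        index -> index,                     // GeneratorFunction<Long, Long>
        Long.MAX_VALUE,                     // effectively unbounded
        RateLimiterStrategy.perSecond(1),   // one record per second
        Types.LONG);
```

The new source is then attached with env.fromSource(newStyle, WatermarkStrategy.noWatermarks(), "counter"), replacing env.addSource(new OldCounterSource()).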


Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.8.0 released

2024-03-25 文章 Rui Fan
Congratulations! Thanks Max for the release and all involved for the great
work!

A gentle reminder to users: the maven artifact has just been released and
will take some time to complete.

Best,
Rui

On Mon, Mar 25, 2024 at 6:35 PM Maximilian Michels  wrote:

> The Apache Flink community is very happy to announce the release of
> the Apache Flink Kubernetes Operator version 1.8.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache
> Flink applications on Kubernetes through all aspects of their
> lifecycle.
>
> Release highlights:
> - Flink Autotuning automatically adjusts TaskManager memory
> - Flink Autoscaling metrics and decision accuracy improved
> - Improve standalone Flink Autoscaling
> - Savepoint trigger nonce for savepoint-based restarts
> - Operator stability improvements for cluster shutdown
>
> Blog post:
> https://flink.apache.org/2024/03/21/apache-flink-kubernetes-operator-1.8.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink Kubernetes Operator can be found at:
>
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>
> Official Docker image for Flink Kubernetes Operator can be found at:
> https://hub.docker.com/r/apache/flink-kubernetes-operator
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12353866=12315522
>
> We would like to thank the Apache Flink community and its contributors
> who made this release possible!
>
> Cheers,
> Max
>


[ANNOUNCE] Apache Flink Kubernetes Operator 1.8.0 released

2024-03-25 文章 Maximilian Michels
The Apache Flink community is very happy to announce the release of
the Apache Flink Kubernetes Operator version 1.8.0.

The Flink Kubernetes Operator allows users to manage their Apache
Flink applications on Kubernetes through all aspects of their
lifecycle.

Release highlights:
- Flink Autotuning automatically adjusts TaskManager memory
- Flink Autoscaling metrics and decision accuracy improved
- Improve standalone Flink Autoscaling
- Savepoint trigger nonce for savepoint-based restarts
- Operator stability improvements for cluster shutdown

Blog post: 
https://flink.apache.org/2024/03/21/apache-flink-kubernetes-operator-1.8.0-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink Kubernetes Operator can be found at:
https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator

Official Docker image for Flink Kubernetes Operator can be found at:
https://hub.docker.com/r/apache/flink-kubernetes-operator

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12353866=12315522

We would like to thank the Apache Flink community and its contributors
who made this release possible!

Cheers,
Max


Re: 退订

2024-03-21 文章 gongzhongqiang
Hi, scott

Please send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe from the
 user-zh@flink.apache.org 
mailing list; you can refer to [1][2]
to manage your mailing list subscription.

Best,
Zhongqiang Gong

[1]
https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
[2] https://flink.apache.org/community.html#mailing-lists

己巳  wrote on Fri, Mar 22, 2024 at 10:21:

> Unsubscribe


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-21 文章 gongzhongqiang
Congratulations! Thanks for the great work!


Best,
Zhongqiang Gong

Leonard Xu  于2024年3月20日周三 21:36写道:

> Hi devs and users,
>
> We are thrilled to announce that the donation of Flink CDC as a
> sub-project of Apache Flink has completed. We invite you to explore the new
> resources available:
>
> - GitHub Repository: https://github.com/apache/flink-cdc
> - Flink CDC Documentation:
> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>
> After Flink community accepted this donation[1], we have completed
> software copyright signing, code repo migration, code cleanup, website
> migration, CI migration and github issues migration etc.
> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng
> Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
> contributions and help during this process!
>
>
> For all previous contributors: The contribution process has slightly
> changed to align with the main Flink project. To report bugs or suggest new
> features, please open tickets
> Apache Jira (https://issues.apache.org/jira).  Note that we will no
> longer accept GitHub issues for these purposes.
>
>
> Welcome to explore the new repository and documentation. Your feedback and
> contributions are invaluable as we continue to improve Flink CDC.
>
> Thanks everyone for your support and happy exploring Flink CDC!
>
> Best,
> Leonard
> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Zakelly Lan
Congratulations!


Best,
Zakelly

On Thu, Mar 21, 2024 at 12:05 PM weijie guo 
wrote:

> Congratulations! Well done.
>
>
> Best regards,
>
> Weijie
>
>
> Feng Jin  于2024年3月21日周四 11:40写道:
>
>> Congratulations!
>>
>>
>> Best,
>> Feng
>>
>>
>> On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:
>>
>> > Congratulations!
>> >
>> > Best,
>> > Ron
>> >
>> > Jark Wu  于2024年3月21日周四 10:46写道:
>> >
>> > > Congratulations and welcome!
>> > >
>> > > Best,
>> > > Jark
>> > >
>> > > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>> > >
>> > > > Congratulations!
>> > > >
>> > > > Best,
>> > > > Rui
>> > > >
>> > > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
>> > > wrote:
>> > > >
>> > > > > Congratulations!
>> > > > >
>> > > > > Best,
>> > > > > Hang
>> > > > >
>> > > > > Lincoln Lee  于2024年3月21日周四 09:54写道:
>> > > > >
>> > > > >>
>> > > > >> Congrats, thanks for the great work!
>> > > > >>
>> > > > >>
>> > > > >> Best,
>> > > > >> Lincoln Lee
>> > > > >>
>> > > > >>
>> > > > >> Peter Huang  于2024年3月20日周三 22:48写道:
>> > > > >>
>> > > > >>> Congratulations
>> > > > >>>
>> > > > >>>
>> > > > >>> Best Regards
>> > > > >>> Peter Huang
>> > > > >>>
>> > > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang > >
>> > > > wrote:
>> > > > >>>
>> > > > 
>> > > >  Congratulations
>> > > > 
>> > > > 
>> > > > 
>> > > >  Best,
>> > > >  Huajie Wang
>> > > > 
>> > > > 
>> > > > 
>> > > >  Leonard Xu  于2024年3月20日周三 21:36写道:
>> > > > 
>> > > > > Hi devs and users,
>> > > > >
>> > > > > We are thrilled to announce that the donation of Flink CDC as
>> a
>> > > > > sub-project of Apache Flink has completed. We invite you to
>> > explore
>> > > > the new
>> > > > > resources available:
>> > > > >
>> > > > > - GitHub Repository: https://github.com/apache/flink-cdc
>> > > > > - Flink CDC Documentation:
>> > > > > https://nightlies.apache.org/flink/flink-cdc-docs-stable
>> > > > >
>> > > > > After Flink community accepted this donation[1], we have
>> > completed
>> > > > > software copyright signing, code repo migration, code cleanup,
>> > > > website
>> > > > > migration, CI migration and github issues migration etc.
>> > > > > Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
>> > > > > Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
>> > > > contributors
>> > > > > for their contributions and help during this process!
>> > > > >
>> > > > >
>> > > > > For all previous contributors: The contribution process has
>> > > slightly
>> > > > > changed to align with the main Flink project. To report bugs
>> or
>> > > > suggest new
>> > > > > features, please open tickets
>> > > > > Apache Jira (https://issues.apache.org/jira).  Note that we
>> will
>> > > no
>> > > > > longer accept GitHub issues for these purposes.
>> > > > >
>> > > > >
>> > > > > Welcome to explore the new repository and documentation. Your
>> > > > feedback
>> > > > > and contributions are invaluable as we continue to improve
>> Flink
>> > > CDC.
>> > > > >
>> > > > > Thanks everyone for your support and happy exploring Flink
>> CDC!
>> > > > >
>> > > > > Best,
>> > > > > Leonard
>> > > > > [1]
>> > > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 weijie guo
Congratulations! Well done.


Best regards,

Weijie


Feng Jin  于2024年3月21日周四 11:40写道:

> Congratulations!
>
>
> Best,
> Feng
>
>
> On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:
>
> > Congratulations!
> >
> > Best,
> > Ron
> >
> > Jark Wu  于2024年3月21日周四 10:46写道:
> >
> > > Congratulations and welcome!
> > >
> > > Best,
> > > Jark
> > >
> > > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Rui
> > > >
> > > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> > > wrote:
> > > >
> > > > > Congratulations!
> > > > >
> > > > > Best,
> > > > > Hang
> > > > >
> > > > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > > > >
> > > > >>
> > > > >> Congrats, thanks for the great work!
> > > > >>
> > > > >>
> > > > >> Best,
> > > > >> Lincoln Lee
> > > > >>
> > > > >>
> > > > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > > > >>
> > > > >>> Congratulations
> > > > >>>
> > > > >>>
> > > > >>> Best Regards
> > > > >>> Peter Huang
> > > > >>>
> > > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > > > wrote:
> > > > >>>
> > > > 
> > > >  Congratulations
> > > > 
> > > > 
> > > > 
> > > >  Best,
> > > >  Huajie Wang
> > > > 
> > > > 
> > > > 
> > > >  Leonard Xu  于2024年3月20日周三 21:36写道:
> > > > 
> > > > > Hi devs and users,
> > > > >
> > > > > We are thrilled to announce that the donation of Flink CDC as a
> > > > > sub-project of Apache Flink has completed. We invite you to
> > explore
> > > > the new
> > > > > resources available:
> > > > >
> > > > > - GitHub Repository: https://github.com/apache/flink-cdc
> > > > > - Flink CDC Documentation:
> > > > > https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > > > >
> > > > > After Flink community accepted this donation[1], we have
> > completed
> > > > > software copyright signing, code repo migration, code cleanup,
> > > > website
> > > > > migration, CI migration and github issues migration etc.
> > > > > Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > > > > Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > > > contributors
> > > > > for their contributions and help during this process!
> > > > >
> > > > >
> > > > > For all previous contributors: The contribution process has
> > > slightly
> > > > > changed to align with the main Flink project. To report bugs or
> > > > suggest new
> > > > > features, please open tickets
> > > > > Apache Jira (https://issues.apache.org/jira).  Note that we
> will
> > > no
> > > > > longer accept GitHub issues for these purposes.
> > > > >
> > > > >
> > > > > Welcome to explore the new repository and documentation. Your
> > > > feedback
> > > > > and contributions are invaluable as we continue to improve
> Flink
> > > CDC.
> > > > >
> > > > > Thanks everyone for your support and happy exploring Flink CDC!
> > > > >
> > > > > Best,
> > > > > Leonard
> > > > > [1]
> > > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Feng Jin
Congratulations!


Best,
Feng


On Thu, Mar 21, 2024 at 11:37 AM Ron liu  wrote:

> Congratulations!
>
> Best,
> Ron
>
> Jark Wu  于2024年3月21日周四 10:46写道:
>
> > Congratulations and welcome!
> >
> > Best,
> > Jark
> >
> > On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Rui
> > >
> > > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> > wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Hang
> > > >
> > > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > > >
> > > >>
> > > >> Congrats, thanks for the great work!
> > > >>
> > > >>
> > > >> Best,
> > > >> Lincoln Lee
> > > >>
> > > >>
> > > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > > >>
> > > >>> Congratulations
> > > >>>
> > > >>>
> > > >>> Best Regards
> > > >>> Peter Huang
> > > >>>
> > > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > > wrote:
> > > >>>
> > > 
> > >  Congratulations
> > > 
> > > 
> > > 
> > >  Best,
> > >  Huajie Wang
> > > 
> > > 
> > > 
> > >  Leonard Xu  于2024年3月20日周三 21:36写道:
> > > 
> > > > Hi devs and users,
> > > >
> > > > We are thrilled to announce that the donation of Flink CDC as a
> > > > sub-project of Apache Flink has completed. We invite you to
> explore
> > > the new
> > > > resources available:
> > > >
> > > > - GitHub Repository: https://github.com/apache/flink-cdc
> > > > - Flink CDC Documentation:
> > > > https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > > >
> > > > After Flink community accepted this donation[1], we have
> completed
> > > > software copyright signing, code repo migration, code cleanup,
> > > website
> > > > migration, CI migration and github issues migration etc.
> > > > Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > > > Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > > contributors
> > > > for their contributions and help during this process!
> > > >
> > > >
> > > > For all previous contributors: The contribution process has
> > slightly
> > > > changed to align with the main Flink project. To report bugs or
> > > suggest new
> > > > features, please open tickets
> > > > Apache Jira (https://issues.apache.org/jira).  Note that we will
> > no
> > > > longer accept GitHub issues for these purposes.
> > > >
> > > >
> > > > Welcome to explore the new repository and documentation. Your
> > > feedback
> > > > and contributions are invaluable as we continue to improve Flink
> > CDC.
> > > >
> > > > Thanks everyone for your support and happy exploring Flink CDC!
> > > >
> > > > Best,
> > > > Leonard
> > > > [1]
> > https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > > >
> > > >
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Ron liu
Congratulations!

Best,
Ron

Jark Wu  于2024年3月21日周四 10:46写道:

> Congratulations and welcome!
>
> Best,
> Jark
>
> On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Congratulations!
> >
> > Best,
> > Rui
> >
> > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Hang
> > >
> > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > >
> > >>
> > >> Congrats, thanks for the great work!
> > >>
> > >>
> > >> Best,
> > >> Lincoln Lee
> > >>
> > >>
> > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > >>
> > >>> Congratulations
> > >>>
> > >>>
> > >>> Best Regards
> > >>> Peter Huang
> > >>>
> > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > wrote:
> > >>>
> > 
> >  Congratulations
> > 
> > 
> > 
> >  Best,
> >  Huajie Wang
> > 
> > 
> > 
> >  Leonard Xu  于2024年3月20日周三 21:36写道:
> > 
> > > Hi devs and users,
> > >
> > > We are thrilled to announce that the donation of Flink CDC as a
> > > sub-project of Apache Flink has completed. We invite you to explore
> > the new
> > > resources available:
> > >
> > > - GitHub Repository: https://github.com/apache/flink-cdc
> > > - Flink CDC Documentation:
> > > https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > >
> > > After Flink community accepted this donation[1], we have completed
> > > software copyright signing, code repo migration, code cleanup,
> > website
> > > migration, CI migration and github issues migration etc.
> > > Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > > Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > contributors
> > > for their contributions and help during this process!
> > >
> > >
> > > For all previous contributors: The contribution process has
> slightly
> > > changed to align with the main Flink project. To report bugs or
> > suggest new
> > > features, please open tickets
> > > Apache Jira (https://issues.apache.org/jira).  Note that we will
> no
> > > longer accept GitHub issues for these purposes.
> > >
> > >
> > > Welcome to explore the new repository and documentation. Your
> > feedback
> > > and contributions are invaluable as we continue to improve Flink
> CDC.
> > >
> > > Thanks everyone for your support and happy exploring Flink CDC!
> > >
> > > Best,
> > > Leonard
> > > [1]
> https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > >
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 shuai xu
Congratulations!


Best!
Xushuai

> 2024年3月21日 10:54,Yanquan Lv  写道:
> 
> Congratulations and  Looking forward to future versions!
> 
> Jark Wu  于2024年3月21日周四 10:47写道:
> 
>> Congratulations and welcome!
>> 
>> Best,
>> Jark
>> 
>> On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>> 
>>> Congratulations!
>>> 
>>> Best,
>>> Rui
>>> 
>>> On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
>> wrote:
>>> 
 Congratulations!
 
 Best,
 Hang
 
 Lincoln Lee  于2024年3月21日周四 09:54写道:
 
> 
> Congrats, thanks for the great work!
> 
> 
> Best,
> Lincoln Lee
> 
> 
> Peter Huang  于2024年3月20日周三 22:48写道:
> 
>> Congratulations
>> 
>> 
>> Best Regards
>> Peter Huang
>> 
>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
>>> wrote:
>> 
>>> 
>>> Congratulations
>>> 
>>> 
>>> 
>>> Best,
>>> Huajie Wang
>>> 
>>> 
>>> 
>>> Leonard Xu  于2024年3月20日周三 21:36写道:
>>> 
 Hi devs and users,
 
 We are thrilled to announce that the donation of Flink CDC as a
 sub-project of Apache Flink has completed. We invite you to explore
>>> the new
 resources available:
 
 - GitHub Repository: https://github.com/apache/flink-cdc
 - Flink CDC Documentation:
 https://nightlies.apache.org/flink/flink-cdc-docs-stable
 
 After Flink community accepted this donation[1], we have completed
 software copyright signing, code repo migration, code cleanup,
>>> website
 migration, CI migration and github issues migration etc.
 Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
 Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
>>> contributors
 for their contributions and help during this process!
 
 
 For all previous contributors: The contribution process has
>> slightly
 changed to align with the main Flink project. To report bugs or
>>> suggest new
 features, please open tickets
 Apache Jira (https://issues.apache.org/jira).  Note that we will
>> no
 longer accept GitHub issues for these purposes.
 
 
 Welcome to explore the new repository and documentation. Your
>>> feedback
 and contributions are invaluable as we continue to improve Flink
>> CDC.
 
 Thanks everyone for your support and happy exploring Flink CDC!
 
 Best,
 Leonard
 [1]
>> https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
 
 
>>> 
>> 



Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Yanquan Lv
Congratulations and  Looking forward to future versions!

Jark Wu  于2024年3月21日周四 10:47写道:

> Congratulations and welcome!
>
> Best,
> Jark
>
> On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:
>
> > Congratulations!
> >
> > Best,
> > Rui
> >
> > On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan 
> wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Hang
> > >
> > > Lincoln Lee  于2024年3月21日周四 09:54写道:
> > >
> > >>
> > >> Congrats, thanks for the great work!
> > >>
> > >>
> > >> Best,
> > >> Lincoln Lee
> > >>
> > >>
> > >> Peter Huang  于2024年3月20日周三 22:48写道:
> > >>
> > >>> Congratulations
> > >>>
> > >>>
> > >>> Best Regards
> > >>> Peter Huang
> > >>>
> > >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> > wrote:
> > >>>
> > 
> >  Congratulations
> > 
> > 
> > 
> >  Best,
> >  Huajie Wang
> > 
> > 
> > 
> >  Leonard Xu  于2024年3月20日周三 21:36写道:
> > 
> > > Hi devs and users,
> > >
> > > We are thrilled to announce that the donation of Flink CDC as a
> > > sub-project of Apache Flink has completed. We invite you to explore
> > the new
> > > resources available:
> > >
> > > - GitHub Repository: https://github.com/apache/flink-cdc
> > > - Flink CDC Documentation:
> > > https://nightlies.apache.org/flink/flink-cdc-docs-stable
> > >
> > > After Flink community accepted this donation[1], we have completed
> > > software copyright signing, code repo migration, code cleanup,
> > website
> > > migration, CI migration and github issues migration etc.
> > > Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > > Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> > contributors
> > > for their contributions and help during this process!
> > >
> > >
> > > For all previous contributors: The contribution process has
> slightly
> > > changed to align with the main Flink project. To report bugs or
> > suggest new
> > > features, please open tickets
> > > Apache Jira (https://issues.apache.org/jira).  Note that we will
> no
> > > longer accept GitHub issues for these purposes.
> > >
> > >
> > > Welcome to explore the new repository and documentation. Your
> > feedback
> > > and contributions are invaluable as we continue to improve Flink
> CDC.
> > >
> > > Thanks everyone for your support and happy exploring Flink CDC!
> > >
> > > Best,
> > > Leonard
> > > [1]
> https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> > >
> > >
> >
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Jark Wu
Congratulations and welcome!

Best,
Jark

On Thu, 21 Mar 2024 at 10:35, Rui Fan <1996fan...@gmail.com> wrote:

> Congratulations!
>
> Best,
> Rui
>
> On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan  wrote:
>
> > Congratulations!
> >
> > Best,
> > Hang
> >
> > Lincoln Lee  于2024年3月21日周四 09:54写道:
> >
> >>
> >> Congrats, thanks for the great work!
> >>
> >>
> >> Best,
> >> Lincoln Lee
> >>
> >>
> >> Peter Huang  于2024年3月20日周三 22:48写道:
> >>
> >>> Congratulations
> >>>
> >>>
> >>> Best Regards
> >>> Peter Huang
> >>>
> >>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang 
> wrote:
> >>>
> 
>  Congratulations
> 
> 
> 
>  Best,
>  Huajie Wang
> 
> 
> 
>  Leonard Xu  于2024年3月20日周三 21:36写道:
> 
> > Hi devs and users,
> >
> > We are thrilled to announce that the donation of Flink CDC as a
> > sub-project of Apache Flink has completed. We invite you to explore
> the new
> > resources available:
> >
> > - GitHub Repository: https://github.com/apache/flink-cdc
> > - Flink CDC Documentation:
> > https://nightlies.apache.org/flink/flink-cdc-docs-stable
> >
> > After Flink community accepted this donation[1], we have completed
> > software copyright signing, code repo migration, code cleanup,
> website
> > migration, CI migration and github issues migration etc.
> > Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> > Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other
> contributors
> > for their contributions and help during this process!
> >
> >
> > For all previous contributors: The contribution process has slightly
> > changed to align with the main Flink project. To report bugs or
> suggest new
> > features, please open tickets
> > Apache Jira (https://issues.apache.org/jira).  Note that we will no
> > longer accept GitHub issues for these purposes.
> >
> >
> > Welcome to explore the new repository and documentation. Your
> feedback
> > and contributions are invaluable as we continue to improve Flink CDC.
> >
> > Thanks everyone for your support and happy exploring Flink CDC!
> >
> > Best,
> > Leonard
> > [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
> >
> >
>


Re:Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Xuyang
Cheers!




--

Best!
Xuyang

在 2024-03-21 10:28:45,"Rui Fan" <1996fan...@gmail.com> 写道:
>Congratulations!
>
>Best,
>Rui
>
>On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan  wrote:
>
>> Congratulations!
>>
>> Best,
>> Hang
>>
>> Lincoln Lee  于2024年3月21日周四 09:54写道:
>>
>>>
>>> Congrats, thanks for the great work!
>>>
>>>
>>> Best,
>>> Lincoln Lee
>>>
>>>
>>> Peter Huang  于2024年3月20日周三 22:48写道:
>>>
 Congratulations


 Best Regards
 Peter Huang

 On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:

>
> Congratulations
>
>
>
> Best,
> Huajie Wang
>
>
>
> Leonard Xu  于2024年3月20日周三 21:36写道:
>
>> Hi devs and users,
>>
>> We are thrilled to announce that the donation of Flink CDC as a
>> sub-project of Apache Flink has completed. We invite you to explore the 
>> new
>> resources available:
>>
>> - GitHub Repository: https://github.com/apache/flink-cdc
>> - Flink CDC Documentation:
>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>
>> After Flink community accepted this donation[1], we have completed
>> software copyright signing, code repo migration, code cleanup, website
>> migration, CI migration and github issues migration etc.
>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
>> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other 
>> contributors
>> for their contributions and help during this process!
>>
>>
>> For all previous contributors: The contribution process has slightly
>> changed to align with the main Flink project. To report bugs or suggest 
>> new
>> features, please open tickets
>> Apache Jira (https://issues.apache.org/jira).  Note that we will no
>> longer accept GitHub issues for these purposes.
>>
>>
>> Welcome to explore the new repository and documentation. Your feedback
>> and contributions are invaluable as we continue to improve Flink CDC.
>>
>> Thanks everyone for your support and happy exploring Flink CDC!
>>
>> Best,
>> Leonard
>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>
>>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Rui Fan
Congratulations!

Best,
Rui

On Thu, Mar 21, 2024 at 10:25 AM Hang Ruan  wrote:

> Congratulations!
>
> Best,
> Hang
>
> Lincoln Lee  于2024年3月21日周四 09:54写道:
>
>>
>> Congrats, thanks for the great work!
>>
>>
>> Best,
>> Lincoln Lee
>>
>>
>> Peter Huang  于2024年3月20日周三 22:48写道:
>>
>>> Congratulations
>>>
>>>
>>> Best Regards
>>> Peter Huang
>>>
>>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>>>

 Congratulations



 Best,
 Huajie Wang



 Leonard Xu  于2024年3月20日周三 21:36写道:

> Hi devs and users,
>
> We are thrilled to announce that the donation of Flink CDC as a
> sub-project of Apache Flink has completed. We invite you to explore the 
> new
> resources available:
>
> - GitHub Repository: https://github.com/apache/flink-cdc
> - Flink CDC Documentation:
> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>
> After Flink community accepted this donation[1], we have completed
> software copyright signing, code repo migration, code cleanup, website
> migration, CI migration and github issues migration etc.
> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
> Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors
> for their contributions and help during this process!
>
>
> For all previous contributors: The contribution process has slightly
> changed to align with the main Flink project. To report bugs or suggest 
> new
> features, please open tickets
> Apache Jira (https://issues.apache.org/jira).  Note that we will no
> longer accept GitHub issues for these purposes.
>
>
> Welcome to explore the new repository and documentation. Your feedback
> and contributions are invaluable as we continue to improve Flink CDC.
>
> Thanks everyone for your support and happy exploring Flink CDC!
>
> Best,
> Leonard
> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>
>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Hang Ruan
Congratulations!

Best,
Hang

Lincoln Lee  于2024年3月21日周四 09:54写道:

>
> Congrats, thanks for the great work!
>
>
> Best,
> Lincoln Lee
>
>
> Peter Huang  于2024年3月20日周三 22:48写道:
>
>> Congratulations
>>
>>
>> Best Regards
>> Peter Huang
>>
>> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>>
>>>
>>> Congratulations
>>>
>>>
>>>
>>> Best,
>>> Huajie Wang
>>>
>>>
>>>
>>> Leonard Xu  于2024年3月20日周三 21:36写道:
>>>
 Hi devs and users,

 We are thrilled to announce that the donation of Flink CDC as a
 sub-project of Apache Flink has completed. We invite you to explore the new
 resources available:

 - GitHub Repository: https://github.com/apache/flink-cdc
 - Flink CDC Documentation:
 https://nightlies.apache.org/flink/flink-cdc-docs-stable

 After Flink community accepted this donation[1], we have completed
 software copyright signing, code repo migration, code cleanup, website
 migration, CI migration and github issues migration etc.
 Here I am particularly grateful to Hang Ruan, Zhongqiang Gong,
 Qingsheng Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors
 for their contributions and help during this process!


 For all previous contributors: The contribution process has slightly
 changed to align with the main Flink project. To report bugs or suggest new
 features, please open tickets
 Apache Jira (https://issues.apache.org/jira).  Note that we will no
 longer accept GitHub issues for these purposes.


 Welcome to explore the new repository and documentation. Your feedback
 and contributions are invaluable as we continue to improve Flink CDC.

 Thanks everyone for your support and happy exploring Flink CDC!

 Best,
 Leonard
 [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob




Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 文章 Lincoln Lee
Congrats, thanks for the great work!


Best,
Lincoln Lee


Peter Huang  于2024年3月20日周三 22:48写道:

> Congratulations
>
>
> Best Regards
> Peter Huang
>
> On Wed, Mar 20, 2024 at 6:56 AM Huajie Wang  wrote:
>
>>
>> Congratulations
>>
>>
>>
>> Best,
>> Huajie Wang
>>
>>
>>
>> Leonard Xu  于2024年3月20日周三 21:36写道:
>>
>>> Hi devs and users,
>>>
>>> We are thrilled to announce that the donation of Flink CDC as a
>>> sub-project of Apache Flink has completed. We invite you to explore the new
>>> resources available:
>>>
>>> - GitHub Repository: https://github.com/apache/flink-cdc
>>> - Flink CDC Documentation:
>>> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>>>
>>> After Flink community accepted this donation[1], we have completed
>>> software copyright signing, code repo migration, code cleanup, website
>>> migration, CI migration and github issues migration etc.
>>> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng
>>> Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
>>> contributions and help during this process!
>>>
>>>
>>> For all previous contributors: The contribution process has slightly
>>> changed to align with the main Flink project. To report bugs or suggest new
>>> features, please open tickets in
>>> Apache Jira (https://issues.apache.org/jira). Note that we will no
>>> longer accept GitHub issues for these purposes.
>>>
>>>
>>> Welcome to explore the new repository and documentation. Your feedback
>>> and contributions are invaluable as we continue to improve Flink CDC.
>>>
>>> Thanks everyone for your support and happy exploring Flink CDC!
>>>
>>> Best,
>>> Leonard
>>> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>>>
>>>


Re: [ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Huajie Wang
Congratulations



Best,
Huajie Wang



Leonard Xu  wrote on Wed, Mar 20, 2024 at 21:36:

> Hi devs and users,
>
> We are thrilled to announce that the donation of Flink CDC as a
> sub-project of Apache Flink has completed. We invite you to explore the new
> resources available:
>
> - GitHub Repository: https://github.com/apache/flink-cdc
> - Flink CDC Documentation:
> https://nightlies.apache.org/flink/flink-cdc-docs-stable
>
> After the Flink community accepted this donation[1], we have completed
> software copyright signing, code repo migration, code cleanup, website
> migration, CI migration and github issues migration etc.
> Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng
> Ren, Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
> contributions and help during this process!
>
>
> For all previous contributors: The contribution process has slightly
> changed to align with the main Flink project. To report bugs or suggest new
> features, please open tickets in
> Apache Jira (https://issues.apache.org/jira). Note that we will no
> longer accept GitHub issues for these purposes.
>
>
> Welcome to explore the new repository and documentation. Your feedback and
> contributions are invaluable as we continue to improve Flink CDC.
>
> Thanks everyone for your support and happy exploring Flink CDC!
>
> Best,
> Leonard
> [1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob
>
>


[ANNOUNCE] Donation Flink CDC into Apache Flink has Completed

2024-03-20 by Leonard Xu
Hi devs and users,

We are thrilled to announce that the donation of Flink CDC as a sub-project of 
Apache Flink has completed. We invite you to explore the new resources 
available:

- GitHub Repository: https://github.com/apache/flink-cdc
- Flink CDC Documentation: 
https://nightlies.apache.org/flink/flink-cdc-docs-stable

After the Flink community accepted this donation[1], we have completed software
copyright signing, code repo migration, code cleanup, website migration, CI
migration, GitHub issues migration, etc.
Here I am particularly grateful to Hang Ruan, Zhongqiang Gong, Qingsheng Ren,
Jiabao Sun, LvYanquan, loserwang1024 and other contributors for their
contributions and help during this process!


For all previous contributors: The contribution process has slightly changed to 
align with the main Flink project. To report bugs or suggest new features, 
please open tickets in
Apache Jira (https://issues.apache.org/jira). Note that we will no longer
accept GitHub issues for these purposes.


Welcome to explore the new repository and documentation. Your feedback and 
contributions are invaluable as we continue to improve Flink CDC.

Thanks everyone for your support and happy exploring Flink CDC!

Best,
Leonard
[1] https://lists.apache.org/thread/cw29fhsp99243yfo95xrkw82s5s418ob



Re: flink operator HA job intermittently fails with "unable to update ConfigMapLock"

2024-03-20 by Yang Wang
This is usually caused by a transient problem on the APIServer side that makes a single ConfigMap renew-lease-annotation operation fail; Flink retries it by default.

If you find that this SocketTimeoutException is actually causing the job to fail over, you can increase the following two parameters:
high-availability.kubernetes.leader-election.lease-duration: 60s
high-availability.kubernetes.leader-election.renew-deadline: 60s
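
For reference, these keys go into the Flink configuration (flink-conf.yaml). A minimal sketch, assuming Kubernetes HA is already enabled for the deployment; the retry-period line and the comments are illustrative additions, not part of the reply above:

```yaml
# Kubernetes HA leader-election tuning (flink-conf.yaml) -- illustrative values.
# Longer lease/renew windows let the leader ride out transient APIServer
# slowness (such as the SocketTimeoutException above) without losing the lease.
high-availability.kubernetes.leader-election.lease-duration: 60s
high-availability.kubernetes.leader-election.renew-deadline: 60s
# Interval between renew attempts while holding the lease.
high-availability.kubernetes.leader-election.retry-period: 5s
```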


Best,
Yang

On Tue, Mar 12, 2024 at 11:38 AM kellygeorg...@163.com <
kellygeorg...@163.com> wrote:

> Could any expert give some pointers??? Waiting online.
>
>
>
>  Original message 
> | From | kellygeorg...@163.com |
> | Date | 2024-03-11 20:29 |
> | To | user-zh |
> | Cc | |
> | Subject | flink operator HA job intermittently fails with "unable to update ConfigMapLock" |
> The jobmanager error is shown below; what could be the cause?
> Exception occurred while renewing lock: Unable to update ConfigMapLock
>
> Caused by: io.fabric8.kubernetes.client.KubernetesClientException:
> Operation: [replace] for kind: [ConfigMap] with name: [flink task
> xx-configmap] in namespace: [default]
>
>
> Caused by: java.net.SocketTimeoutException: timeout
>
>
>
>
>
>
>


Unsubscribe (退订)

2024-03-19 by 1
Unsubscribe

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yu Li
Congrats and thanks all for the efforts!

Best Regards,
Yu

On Tue, 19 Mar 2024 at 11:51, gongzhongqiang  wrote:
>
> Congrats! Thanks to everyone involved!
>
> Best,
> Zhongqiang Gong
>
Lincoln Lee  wrote on Mon, Mar 18, 2024 at 16:27:
>>
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>>
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the improvements
>> for this bugfix release:
>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>>
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>>
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>>
>>
>> Best,
>> Yun, Jing, Martijn and Lincoln


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by gongzhongqiang
Congrats! Thanks to everyone involved!

Best,
Zhongqiang Gong

Lincoln Lee  wrote on Mon, Mar 18, 2024 at 16:27:

> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements
> for this bugfix release:
>
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
>
> Best,
> Yun, Jing, Martijn and Lincoln
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Weihua Hu
Congratulations

Best,
Weihua


On Tue, Mar 19, 2024 at 10:56 AM Rodrigo Meneses  wrote:

> Congratulations
>
> On Mon, Mar 18, 2024 at 7:43 PM Yu Chen  wrote:
>
> > Congratulations!
> > Thanks to release managers and everyone involved!
> >
> > Best,
> > Yu Chen
> >
> >
> > > 2024年3月19日 01:01,Jeyhun Karimov  写道:
> > >
> > > Congrats!
> > > Thanks to release managers and everyone involved.
> > >
> > > Regards,
> > > Jeyhun
> > >
> > > On Mon, Mar 18, 2024 at 9:25 AM Lincoln Lee 
> > wrote:
> > >
> > >> The Apache Flink community is very happy to announce the release of
> > Apache
> > >> Flink 1.19.0, which is the first release for the Apache Flink 1.19
> > series.
> > >>
> > >> Apache Flink® is an open-source stream processing framework for
> > >> distributed, high-performing, always-available, and accurate data
> > streaming
> > >> applications.
> > >>
> > >> The release is available for download at:
> > >> https://flink.apache.org/downloads.html
> > >>
> > >> Please check out the release blog post for an overview of the
> > improvements
> > >> for this bugfix release:
> > >>
> > >>
> >
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > >>
> > >> The full release notes are available in Jira:
> > >>
> > >>
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> > >>
> > >> We would like to thank all contributors of the Apache Flink community
> > who
> > >> made this release possible!
> > >>
> > >>
> > >> Best,
> > >> Yun, Jing, Martijn and Lincoln
> > >>
> >
> >
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yu Chen
Congratulations!
Thanks to release managers and everyone involved!

Best,
Yu Chen
 

> 2024年3月19日 01:01,Jeyhun Karimov  写道:
> 
> Congrats!
> Thanks to release managers and everyone involved.
> 
> Regards,
> Jeyhun
> 
> On Mon, Mar 18, 2024 at 9:25 AM Lincoln Lee  wrote:
> 
>> The Apache Flink community is very happy to announce the release of Apache
>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>> 
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>> 
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>> 
>> Please check out the release blog post for an overview of the improvements
>> for this bugfix release:
>> 
>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>> 
>> The full release notes are available in Jira:
>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>> 
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>> 
>> 
>> Best,
>> Yun, Jing, Martijn and Lincoln
>> 



Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Ron liu
Congratulations

Best,
Ron

Yanfei Lei  wrote on Mon, Mar 18, 2024 at 20:01:

> Congrats, thanks for the great work!
>
> Sergey Nuyanzin  于2024年3月18日周一 19:30写道:
> >
> > Congratulations, thanks release managers and everyone involved for the
> great work!
> >
> > On Mon, Mar 18, 2024 at 12:15 PM Benchao Li 
> wrote:
> >>
> >> Congratulations! And thanks to all release managers and everyone
> >> involved in this release!
> >>
> >> Yubin Li  于2024年3月18日周一 18:11写道:
> >> >
> >> > Congratulations!
> >> >
> >> > Thanks to release managers and everyone involved.
> >> >
> >> > On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu 
> wrote:
> >> > >
> >> > > Congratulations!
> >> > > Thanks release managers and all involved!
> >> > >
> >> > > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan 
> wrote:
> >> > >
> >> > > > Congratulations!
> >> > > >
> >> > > > Best,
> >> > > > Hang
> >> > > >
> >> > > > Paul Lam  于2024年3月18日周一 17:18写道:
> >> > > >
> >> > > > > Congrats! Thanks to everyone involved!
> >> > > > >
> >> > > > > Best,
> >> > > > > Paul Lam
> >> > > > >
> >> > > > > > 2024年3月18日 16:37,Samrat Deb  写道:
> >> > > > > >
> >> > > > > > Congratulations !
> >> > > > > >
> >> > > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li <
> jingsongl...@gmail.com>
> >> > > > > wrote:
> >> > > > > >
> >> > > > > >> Congratulations!
> >> > > > > >>
> >> > > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <
> 1996fan...@gmail.com> wrote:
> >> > > > > >>>
> >> > > > > >>> Congratulations, thanks for the great work!
> >> > > > > >>>
> >> > > > > >>> Best,
> >> > > > > >>> Rui
> >> > > > > >>>
> >> > > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee <
> lincoln.8...@gmail.com>
> >> > > > > >> wrote:
> >> > > > > 
> >> > > > >  The Apache Flink community is very happy to announce the
> release of
> >> > > > > >> Apache Flink 1.19.0, which is the first release for the
> Apache Flink
> >> > > > > 1.19
> >> > > > > >> series.
> >> > > > > 
> >> > > > >  Apache Flink® is an open-source stream processing
> framework for
> >> > > > > >> distributed, high-performing, always-available, and accurate
> data
> >> > > > > streaming
> >> > > > > >> applications.
> >> > > > > 
> >> > > > >  The release is available for download at:
> >> > > > >  https://flink.apache.org/downloads.html
> >> > > > > 
> >> > > > >  Please check out the release blog post for an overview of
> the
> >> > > > > >> improvements for this bugfix release:
> >> > > > > 
> >> > > > > >>
> >> > > > >
> >> > > >
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> >> > > > > 
> >> > > > >  The full release notes are available in Jira:
> >> > > > > 
> >> > > > > >>
> >> > > > >
> >> > > >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> >> > > > > 
> >> > > > >  We would like to thank all contributors of the Apache Flink
> >> > > > community
> >> > > > > >> who made this release possible!
> >> > > > > 
> >> > > > > 
> >> > > > >  Best,
> >> > > > >  Yun, Jing, Martijn and Lincoln
> >> > > > > >>
> >> > > > >
> >> > > > >
> >> > > >
> >> > >
> >> > >
> >> > > --
> >> > > Best,
> >> > > Hangxiang.
> >>
> >>
> >>
> >> --
> >>
> >> Best,
> >> Benchao Li
> >
> >
> >
> > --
> > Best regards,
> > Sergey
>
>
>
> --
> Best,
> Yanfei
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yanfei Lei
Congrats, thanks for the great work!

Sergey Nuyanzin  wrote on Mon, Mar 18, 2024 at 19:30:
>
> Congratulations, thanks release managers and everyone involved for the great 
> work!
>
> On Mon, Mar 18, 2024 at 12:15 PM Benchao Li  wrote:
>>
>> Congratulations! And thanks to all release managers and everyone
>> involved in this release!
>>
>> Yubin Li  于2024年3月18日周一 18:11写道:
>> >
>> > Congratulations!
>> >
>> > Thanks to release managers and everyone involved.
>> >
>> > On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu  wrote:
>> > >
>> > > Congratulations!
>> > > Thanks release managers and all involved!
>> > >
>> > > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan  wrote:
>> > >
>> > > > Congratulations!
>> > > >
>> > > > Best,
>> > > > Hang
>> > > >
>> > > > Paul Lam  于2024年3月18日周一 17:18写道:
>> > > >
>> > > > > Congrats! Thanks to everyone involved!
>> > > > >
>> > > > > Best,
>> > > > > Paul Lam
>> > > > >
>> > > > > > 2024年3月18日 16:37,Samrat Deb  写道:
>> > > > > >
>> > > > > > Congratulations !
>> > > > > >
>> > > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li 
>> > > > > > 
>> > > > > wrote:
>> > > > > >
>> > > > > >> Congratulations!
>> > > > > >>
>> > > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> 
>> > > > > >> wrote:
>> > > > > >>>
>> > > > > >>> Congratulations, thanks for the great work!
>> > > > > >>>
>> > > > > >>> Best,
>> > > > > >>> Rui
>> > > > > >>>
>> > > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
>> > > > > >>> 
>> > > > > >> wrote:
>> > > > > 
>> > > > >  The Apache Flink community is very happy to announce the 
>> > > > >  release of
>> > > > > >> Apache Flink 1.19.0, which is the first release for the Apache
>> > > > > >> Flink
>> > > > > 1.19
>> > > > > >> series.
>> > > > > 
>> > > > >  Apache Flink® is an open-source stream processing framework for
>> > > > > >> distributed, high-performing, always-available, and accurate data
>> > > > > streaming
>> > > > > >> applications.
>> > > > > 
>> > > > >  The release is available for download at:
>> > > > >  https://flink.apache.org/downloads.html
>> > > > > 
>> > > > >  Please check out the release blog post for an overview of the
>> > > > > >> improvements for this bugfix release:
>> > > > > 
>> > > > > >>
>> > > > >
>> > > > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>> > > > > 
>> > > > >  The full release notes are available in Jira:
>> > > > > 
>> > > > > >>
>> > > > >
>> > > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>> > > > > 
>> > > > >  We would like to thank all contributors of the Apache Flink
>> > > > community
>> > > > > >> who made this release possible!
>> > > > > 
>> > > > > 
>> > > > >  Best,
>> > > > >  Yun, Jing, Martijn and Lincoln
>> > > > > >>
>> > > > >
>> > > > >
>> > > >
>> > >
>> > >
>> > > --
>> > > Best,
>> > > Hangxiang.
>>
>>
>>
>> --
>>
>> Best,
>> Benchao Li
>
>
>
> --
> Best regards,
> Sergey



-- 
Best,
Yanfei


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Sergey Nuyanzin
Congratulations, thanks release managers and everyone involved for the
great work!

On Mon, Mar 18, 2024 at 12:15 PM Benchao Li  wrote:

> Congratulations! And thanks to all release managers and everyone
> involved in this release!
>
> Yubin Li  于2024年3月18日周一 18:11写道:
> >
> > Congratulations!
> >
> > Thanks to release managers and everyone involved.
> >
> > On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu 
> wrote:
> > >
> > > Congratulations!
> > > Thanks release managers and all involved!
> > >
> > > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan 
> wrote:
> > >
> > > > Congratulations!
> > > >
> > > > Best,
> > > > Hang
> > > >
> > > > Paul Lam  于2024年3月18日周一 17:18写道:
> > > >
> > > > > Congrats! Thanks to everyone involved!
> > > > >
> > > > > Best,
> > > > > Paul Lam
> > > > >
> > > > > > 2024年3月18日 16:37,Samrat Deb  写道:
> > > > > >
> > > > > > Congratulations !
> > > > > >
> > > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li <
> jingsongl...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > >> Congratulations!
> > > > > >>
> > > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com>
> wrote:
> > > > > >>>
> > > > > >>> Congratulations, thanks for the great work!
> > > > > >>>
> > > > > >>> Best,
> > > > > >>> Rui
> > > > > >>>
> > > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee <
> lincoln.8...@gmail.com>
> > > > > >> wrote:
> > > > > 
> > > > >  The Apache Flink community is very happy to announce the
> release of
> > > > > >> Apache Flink 1.19.0, which is the first release for the Apache
> Flink
> > > > > 1.19
> > > > > >> series.
> > > > > 
> > > > >  Apache Flink® is an open-source stream processing framework
> for
> > > > > >> distributed, high-performing, always-available, and accurate
> data
> > > > > streaming
> > > > > >> applications.
> > > > > 
> > > > >  The release is available for download at:
> > > > >  https://flink.apache.org/downloads.html
> > > > > 
> > > > >  Please check out the release blog post for an overview of the
> > > > > >> improvements for this bugfix release:
> > > > > 
> > > > > >>
> > > > >
> > > >
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > > > > 
> > > > >  The full release notes are available in Jira:
> > > > > 
> > > > > >>
> > > > >
> > > >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> > > > > 
> > > > >  We would like to thank all contributors of the Apache Flink
> > > > community
> > > > > >> who made this release possible!
> > > > > 
> > > > > 
> > > > >  Best,
> > > > >  Yun, Jing, Martijn and Lincoln
> > > > > >>
> > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > Best,
> > > Hangxiang.
>
>
>
> --
>
> Best,
> Benchao Li
>


-- 
Best regards,
Sergey


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Benchao Li
Congratulations! And thanks to all release managers and everyone
involved in this release!

Yubin Li  wrote on Mon, Mar 18, 2024 at 18:11:
>
> Congratulations!
>
> Thanks to release managers and everyone involved.
>
> On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu  wrote:
> >
> > Congratulations!
> > Thanks release managers and all involved!
> >
> > On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan  wrote:
> >
> > > Congratulations!
> > >
> > > Best,
> > > Hang
> > >
> > > Paul Lam  于2024年3月18日周一 17:18写道:
> > >
> > > > Congrats! Thanks to everyone involved!
> > > >
> > > > Best,
> > > > Paul Lam
> > > >
> > > > > 2024年3月18日 16:37,Samrat Deb  写道:
> > > > >
> > > > > Congratulations !
> > > > >
> > > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li 
> > > > wrote:
> > > > >
> > > > >> Congratulations!
> > > > >>
> > > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> > > > >>>
> > > > >>> Congratulations, thanks for the great work!
> > > > >>>
> > > > >>> Best,
> > > > >>> Rui
> > > > >>>
> > > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> > > > >> wrote:
> > > > 
> > > >  The Apache Flink community is very happy to announce the release of
> > > > >> Apache Flink 1.19.0, which is the first release for the Apache Flink
> > > > 1.19
> > > > >> series.
> > > > 
> > > >  Apache Flink® is an open-source stream processing framework for
> > > > >> distributed, high-performing, always-available, and accurate data
> > > > streaming
> > > > >> applications.
> > > > 
> > > >  The release is available for download at:
> > > >  https://flink.apache.org/downloads.html
> > > > 
> > > >  Please check out the release blog post for an overview of the
> > > > >> improvements for this bugfix release:
> > > > 
> > > > >>
> > > >
> > > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > > > 
> > > >  The full release notes are available in Jira:
> > > > 
> > > > >>
> > > >
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> > > > 
> > > >  We would like to thank all contributors of the Apache Flink
> > > community
> > > > >> who made this release possible!
> > > > 
> > > > 
> > > >  Best,
> > > >  Yun, Jing, Martijn and Lincoln
> > > > >>
> > > >
> > > >
> > >
> >
> >
> > --
> > Best,
> > Hangxiang.



-- 

Best,
Benchao Li


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Yubin Li
Congratulations!

Thanks to release managers and everyone involved.

On Mon, Mar 18, 2024 at 5:55 PM Hangxiang Yu  wrote:
>
> Congratulations!
> Thanks release managers and all involved!
>
> On Mon, Mar 18, 2024 at 5:23 PM Hang Ruan  wrote:
>
> > Congratulations!
> >
> > Best,
> > Hang
> >
> > Paul Lam  于2024年3月18日周一 17:18写道:
> >
> > > Congrats! Thanks to everyone involved!
> > >
> > > Best,
> > > Paul Lam
> > >
> > > > 2024年3月18日 16:37,Samrat Deb  写道:
> > > >
> > > > Congratulations !
> > > >
> > > > On Mon, 18 Mar 2024 at 2:07 PM, Jingsong Li 
> > > wrote:
> > > >
> > > >> Congratulations!
> > > >>
> > > >> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> > > >>>
> > > >>> Congratulations, thanks for the great work!
> > > >>>
> > > >>> Best,
> > > >>> Rui
> > > >>>
> > > >>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> > > >> wrote:
> > > 
> > >  The Apache Flink community is very happy to announce the release of
> > > >> Apache Flink 1.19.0, which is the first release for the Apache Flink
> > > 1.19
> > > >> series.
> > > 
> > >  Apache Flink® is an open-source stream processing framework for
> > > >> distributed, high-performing, always-available, and accurate data
> > > streaming
> > > >> applications.
> > > 
> > >  The release is available for download at:
> > >  https://flink.apache.org/downloads.html
> > > 
> > >  Please check out the release blog post for an overview of the
> > > >> improvements for this bugfix release:
> > > 
> > > >>
> > >
> > https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> > > 
> > >  The full release notes are available in Jira:
> > > 
> > > >>
> > >
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> > > 
> > >  We would like to thank all contributors of the Apache Flink
> > community
> > > >> who made this release possible!
> > > 
> > > 
> > >  Best,
> > >  Yun, Jing, Martijn and Lincoln
> > > >>
> > >
> > >
> >
>
>
> --
> Best,
> Hangxiang.


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Zakelly Lan
Congratulations!

Thanks Lincoln, Yun, Martijn and Jing for driving this release.
Thanks everyone involved.


Best,
Zakelly

On Mon, Mar 18, 2024 at 5:05 PM weijie guo 
wrote:

> Congratulations!
>
> Thanks release managers and all the contributors involved.
>
> Best regards,
>
> Weijie
>
>
Leonard Xu  wrote on Mon, Mar 18, 2024 at 16:45:
>
>> Congratulations, thanks release managers and all involved for the great
>> work!
>>
>>
>> Best,
>> Leonard
>>
>> > 2024年3月18日 下午4:32,Jingsong Li  写道:
>> >
>> > Congratulations!
>> >
>> > On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>> >>
>> >> Congratulations, thanks for the great work!
>> >>
>> >> Best,
>> >> Rui
>> >>
>> >> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
>> wrote:
>> >>>
>> >>> The Apache Flink community is very happy to announce the release of
>> Apache Flink 1.19.0, which is the first release for the Apache Flink 1.19
>> series.
>> >>>
>> >>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>> >>>
>> >>> The release is available for download at:
>> >>> https://flink.apache.org/downloads.html
>> >>>
>> >>> Please check out the release blog post for an overview of the
>> improvements for this bugfix release:
>> >>>
>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>> >>>
>> >>> The full release notes are available in Jira:
>> >>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>> >>>
>> >>> We would like to thank all contributors of the Apache Flink community
>> who made this release possible!
>> >>>
>> >>>
>> >>> Best,
>> >>> Yun, Jing, Martijn and Lincoln
>>
>>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by weijie guo
Congratulations!

Thanks release managers and all the contributors involved.

Best regards,

Weijie


Leonard Xu  wrote on Mon, Mar 18, 2024 at 16:45:

> Congratulations, thanks release managers and all involved for the great
> work!
>
>
> Best,
> Leonard
>
> > 2024年3月18日 下午4:32,Jingsong Li  写道:
> >
> > Congratulations!
> >
> > On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
> >>
> >> Congratulations, thanks for the great work!
> >>
> >> Best,
> >> Rui
> >>
> >> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> wrote:
> >>>
> >>> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.19.0, which is the first release for the Apache Flink 1.19
> series.
> >>>
> >>> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> >>>
> >>> The release is available for download at:
> >>> https://flink.apache.org/downloads.html
> >>>
> >>> Please check out the release blog post for an overview of the
> improvements for this bugfix release:
> >>>
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> >>>
> >>> The full release notes are available in Jira:
> >>>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> >>>
> >>> We would like to thank all contributors of the Apache Flink community
> who made this release possible!
> >>>
> >>>
> >>> Best,
> >>> Yun, Jing, Martijn and Lincoln
>
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Leonard Xu
Congratulations, thanks release managers and all involved for the great work!


Best,
Leonard

> 2024年3月18日 下午4:32,Jingsong Li  写道:
> 
> Congratulations!
> 
> On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>> 
>> Congratulations, thanks for the great work!
>> 
>> Best,
>> Rui
>> 
>> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee  wrote:
>>> 
>>> The Apache Flink community is very happy to announce the release of Apache 
>>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>>> 
>>> Apache Flink® is an open-source stream processing framework for 
>>> distributed, high-performing, always-available, and accurate data streaming 
>>> applications.
>>> 
>>> The release is available for download at:
>>> https://flink.apache.org/downloads.html
>>> 
>>> Please check out the release blog post for an overview of the improvements 
>>> for this bugfix release:
>>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>>> 
>>> The full release notes are available in Jira:
>>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>>> 
>>> We would like to thank all contributors of the Apache Flink community who 
>>> made this release possible!
>>> 
>>> 
>>> Best,
>>> Yun, Jing, Martijn and Lincoln



Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Jark Wu
Congrats! Thanks Lincoln, Jing, Yun and Martijn driving this release.
Thanks all who involved this release!

Best,
Jark


On Mon, 18 Mar 2024 at 16:31, Rui Fan <1996fan...@gmail.com> wrote:

> Congratulations, thanks for the great work!
>
> Best,
> Rui
>
> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee 
> wrote:
>
> > The Apache Flink community is very happy to announce the release of
> Apache
> > Flink 1.19.0, which is the first release for the Apache Flink 1.19
> series.
> >
> > Apache Flink® is an open-source stream processing framework for
> > distributed, high-performing, always-available, and accurate data
> streaming
> > applications.
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Please check out the release blog post for an overview of the
> improvements
> > for this bugfix release:
> >
> >
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
> >
> > The full release notes are available in Jira:
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
> >
> > We would like to thank all contributors of the Apache Flink community who
> > made this release possible!
> >
> >
> > Best,
> > Yun, Jing, Martijn and Lincoln
> >
>


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Jingsong Li
Congratulations!

On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote:
>
> Congratulations, thanks for the great work!
>
> Best,
> Rui
>
> On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee  wrote:
>>
>> The Apache Flink community is very happy to announce the release of Apache 
>> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>>
>> Apache Flink® is an open-source stream processing framework for distributed, 
>> high-performing, always-available, and accurate data streaming applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the improvements 
>> for this bugfix release:
>> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>>
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>>
>> We would like to thank all contributors of the Apache Flink community who 
>> made this release possible!
>>
>>
>> Best,
>> Yun, Jing, Martijn and Lincoln


Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 by Rui Fan
Congratulations, thanks for the great work!

Best,
Rui

On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee  wrote:

> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements
> for this bugfix release:
>
> https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353282
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
>
> Best,
> Yun, Jing, Martijn and Lincoln
>


[ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Posted by Lincoln Lee
The Apache Flink community is very happy to announce the release of Apache
Flink 1.19.0, which is the first release for the Apache Flink 1.19 series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
in this release:
https://flink.apache.org/2024/03/18/announcing-the-release-of-apache-flink-1.19/

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12353282

We would like to thank all contributors of the Apache Flink community who
made this release possible!


Best,
Yun, Jing, Martijn and Lincoln


Re: Urgent: when will [FLINK-34170] be fixed?

2024-03-14 Posted by Benchao Li
FLINK-34170 is only a UI display issue; it does not affect actual execution.

The problem of lookup-table filter pushdown not taking effect in the JDBC
connector was already fixed in FLINK-33365, and the latest JDBC connector
release ships with that fix — you can give it a try.

casel.chen wrote on Fri, Mar 15, 2024 at 10:39:
>
> We recently ran into the same problem described in FLINK-34170 while developing
> a Flink SQL job on Flink 1.17.1 that joins a dimension table with a composite
> primary key. When, and in which version, can this major issue be fixed? Thanks!
>
>
> select xxx from kafka_table as kt
> left join phoenix_table FOR SYSTEM_TIME AS OF phoenix_table.proctime as pt
> on kt.trans_id = pt.trans_id and pt.trans_date =
> DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyyMMdd');
>
>
> The Phoenix table's primary key is the composite key trans_id + trans_date.
> When the job actually runs, Flink scans the Phoenix table with only the
> trans_id field and then filters the scan results by the trans_date value.
>
>
> https://issues.apache.org/jira/browse/FLINK-34170



-- 

Best,
Benchao Li


Urgent: when will [FLINK-34170] be fixed?

2024-03-14 Posted by casel.chen
We recently ran into the same problem described in FLINK-34170 while developing
a Flink SQL job on Flink 1.17.1 that joins a dimension table with a composite
primary key. When, and in which version, can this major issue be fixed? Thanks!


select xxx from kafka_table as kt
left join phoenix_table FOR SYSTEM_TIME AS OF phoenix_table.proctime as pt
on kt.trans_id = pt.trans_id and pt.trans_date =
DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyyMMdd');


The Phoenix table's primary key is the composite key trans_id + trans_date.
When the job actually runs, Flink scans the Phoenix table with only the
trans_id field and then filters the scan results by the trans_date value.


https://issues.apache.org/jira/browse/FLINK-34170
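
Until that fix is available, one possible workaround (a sketch only — the derived column calc_date and the use of kt.proctime are assumptions for illustration, not from the original job) is to materialize the date on the probe side so that both columns of the composite key appear as plain equality conditions, which the planner can then extract as lookup keys:

```sql
-- Sketch of a possible workaround (untested): compute the date on the
-- kafka (probe) side so both composite-key columns become equi-join keys.
-- `calc_date` is a hypothetical helper column added for illustration.
SELECT xxx
FROM (
  SELECT kafka_table.*,
         DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyyMMdd') AS calc_date
  FROM kafka_table
) AS kt
LEFT JOIN phoenix_table FOR SYSTEM_TIME AS OF kt.proctime AS pt
  ON kt.trans_id = pt.trans_id
 AND kt.calc_date = pt.trans_date;
```

With both equality conditions present, the lookup should carry trans_id and trans_date into the Phoenix point query instead of scanning by trans_id alone and filtering afterwards.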

Re: Re: Join conditions for dimension-table lookups in the Flink SQL execution plan

2024-03-14 Posted by Jane Chan
Hi iasiuide,

Thanks for the question. Let me answer the last one first.

Some constraints on the dimension table end up as join conditions while others do not — is there a rule behind this?
>

The on condition of a lookup join goes through a series of rewrites during
optimization; below is a brief note on just the places that affect lookup and where.

1. In the logical phase, FlinkFilterJoinRule splits the on condition into
predicates that touch only one side (the left/right table) and predicates that
touch both sides. **Single-side filters are pushed down below the join node
whenever possible** (which may generate an extra Filter node); what happens to
that Filter node next depends on whether the filter can be pushed down to the
source — if it cannot, then in the physical phase it becomes the condition
inside the Calc node above the dimension table (denoted by calcOnTemporalTable).

2. When CommonPhysicalLookupJoin resolves allLookupKeys, it tries to extract
constant conditions from calcOnTemporalTable to form the final lookup keys
(i.e. the lookup=[...] part of the explain plan). At explain time, as long as
calcOnTemporalTable exists, where=[...] is printed.

Back to the concrete case

Why is the constraint AND b.trans_date = DATE_FORMAT
> (CURRENT_TIMESTAMP, 'yyyyMMdd') on the first dimension table
> dim_ymfz_prod_sys_trans_log not part of the lookup condition in the plan ==>
> lookup=[bg_rel_trans_id=bg_rel_trans_id],
>

Because b.trans_date = DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyyMMdd') is a
single-side condition on the dimension table and cannot be pushed down. Note
also that a non-deterministic function[1] is used here; please keep an eye on
result correctness.


> For the second dimension table dim_ptfz_ymfz_merchant_info, both parts of the
> constraint ON b.member_id = c.pk_id AND c.data_source = 'merch' appear as
> lookup conditions in the plan ==> lookup=[data_source=_UTF-16LE'merch', pk_id=member_id],
>

In this case the constant can be extracted.


> For the third dimension table dim_ptfz_ymfz_merchant_info, in the constraint
> ON c.agent_id = d.pk_id AND (d.data_source = 'ex_agent' OR d.data_source = 'agent'),
> the data_source part is not a lookup condition in the plan ==> lookup=[pk_id=agent_id],
>

As far as I know, lookup does not yet support SARGable predicates.

[1]
https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/table/concepts/determinism/

Best,
Jane

On Fri, Mar 8, 2024 at 11:19 AM iasiuide  wrote:

> OK, the SQL snippet has been pasted.
>
> On 2024-03-08 11:02:34, "Xuyang" wrote:
> >Hi, your image is broken; please use an image host or paste the SQL directly
> >
> >
> >
> >
> >--
> >
> >Best!
> >Xuyang
> >
> >
> >
> >
>On 2024-03-08 10:54:19, "iasiuide" wrote:
> >
> >
> >
> >
> >
>In the SQL snippet below,
>ods_ymfz_prod_sys_divide_order is a kafka source table
>dim_ymfz_prod_sys_trans_log is a mysql table
>dim_ptfz_ymfz_merchant_info is a mysql table
> >
> >
> >
>The execution-plan fragment shown in the flink web ui is as follows:
> >
> > [1]:TableSourceScan(table=[[default_catalog, default_database,
> ods_ymfz_prod_sys_divide_order, watermark=[-(CASE(IS NULL(create_time),
> 1970-01-01 00:00:00:TIMESTAMP(3), CAST(create_time AS TIMESTAMP(3))),
> 5000:INTERVAL SECOND)]]], fields=[row_kind, id, sys_date, bg_rel_trans_id,
> order_state, create_time, update_time, divide_fee_amt, divide_fee_flag])
>+- [2]:Calc(select=[sys_date, bg_rel_trans_id, create_time,
> IF(SEARCH(row_kind, Sarg[_UTF-16LE'-D', _UTF-16LE'-U']), (-1 *
> divide_fee_amt), divide_fee_amt) AS div_fee_amt,
> Reinterpret(CASE(create_time IS NULL, 1970-01-01 00:00:00, CAST(create_time
> AS TIMESTAMP(3)))) AS ts], where=[((order_state = '2') AND (divide_fee_amt
> > 0) AND (sys_date = DATE_FORMAT(CAST(CURRENT_TIMESTAMP() AS
> TIMESTAMP(9)), 'yyyy-MM-dd')))])
> >   +-
> [3]:LookupJoin(table=[default_catalog.default_database.dim_ymfz_prod_sys_trans_log],
> joinType=[LeftOuterJoin], async=[false],
> lookup=[bg_rel_trans_id=bg_rel_trans_id], where=[(trans_date =
DATE_FORMAT(CAST(CURRENT_TIMESTAMP() AS TIMESTAMP(9)), 'yyyyMMdd'))],
> select=[sys_date, bg_rel_trans_id, create_time, div_fee_amt, ts,
> bg_rel_trans_id, pay_type, member_id, mer_name])
> >  +- [4]:Calc(select=[sys_date, create_time, div_fee_amt, ts,
> pay_type, member_id, mer_name], where=[(CHAR_LENGTH(member_id) > 1)])
> > +-
> [5]:LookupJoin(table=[default_catalog.default_database.dim_ptfz_ymfz_merchant_info],
> joinType=[LeftOuterJoin], async=[false],
> lookup=[data_source=_UTF-16LE'merch', pk_id=member_id], where=[(data_source
> = 'merch')], select=[sys_date, create_time, div_fee_amt, ts, pay_type,
> member_id, mer_name, pk_id, agent_id, bagent_id])
> >+- [6]:Calc(select=[sys_date, create_time, div_fee_amt, ts,
> pay_type, member_id, mer_name, agent_id, bagent_id])
> >   +-
> [7]:LookupJoin(table=[default_catalog.default_database.dim_ptfz_ymfz_merchant_info],
> joinType=[LeftOuterJoin], async=[false], lookup=[pk_id=agent_id],
> where=[SEARCH(data_source, Sarg[_UTF-16LE'agent', _UTF-16LE'ex_agent'])],
> select=[sys_date, create_time, div_fee_amt, ts, pay_type, member_id,
> mer_name, agent_id, bagent_id, pk_id, bagent_id, fagent_id])
> >  +- [8]:Calc(select=[sys_date, create_time, div_fee_amt,
> ts, pay_type, member_id, mer_name, bagent_id, bagent_id0, fagent_id AS
> fagent_id0])
> > +-
> [9]:LookupJoin(table=[default_catalog.default_database.dim_ptfz_ymfz_merchant_info],
> joinType=[LeftOuterJoin], async=[false],
> lookup=[data_source=_UTF-16LE'agent', pk_id=fagent_id0],
> where=[(data_source = 'agent')], select=[sys_date, create_time,
> div_fee_amt, ts, pay_type, member_id, mer_name, bagent_id, bagent_id0,
> fagent_id0, pk_id, agent_name, bagent_name])
> >  
> >
> >
>Why is the constraint AND b.trans_date = DATE_FORMAT
> (CURRENT_TIMESTAMP, 'yyyyMMdd') on the first dimension table
> dim_ymfz_prod_sys_trans_log not a lookup condition in the execution plan ==>
> lookup=[bg_rel_trans_id=bg_rel_trans_id],
>For the second dimension table dim_ptfz_ymfz_merchant_info, the constraint ON b.member_id = c.pk_id AND
> 
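
The rule Jane describes can be summarized with a minimal pair of sketches (table and column names here are hypothetical, chosen only to mirror the plan in the question):

```sql
-- 1) Constant equality on the dim table: extractable during planning,
--    so it shows up as a lookup key, e.g.
--      lookup=[data_source=_UTF-16LE'merch', pk_id=member_id]
SELECT o.*
FROM orders AS o
LEFT JOIN dim_merchant FOR SYSTEM_TIME AS OF o.proctime AS d
  ON o.member_id = d.pk_id AND d.data_source = 'merch';

-- 2) Non-constant (here also non-deterministic) predicate on the dim table:
--    it cannot be pushed to the source, so it stays in the Calc above the
--    dim table and is printed as where=[...], e.g.
--      lookup=[pk_id=member_id], where=[trans_date = DATE_FORMAT(...)]
SELECT o.*
FROM orders AS o
LEFT JOIN dim_merchant FOR SYSTEM_TIME AS OF o.proctime AS d
  ON o.member_id = d.pk_id
 AND d.trans_date = DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyyMMdd');
```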

flink k8s operator chk config interval bug.inoperative

2024-03-14 文章 kcz
kcz
573693...@qq.com





Re: Setting parallelism vs. partition count when flink writes to kafka

2024-03-14 Posted by 熊柱
Unsubscribe

On 2024-03-13 15:25:27, "chenyu_opensource" wrote:
>Hello:
> When flink writes data to kafka (kafka as the sink) and the number of
> partitions of the kafka topic (set to 60) is smaller than the configured
> parallelism (set to 300), do the tasks write to these partitions in
> round-robin fashion, and does that hurt write throughput? (Is there overhead
> from cycling through the partitions?)
> If we then increase the topic's partition count (to 200, or straight to 300),
> will write throughput improve noticeably?
>
> Is there any relevant source code to look at?
>Looking forward to a reply. Best regards, and thanks!
>
>
>
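
For what it's worth, with the Kafka SQL sink the write-side distribution is controlled by the sink.partitioner table option; the DDL below is only a sketch with hypothetical table and topic names:

```sql
CREATE TABLE kafka_sink (      -- hypothetical table name
  id BIGINT,
  payload STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'my_topic',                           -- hypothetical topic
  'properties.bootstrap.servers' = 'broker:9092', -- hypothetical broker
  'format' = 'json',
  -- 'fixed': each sink subtask writes to exactly one partition
  --   (with parallelism 300 and 60 partitions, 5 subtasks share a partition);
  -- 'round-robin': each subtask cycles through all partitions;
  -- 'default': defer to the Kafka producer's own partitioner.
  'sink.partitioner' = 'round-robin'
);
```

For the source-code question, the subtask-to-partition mapping for 'fixed' lives in FlinkFixedPartitioner in the flink-connector-kafka source, which is a good starting point.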


Unsubscribe

2024-03-13 Posted by 李一飞
Unsubscribe




  1   2   3   4   5   6   7   8   9   10   >