Unsubscribe

> Begin forwarded message:
> 
> From: Biao Geng <biaoge...@gmail.com>
> Subject: Re: Requesting to unsubscribe from the mailing list, thanks
> Date: April 2, 2024 at 10:17:20 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi,
> 
> To unsubscribe, send an email with any content to user-zh-unsubscr...@flink.apache.org
> <user-unsubscr...@flink.apache.org>.
> 
> 
> Best,
> Biao Geng
> 
> <wangw...@sina.cn> wrote on Sun, Mar 31, 2024 at 22:20:
> 
>> Requesting to unsubscribe from the mailing list, thanks
> Begin forwarded message:
> 
> 发件人: "史鹏飞" <904148...@qq.com.INVALID>
> 主题: (无主题)
> 日期: 2024年4月17日 GMT+8 16:33:48
> 收件人: "user-zh" <user-zh@flink.apache.org>
> 回复-收件人: user-zh@flink.apache.org
> 
> 退订
> 下面是被转发的邮件:
> 
> 发件人: "王广邦" <wangguangb...@foxmail.com>
> 主题: HBase SQL连接器为啥不支持ARRAY/MAP/ROW类型
> 日期: 2024年4月1日 GMT+8 19:37:57
> 收件人: "Flink" <user-zh@flink.apache.org>
> 回复-收件人: user-zh@flink.apache.org
> 
> HBase SQL 连接器(flink-connector-hbase_2.11) 为啥不支持数据类型:ARRAY、MAP / MULTISET、ROW 
> 不支持?
> https://nightlies.apache.org/flink/flink-docs-release-1.11/zh/dev/table/connectors/hbase.html
> 另外这3种类型的需求处理思路是什么?
> 
> 
> 
> 
> Sent from my iPhone
> Begin forwarded message:
> 
> From: willluzheng <willluzh...@163.com>
> Subject: Re: Unsubscribe
> Date: April 14, 2024 at 15:43:24 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Cc: user-zh <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> ---- Original message ----
> | From | jimandlice<jimandl...@163.com> |
> | Date | April 13, 2024 19:50 |
> | To | user-zh<user-zh@flink.apache.org> |
> | Subject | Unsubscribe |
> Unsubscribe
> 
> 
> 
> 
> jimandlice
> jimandl...@163.com
> 
> 
> 
> Begin forwarded message:
> 
> From: <wangw...@sina.cn>
> Subject: Requesting to unsubscribe from the mailing list, thanks
> Date: March 31, 2024 at 22:20:09 GMT+8
> To: "user-zh" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Requesting to unsubscribe from the mailing list, thanks
> Begin forwarded message:
> 
> From: "ha.fen...@aisino.com" <ha.fen...@aisino.com>
> Subject: Using per-window state in ProcessWindowFunction
> Date: April 12, 2024 at 14:31:41 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> The windowing docs have a section describing the use of per-window state in ProcessWindowFunction, and I don't understand it. If late data arrives, the window computation is triggered again and everything is simply recomputed, right? Is the state stored so that the recomputation can be avoided? Is there any reference material on this?
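A minimal sketch of what that part of the docs describes, assuming an event-time window with allowed lateness (class and field names here are illustrative, not from the thread). By default a late firing re-runs process() over all of the window's buffered elements, and per-window state lets the function remember that it has already fired, e.g. to tag re-emissions as updates:

```
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Illustrative: count elements per key and window. A late element re-fires
// the window, and process() sees ALL buffered elements again, so the count
// is recomputed from scratch on every firing.
public class CountWithLateFirings
        extends ProcessWindowFunction<Long, String, String, TimeWindow> {

    private static final ValueStateDescriptor<Boolean> FIRED =
            new ValueStateDescriptor<>("fired-before", Boolean.class);

    @Override
    public void process(String key, Context ctx, Iterable<Long> elements,
                        Collector<String> out) throws Exception {
        long count = 0;
        for (Long ignored : elements) {
            count++;
        }

        // windowState() is scoped to the current key AND the current window,
        // so this flag survives between the on-time firing and late firings.
        ValueState<Boolean> firedBefore = ctx.windowState().getState(FIRED);
        boolean lateFiring = Boolean.TRUE.equals(firedBefore.value());
        firedBefore.update(true);

        out.collect((lateFiring ? "UPDATE " : "RESULT ") + key + "@"
                + ctx.window().getEnd() + " = " + count);
    }

    @Override
    public void clear(Context ctx) throws Exception {
        // Per-window state is NOT cleaned up automatically; clear it when the
        // window is finally purged, i.e. after the allowed lateness passes.
        ctx.windowState().getState(FIRED).clear();
    }
}
```

So yes: a late firing recomputes the whole window. Per-window state is not there to avoid the recomputation (pairing the window with a ReduceFunction or AggregateFunction is what makes the aggregation incremental); it is there to let the function react to re-firings. The "Using per-window state in ProcessWindowFunction" section of the windowing docs is the reference for this.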
> Begin forwarded message:
> 
> From: "casel.chen" <casel_c...@126.com>
> Subject: flink cdc metrics question
> Date: April 8, 2024 at 11:59:27 GMT+8
> To: "user-zh@flink.apache.org" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Does flink cdc expose any monitoring metrics?
> I would like to monitor how many binlog records a real-time flink cdc job has not yet consumed, similar to Kafka topic consumer-lag monitoring.
> I want to use this to keep a slow flink cdc job from being lapped (and how do I get the maximum binlog record count?)
> Begin forwarded message:
> 
> From: "995626544" <995626...@qq.com.INVALID>
> Subject: Unsubscribe
> Date: April 7, 2024 at 16:06:11 GMT+8
> To: "user-zh" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> 
> 
> 
> 
> 995626544
> 995626...@qq.com
> 
> 
> 
> Begin forwarded message:
> 
> From: "bai年" <827931...@qq.com.INVALID>
> Subject: (no subject)
> Date: April 1, 2024 at 17:03:12 GMT+8
> To: "user-zh@flink.apache.org" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> Begin forwarded message:
> 
> From: Biao Geng <biaoge...@gmail.com>
> Subject: Re: Problem configuring the hadoop dependency
> Date: April 2, 2024 at 10:52:15 GMT+8
> To: user-zh@flink.apache.org, ha.fen...@aisino.com
> Reply-To: user-zh@flink.apache.org
> 
> Hi fengqi,
> 
> The error "Hadoop is not in the classpath/dependencies." means the classes HDFS needs, such as
> org.apache.hadoop.conf.Configuration and org.apache.hadoop.fs.FileSystem, could not be found.
> 
> If hadoop is available in your system environment, the classpath is usually set like this:
> export HADOOP_CLASSPATH=`hadoop classpath`
> 
> If you are submitting to a local standalone flink cluster, check the log files flink generates: the classpath is printed there, and you can verify whether the Hadoop-related classes are on it.
> 
> Best,
> Biao Geng
> 
> 
> ha.fen...@aisino.com <ha.fen...@aisino.com> wrote on Tue, Apr 2, 2024 at 10:24:
> 
>> 1. In the development environment, with the hadoop-client dependency added, checkpoints can reach the hdfs path.
>> 2. With flink 1.19.0 and hadoop 3.3.1, submitting the jar to a standalone flink cluster reports the following error:
>> Caused by: java.lang.RuntimeException:
>> org.apache.flink.runtime.client.JobInitializationException: Could not start
>> the JobMaster.
>> at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
>> at
>> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
>> at
>> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>> at
>> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>> at
>> java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
>> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>> at
>> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>> at
>> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
>> Caused by: org.apache.flink.runtime.client.JobInitializationException:
>> Could not start the JobMaster.
>> at
>> org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
>> at
>> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>> at
>> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>> at
>> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>> at
>> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>> at java.lang.Thread.run(Thread.java:748)
>> Caused by: java.util.concurrent.CompletionException:
>> org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint
>> storage at checkpoint coordinator side.
>> at
>> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
>> at
>> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
>> at
>> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
>> ... 3 more
>> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to create
>> checkpoint storage at checkpoint coordinator side.
>> at
>> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:364)
>> at
>> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:273)
>> at
>> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:503)
>> at
>> org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:334)
>> at
>> org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:173)
>> at
>> org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:381)
>> at
>> org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:224)
>> at
>> org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:140)
>> at
>> org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:162)
>> at
>> org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121)
>> at
>> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:379)
>> at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:356)
>> at
>> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128)
>> at
>> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100)
>> at
>> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
>> at
>> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>> ... 3 more
>> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
>> Could not find a file system implementation for scheme 'hdfs'. The scheme
>> is not directly supported by Flink and no Hadoop file system to support
>> this scheme could be loaded. For a full list of supported file systems,
>> please see
>> https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
>> at
>> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:542)
>> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:408)
>> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:279)
>> at
>> org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:88)
>> at
>> org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage.createCheckpointStorage(FileSystemCheckpointStorage.java:336)
>> at
>> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:357)
>> ... 18 more
>> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
>> Hadoop is not in the classpath/dependencies.
>> at
>> org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:55)
>> at
>> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:525)
>> 
>> HADOOP_CLASSPATH=$HADOOP_HOME/etc/hadoop is already set.
>> Is this configuration correct? What else needs to be configured?
>> 
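As a quick way to confirm the diagnosis above, a hypothetical check like the following (not from the thread; run it with the same classpath the Flink JVM sees) reports whether the two classes the error message complains about are visible:

```
// Hypothetical diagnostic: Flink's 'hdfs' scheme can only be loaded if these
// Hadoop classes are visible on the classpath of the Flink process.
public class HadoopClasspathCheck {
    public static void main(String[] args) {
        String[] required = {
            "org.apache.hadoop.conf.Configuration",
            "org.apache.hadoop.fs.FileSystem"
        };
        for (String name : required) {
            try {
                Class.forName(name);
                System.out.println("OK:      " + name);
            } catch (ClassNotFoundException e) {
                System.out.println("MISSING: " + name);
            }
        }
    }
}
```

Note that $HADOOP_HOME/etc/hadoop contains only Hadoop's configuration files, not its jars, so pointing HADOOP_CLASSPATH at that directory is not enough; `hadoop classpath`, as suggested in the reply above, expands to the full list of jar paths.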
> Begin forwarded message:
> 
> From: Zhanghao Chen <zhanghao.c...@outlook.com>
> Subject: Re: Re: Unsubscribe
> Date: April 1, 2024 at 13:42:14 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Cc: user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com 
> <user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163....@flink.apache.org>,
>  user-zh-subscribe <user-zh-subscr...@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> To unsubscribe, send an email with any content to user-zh-unsubscr...@flink.apache.org. You can refer to [1] for managing your mailing list subscriptions.
> 
> [1]
> https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
> 
> Best,
> Zhanghao Chen
> ________________________________
> From: 戴少 <dsq...@126.com>
> Sent: Monday, April 1, 2024 11:10
> To: user-zh <user-zh@flink.apache.org>
> Cc: user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com 
> <user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163....@flink.apache.org>;
>  user-zh-subscribe <user-zh-subscr...@flink.apache.org>; user-zh 
> <user-zh@flink.apache.org>
> Subject: Re: Unsubscribe
> 
> Unsubscribe
> 
> --
> 
> Best Regards,
> 
> 
> 
> 
> ---- Original message ----
> | From | 李一飞<kurisu...@163.com> |
> | Date | March 14, 2024 00:09 |
> | To | 
> user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com<user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163....@flink.apache.org>,
> user-zh-subscribe <user-zh-subscr...@flink.apache.org>,
> user-zh <user-zh@flink.apache.org> |
> | Subject | Unsubscribe |
> Unsubscribe
> 
> 
> Begin forwarded message:
> 
> From: Enrique Alberto Perez Delgado <enrique.delg...@hellofresh.com>
> Subject: Unable to use Table API in AWS Managed Flink 1.18
> Date: April 10, 2024 at 17:32:19 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi all,
> 
> I am using AWS Managed Flink 1.18, where I am getting this error when trying 
> to submit my job:
> 
> ```
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option: 'connector'='jdbc'
>       at 
> org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
>       at 
> org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
>       at 
> org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
>       ... 32 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'jdbc' that implements 
> 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
> ```
> 
> I used to get this error when testing locally until I added 
> `flink-connector-jdbc-3.1.2-1.18.jar` to `/opt/flink/lib` in my local docker 
> image, which I thought would be provided by AWS. Apparently, it isn't. Has 
> anyone encountered this error before?
> 
> I highly appreciate any help you could give me,
> 
> Best regards, 
> 
> Enrique Perez
> Data Engineer
> HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
> Phone:  +4917625622422
> 
> Begin forwarded message:
> 
> From: 熊柱 <18428358...@163.com>
> Subject: Re: Re: Re: 1.19 custom data source
> Date: April 1, 2024 at 11:14:50 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> 
> On 2024-03-28 19:56:06, "Zhanghao Chen" <zhanghao.c...@outlook.com> wrote:
>> If you are using the DataStream API, you could also check whether the newly added DataGen Connector [1] directly covers your test-data generation needs.
>> 
>> 
>> [1] 
>> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/datastream/datagen/
>> 
>> Best,
>> Zhanghao Chen
>> ________________________________
>> From: ha.fen...@aisino.com <ha.fen...@aisino.com>
>> Sent: Thursday, March 28, 2024 15:34
>> To: user-zh <user-zh@flink.apache.org>
>> Subject: Re: Re: 1.19 custom data source
>> 
>> What I am asking is: if I need to implement the Source interface, how do I write it? Is there a concrete example that implements a custom class generating data at a given rate?
>> 
>> From: gongzhongqiang
>> Date: 2024-03-28 15:05
>> To: user-zh
>> Subject: Re: 1.19 custom data source
>> Hi:
>> 
>> flink 1.19 only marks SourceFunction as deprecated; it will be removed in a future release. So to keep your
>> application compatible with flink versions going forward, you can reimplement those SourceFunctions with the new Source API.
>> 
>> ha.fen...@aisino.com <ha.fen...@aisino.com> wrote on Thu, Mar 28, 2024 at 14:18:
>> 
>>> 
>>> I used to extend SourceFunction to implement simple methods that generate data automatically. In 1.19 it is marked as deprecated, and apparently the Source interface should be used instead, which is completely different from the old SourceFunction. How should I build a custom data source for testing now?
>>> 
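For the "generate data at a given rate" part of the question, a minimal sketch using the DataGeneratorSource that ships with the DataGen connector [1] (assuming flink-connector-datagen is on the classpath; names and the rate are illustrative):

```
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.connector.source.util.ratelimit.RateLimiterStrategy;
import org.apache.flink.connector.datagen.source.DataGeneratorSource;
import org.apache.flink.connector.datagen.source.GeneratorFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RateLimitedGeneratorJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Emits "event-0", "event-1", ... at roughly 10 records per second.
        GeneratorFunction<Long, String> generator = index -> "event-" + index;
        DataGeneratorSource<String> source = new DataGeneratorSource<>(
                generator,
                Long.MAX_VALUE,                     // total records to emit
                RateLimiterStrategy.perSecond(10),  // records per second
                Types.STRING);

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "datagen")
                .print();
        env.execute("rate-limited test source");
    }
}
```

A hand-written FLIP-27 Source (Source / SplitEnumerator / SourceReader) is only needed when this does not fit; for test data, a GeneratorFunction plus a RateLimiterStrategy covers the common case.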
> Begin forwarded message:
> 
> From: wyk <wyk118...@163.com>
> Subject: Re:Re: OOM during the mysql full snapshot phase
> Date: April 9, 2024 at 16:54:01 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> 
> Yes, there are quite a lot of splits: more than 17,000.
> The JM memory is currently 2g. After raising it to 4g the problem is still there, and if I keep raising the JM memory it will just sit unused later in the incremental phase. I found flink's JM heap memory parameters in the docs, but I don't know them well enough to tune them. Could you take a look and tell me which of the parameters below would be the right one to adjust?
> 
> The flink docs page is: 
> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/memory/mem_setup_jobmanager/
> 
> 
> 
> 
> 
> | Component | Configuration options | Description |
> | JVM Heap | jobmanager.memory.heap.size | JVM Heap memory size for job manager. |
> | Off-heap Memory | jobmanager.memory.off-heap.size | Off-heap memory size for job manager. This option covers all off-heap memory usage including direct and native memory allocation. |
> | JVM metaspace | jobmanager.memory.jvm-metaspace.size | Metaspace size of the Flink JVM process. |
> | JVM Overhead | jobmanager.memory.jvm-overhead.min / .max / .fraction | Native memory reserved for other JVM overhead, e.g. thread stacks, code cache, garbage collection space etc.; a capped fractionated component of the total process memory. |
> 
> 
> 
> 
> On 2024-04-09 11:28:57, "Shawn Huang" <hx0...@gmail.com> wrote:
> 
> From the error message, the JM heap memory is insufficient; you can try increasing the JM memory. One possible cause is that the mysql table produces many splits in the full snapshot phase, which makes the SourceEnumerator state large.
> 
> Best,
> Shawn Huang
> 
> 
> wyk <wyk118...@163.com> wrote on Mon, Apr 8, 2024 at 17:46:
> 
> 
> Hi developers:
>         flink version: 1.14.5
>         flink-cdc version: 2.2.0
> 
> When flink-cdc-mysql collects the full snapshot, checkpoints are taken during the full phase, and the checkpoint runs into an OOM. Is there any way around this?
>        The specific error is shown in the attached text and the image below:
> 
> 
> 
> Begin forwarded message:
> 
> From: "我的意中人是个盖世英雄" <369266...@qq.com.INVALID>
> Subject: Re: Unsubscribe
> Date: April 18, 2024 at 16:03:39 GMT+8
> To: "user-zh" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> 
> 
> 
> --- Original message ---
> From: "willluzheng" <willluzh...@163.com>
> Date: Sunday, April 14, 2024, 15:46
> To: "user-zh" <user-zh@flink.apache.org>;
> Cc: "user-zh" <user-zh@flink.apache.org>;
> Subject: Re: Unsubscribe
> 
> 
> Unsubscribe
> ---- Original message ----
> | From | jimandlice |
> | Date | April 13, 2024 19:50 |
> | To | user-zh |
> | Subject | Unsubscribe |
> Unsubscribe
> 
> 
> 
> 
> jimandlice
> jimandl...@163.com
> 
> 
> 
> Begin forwarded message:
> 
> From: Biao Geng <biaoge...@gmail.com>
> Subject: Re: Unsubscribe
> Date: April 2, 2024 at 10:17:33 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi,
> 
> To unsubscribe, send an email with any content to user-zh-unsubscr...@flink.apache.org
> <user-unsubscr...@flink.apache.org>.
> 
> 
> Best,
> Biao Geng
> 
> CloudFunny <li1056c...@163.com> wrote on Sun, Mar 31, 2024 at 22:25:
> 
>> 
>> 
> Begin forwarded message:
> 
> From: spoon_lz <spoon...@126.com>
> Subject: Re: completed flink jobs disappear after a while
> Date: April 9, 2024 at 11:10:46 GMT+8
> To: "user-zh@flink.apache.org" <user-zh@flink.apache.org>
> Cc: user-zh <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> There is an expiration-time setting:
> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#jobstore-expiration-time
> 
> 
> 
> spoon_lz
> spoon...@126.com
> 
> 
> ---- Original message ----
> | From | ha.fen...@aisino.com<ha.fen...@aisino.com> |
> | Date | April 9, 2024 10:38 |
> | To | user-zh<user-zh@flink.apache.org> |
> | Subject | completed flink jobs disappear after a while |
> In the web UI, finished jobs can be seen under Completed Jobs, but when I check again a while later the data is gone. Is there some configuration that deletes them automatically?
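A small sketch of that setting (hypothetical values; in production it belongs in the cluster configuration rather than in code, and the local web UI requires flink-runtime-web on the classpath):

```
import java.time.Duration;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Hypothetical: keep finished jobs visible in the web UI for 24 hours
// instead of the default by raising jobstore.expiration-time (seconds).
public class JobStoreRetention {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set(JobManagerOptions.JOB_STORE_EXPIRATION_TIME,
                Duration.ofHours(24).getSeconds());
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);
        env.fromElements(1, 2, 3).print();
        env.execute("short-lived job");
    }
}
```

The default expiration is 3600 seconds, which matches jobs dropping out of Completed Jobs after about an hour.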
> Begin forwarded message:
> 
> From: Biao Geng <biaoge...@gmail.com>
> Subject: Re: Unsubscribe
> Date: April 2, 2024 at 10:17:02 GMT+8
> To: user-zh@flink.apache.org, yangdongshu_l...@163.com
> Reply-To: user-zh@flink.apache.org
> 
> Hi,
> 
> To unsubscribe, send an email with any content to user-zh-unsubscr...@flink.apache.org
> <user-unsubscr...@flink.apache.org>.
> 
> 
> Best,
> Biao Geng
> 
> 
> 杨东树 <yangdongshu_l...@163.com> wrote on Sun, Mar 31, 2024 at 20:23:
> 
>> Requesting to unsubscribe from mail notifications, thanks!
> Begin forwarded message:
> 
> From: 薛礼彬 <xuelibin0...@163.com>
> Subject: Unsubscribe
> Date: April 1, 2024 at 23:19:23 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> Begin forwarded message:
> 
> From: 戴少 <dsq...@126.com>
> Subject: Re: Unsubscribe
> Date: April 1, 2024 at 11:09:29 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Cc: user-zh <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> 
> --
> 
> Best Regards,
> 
> 
> 
> 
> ---- Original message ----
> | From | wangfengyang<wangfy2...@qq.com.invalid> |
> | Date | March 22, 2024 17:28 |
> | To | user-zh <user-zh@flink.apache.org> |
> | Subject | Unsubscribe |
> Unsubscribe
> Begin forwarded message:
> 
> From: CloudFunny <li1056c...@163.com>
> Subject: Unsubscribe
> Date: March 31, 2024 at 22:24:22 GMT+8
> To: "user-zh@flink.apache.org" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> 
> Begin forwarded message:
> 
> From: Shawn Huang <hx0...@gmail.com>
> Subject: Re: flink cdc metrics question
> Date: April 8, 2024 at 13:02:31 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi, flink cdc currently does not expose a metric for the number of unconsumed binlog records. You can use the currentFetchEventTimeLag
> metric [1] (the lag between the time of the fetched binlog records and the current time) to judge the current consumption status.
> 
> [1]
> https://github.com/apache/flink-cdc/blob/master/flink-cdc-connect/flink-cdc-source-connectors/flink-connector-mysql-cdc/src/main/java/org/apache/flink/cdc/connectors/mysql/source/metrics/MySqlSourceReaderMetrics.java
> 
> Best,
> Shawn Huang
> 
> 
> casel.chen <casel_c...@126.com> wrote on Mon, Apr 8, 2024 at 12:01:
> 
>> Does flink cdc expose any monitoring metrics?
>> I would like to monitor how many binlog records a real-time flink cdc job has not yet consumed, similar to Kafka topic consumer-lag monitoring.
>> I want to use this to keep a slow flink cdc job from being lapped (and how do I get the maximum binlog record count?)
> Begin forwarded message:
> 
> From: 杨东树 <yangdongshu_l...@163.com>
> Subject: Unsubscribe
> Date: March 31, 2024 at 20:23:40 GMT+8
> To: "user-zh@flink.apache.org" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Requesting to unsubscribe from mail notifications, thanks!
> Begin forwarded message:
> 
> From: Biao Geng <biaoge...@gmail.com>
> Subject: Re: Unsubscribe
> Date: April 8, 2024 at 10:10:24 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi,
> 
> If you want to unsubscribe from the user-zh mailing list, please send an email
> with any content to user-zh-unsubscr...@flink.apache.org
> <user-unsubscr...@flink.apache.org>.
> 
> 
> Best,
> Biao Geng
> 
> 
> 995626544 <995626...@qq.com.invalid> wrote on Sun, Apr 7, 2024 at 16:06:
> 
>> Unsubscribe
>> 
>> 
>> 
>> 
>> 995626544
>> 995626...@qq.com
>> 
>> 
>> 
> Begin forwarded message:
> 
> From: "bai年" <827931...@qq.com.INVALID>
> Subject: Unsubscribe
> Date: April 1, 2024 at 17:03:22 GMT+8
> To: "user-zh" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> Begin forwarded message:
> 
> From: jimandlice <jimandl...@163.com>
> Subject: Unsubscribe
> Date: April 13, 2024 at 19:50:21 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> 
> 
> 
> 
> jimandlice
> jimandl...@163.com
> 
> 
> 
> Begin forwarded message:
> 
> From: Zhanghao Chen <zhanghao.c...@outlook.com>
> Subject: Re: Unsubscribe
> Date: April 1, 2024 at 13:42:24 GMT+8
> To: "user-zh@flink.apache.org" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> To unsubscribe, send an email with any content to user-zh-unsubscr...@flink.apache.org. You can refer to [1] for managing your mailing list subscriptions.
> 
> [1]
> https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
> 
> Best,
> Zhanghao Chen
> ________________________________
> From: zjw <co_...@163.com>
> Sent: Monday, April 1, 2024 11:05
> To: user-zh@flink.apache.org <user-zh@flink.apache.org>
> Subject: Unsubscribe
> 
> 
> Begin forwarded message:
> 
> From: Xuyang <xyzhong...@163.com>
> Subject: Re: Unable to use Table API in AWS Managed Flink 1.18
> Date: April 11, 2024 at 09:36:52 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi, Perez.
> Flink uses SPI to discover the jdbc connector on the classpath, and at 
> startup the directory '${FLINK_ROOT}/lib' is added 
> to the classpath. That is why the exception is thrown on AWS. IMO there are 
> two ways to solve this:
> 
> 
> 1. Upload the connector jar to AWS so the classloader can find it. For how 
> to upload connector jars, you need to check 
> the relevant AWS documentation.
> 2. Package the jdbc connector jar into your job jar and submit it again.
> 
> 
> 
> 
> --
> 
>    Best!
>    Xuyang
> 
> 
> 
> 
> At 2024-04-10 17:32:19, "Enrique Alberto Perez Delgado" 
> <enrique.delg...@hellofresh.com> wrote:
> 
> Hi all,
> 
> 
> I am using AWS Managed Flink 1.18, where I am getting this error when trying 
> to submit my job:
> 
> 
> ```
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option: 'connector'='jdbc'
>       at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:798)
>       at org.apache.flink.table.factories.FactoryUtil.discoverTableFactory(FactoryUtil.java:772)
>       at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(FactoryUtil.java:317)
>       ... 32 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'jdbc' that implements 
> 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
> ```
> 
> 
> I used to get this error when testing locally until I added 
> `flink-connector-jdbc-3.1.2-1.18.jar` to `/opt/flink/lib` in my local docker 
> image, which I thought would be provided by AWS. Apparently, it isn't. Has 
> anyone encountered this error before?
> 
> 
> I highly appreciate any help you could give me,
> 
> 
> Best regards, 
> 
> 
> Enrique Perez
> Data Engineer
> HelloFresh SE | Prinzenstraße 89 | 10969 Berlin, Germany
> Phone:  +4917625622422
> Begin forwarded message:
> 
> From: Yunfeng Zhou <flink.zhouyunf...@gmail.com>
> Subject: Re: Why doesn't the HBase SQL connector support ARRAY/MAP/ROW types
> Date: April 7, 2024 at 10:47:35 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> It is probably because these composite types have no directly corresponding data type in HBase, so Flink SQL does not support them out of the box.
> 
> One approach is to convert these types into a string in some format (such as json) or serialize them into a byte array, store that in HBase, and parse/deserialize it again when reading.
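A minimal sketch of that workaround for the MAP case (a hypothetical UDF, not part of the connector; assumes Jackson is on the classpath and a STRING qualifier on the HBase side):

```
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.functions.ScalarFunction;

import java.util.Map;

// Hypothetical scalar UDF: flatten a MAP<STRING, STRING> column into a JSON
// string so it fits a plain STRING qualifier in HBase; parse it back on read.
public class MapToJson extends ScalarFunction {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public String eval(
            @DataTypeHint("MAP<STRING, STRING>") Map<String, String> value) {
        try {
            return value == null ? null : MAPPER.writeValueAsString(value);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Failed to serialize MAP to JSON", e);
        }
    }
}
```

Registered with tableEnv.createTemporarySystemFunction("MAP_TO_JSON", MapToJson.class), it can flatten the MAP column in the INSERT into the HBase table; reading back is the mirror image, parsing the JSON string again.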
> 
> On Mon, Apr 1, 2024 at 7:38 PM 王广邦 <wangguangb...@foxmail.com> wrote:
>> 
>> Why doesn't the HBase SQL connector (flink-connector-hbase_2.11) support the data types ARRAY, MAP / MULTISET, and ROW?
>> https://nightlies.apache.org/flink/flink-docs-release-1.11/zh/dev/table/connectors/hbase.html
>> Also, what is the recommended approach for handling these three types?
>> 
>> 
>> 
>> 
>> Sent from my iPhone
> Begin forwarded message:
> 
> From: gongzhongqiang <gongzhongqi...@apache.org>
> Subject: Re: completed flink jobs disappear after a while
> Date: April 10, 2024 at 11:00:44 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi:
> 
> If you want to keep finished jobs around long-term, the History Server is recommended:
> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/deployment/config/#history-server
> 
> Best,
> 
> Zhongqiang Gong
> 
> ha.fen...@aisino.com <ha.fen...@aisino.com> wrote on Tue, Apr 9, 2024 at 10:39:
> 
>> In the web UI, finished jobs can be seen under Completed Jobs, but when I check again a while later the data is gone. Is there some configuration that deletes them automatically?
>> 
> Begin forwarded message:
> 
> From: "jh...@163.com" <jh...@163.com>
> Subject: Re: Re: Unsubscribe
> Date: April 18, 2024 at 16:17:17 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> 
> 
> 
> jh...@163.com
> 
> From: 我的意中人是个盖世英雄
> Date: 2024-04-18 16:03
> To: user-zh
> Subject: Re: Unsubscribe
> Unsubscribe
> 
> 
> 
> --- Original message ---
> From: "willluzheng" <willluzh...@163.com>
> Date: Sunday, April 14, 2024, 15:46
> To: "user-zh" <user-zh@flink.apache.org>;
> Cc: "user-zh" <user-zh@flink.apache.org>;
> Subject: Re: Unsubscribe
> 
> 
> Unsubscribe
> ---- Original message ----
> | From | jimandlice |
> | Date | April 13, 2024 19:50 |
> | To | user-zh |
> | Subject | Unsubscribe |
> Unsubscribe
> 
> 
> 
> 
> jimandlice
> jimandl...@163.com
> 
> 
> 
> Begin forwarded message:
> 
> From: "ha.fen...@aisino.com" <ha.fen...@aisino.com>
> Subject: Problem configuring the hadoop dependency
> Date: April 2, 2024 at 10:24:34 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> 1. In the development environment, with the hadoop-client dependency added, checkpoints can reach the hdfs path.
> 2. With flink 1.19.0 and hadoop 3.3.1, submitting the jar to a standalone flink cluster reports the following error:
> Caused by: java.lang.RuntimeException: 
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
> at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
> at 
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> Caused by: org.apache.flink.runtime.client.JobInitializationException: Could 
> not start the JobMaster.
> at 
> org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
> at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
> at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.util.FlinkRuntimeException: Failed to create checkpoint 
> storage at checkpoint coordinator side.
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
> ... 3 more
> Caused by: org.apache.flink.util.FlinkRuntimeException: Failed to create 
> checkpoint storage at checkpoint coordinator side.
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:364)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:273)
> at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.enableCheckpointing(DefaultExecutionGraph.java:503)
> at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder.buildGraph(DefaultExecutionGraphBuilder.java:334)
> at 
> org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:173)
> at 
> org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:381)
> at 
> org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:224)
> at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:140)
> at 
> org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:162)
> at 
> org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:121)
> at 
> org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:379)
> at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:356)
> at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:128)
> at 
> org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:100)
> at 
> org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
> ... 3 more
> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: 
> Could not find a file system implementation for scheme 'hdfs'. The scheme is 
> not directly supported by Flink and no Hadoop file system to support this 
> scheme could be loaded. For a full list of supported file systems, please see 
> https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
> at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:542)
> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:408)
> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:279)
> at 
> org.apache.flink.runtime.state.filesystem.FsCheckpointStorageAccess.<init>(FsCheckpointStorageAccess.java:88)
> at 
> org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage.createCheckpointStorage(FileSystemCheckpointStorage.java:336)
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:357)
> ... 18 more
> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: 
> Hadoop is not in the classpath/dependencies.
> at 
> org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:55)
> at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:525)
> 
> HADOOP_CLASSPATH=$HADOOP_HOME/etc/hadoop is already set.
> Is this configuration correct? What else needs to be configured?
> Begin forwarded message:
> 
> From: Biao Geng <biaoge...@gmail.com>
> Subject: Re: Unsubscribe
> Date: April 2, 2024 at 10:18:25 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> Hi,
> 
> To unsubscribe, send an email with any content to user-zh-unsubscr...@flink.apache.org
> <user-unsubscr...@flink.apache.org>.
> 
> 
> Best,
> Biao Geng
> 
> 戴少 <dsq...@126.com> wrote on Mon, Apr 1, 2024 at 11:09:
> 
>> Unsubscribe
>> 
>> --
>> 
>> Best Regards,
>> 
>> 
>> 
>> 
>> ---- Original message ----
>> | From | wangfengyang<wangfy2...@qq.com.invalid> |
>> | Date | March 22, 2024 17:28 |
>> | To | user-zh <user-zh@flink.apache.org> |
>> | Subject | Unsubscribe |
>> Unsubscribe
> Begin forwarded message:
> 
> From: Zhanghao Chen <zhanghao.c...@outlook.com>
> Subject: Re: Re:Re: Re: 1.19 custom data source
> Date: April 1, 2024 at 13:41:57 GMT+8
> To: "user-zh@flink.apache.org" <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> To unsubscribe, send an email with any content to user-zh-unsubscr...@flink.apache.org. You can refer to [1] for managing your mailing list subscriptions.
> 
> [1]
> https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8
> 
> Best,
> Zhanghao Chen
> ________________________________
> From: 熊柱 <18428358...@163.com>
> Sent: Monday, April 1, 2024 11:14
> To: user-zh@flink.apache.org <user-zh@flink.apache.org>
> Subject: Re:Re: Re: 1.19 custom data source
> 
> Unsubscribe
> 
> On 2024-03-28 19:56:06, "Zhanghao Chen" <zhanghao.c...@outlook.com> wrote:
>> If you are using the DataStream API, you could also check whether the newly added DataGen Connector [1] directly covers your test-data generation needs.
>> 
>> 
>> [1] 
>> https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/datastream/datagen/
>> 
>> Best,
>> Zhanghao Chen
>> ________________________________
>> From: ha.fen...@aisino.com <ha.fen...@aisino.com>
>> Sent: Thursday, March 28, 2024 15:34
>> To: user-zh <user-zh@flink.apache.org>
>> Subject: Re: Re: 1.19 custom data source
>> 
>> What I am asking is: if I need to implement the Source interface, how do I write it? Is there a concrete example that implements a custom class generating data at a given rate?
>> 
>> From: gongzhongqiang
>> Date: 2024-03-28 15:05
>> To: user-zh
>> Subject: Re: 1.19 custom data source
>> Hi:
>> 
>> flink 1.19 only marks SourceFunction as deprecated; it will be removed in a future release. So to keep your
>> application compatible with flink versions going forward, you can reimplement those SourceFunctions with the new Source API.
>> 
>> ha.fen...@aisino.com <ha.fen...@aisino.com> wrote on Thu, Mar 28, 2024 at 14:18:
>> 
>>> 
>>> I used to extend SourceFunction to implement simple methods that generate data automatically. In 1.19 it is marked as deprecated, and apparently the Source interface should be used instead, which is completely different from the old SourceFunction. How should I build a custom data source for testing now?
>>> 
> Begin forwarded message:
> 
> From: 戴少 <dsq...@126.com>
> Subject: Re: Unsubscribe
> Date: April 1, 2024 at 11:10:35 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Cc: "user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com" 
> <user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163....@flink.apache.org>,
>  user-zh-subscribe <user-zh-subscr...@flink.apache.org>, user-zh 
> <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> Unsubscribe
> 
> --
> 
> Best Regards,
> 
> 
> 
> 
> ---- Original message ----
> | From | 李一飞<kurisu...@163.com> |
> | Date | March 14, 2024 00:09 |
> | To | 
> user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163.com<user-zh-sc.1618840368.ibekedaekejgeemingfn-kurisu_li=163....@flink.apache.org>,
> user-zh-subscribe <user-zh-subscr...@flink.apache.org>,
> user-zh <user-zh@flink.apache.org> |
> | Subject | Unsubscribe |
> Unsubscribe
> 
> 
> Begin forwarded message:
> 
> From: Shawn Huang <hx0...@gmail.com>
> Subject: Re: OOM during the mysql full snapshot phase
> Date: April 9, 2024 at 11:28:57 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> From the error message, the JM heap memory is insufficient; you can try increasing the JM memory. One possible cause is that the mysql table produces many splits in the full snapshot phase, which makes the SourceEnumerator state large.
> 
> Best,
> Shawn Huang
> 
> 
> wyk <wyk118...@163.com> wrote on Mon, Apr 8, 2024 at 17:46:
> 
> 
> Hi developers:
>         flink version: 1.14.5
>         flink-cdc version: 2.2.0
> 
> When flink-cdc-mysql collects the full snapshot, checkpoints are taken during the full phase, and the checkpoint runs into an OOM. Is there any way around this?
>        The specific error is shown in the attached text and the image below:
> 
> 
> 
> Begin forwarded message:
> 
> From: zjw <co_...@163.com>
> Subject: Unsubscribe
> Date: April 1, 2024 at 11:05:09 GMT+8
> To: user-zh@flink.apache.org
> Reply-To: user-zh@flink.apache.org
> 
> 
> Begin forwarded message:
> 
> From: "ha.fen...@aisino.com" <ha.fen...@aisino.com>
> Subject: completed flink jobs disappear after a while
> Date: April 9, 2024 at 10:38:07 GMT+8
> To: user-zh <user-zh@flink.apache.org>
> Reply-To: user-zh@flink.apache.org
> 
> In the web UI, finished jobs can be seen under Completed Jobs, but when I check again a while later the data is gone. Is there some configuration that deletes them automatically?
