Re: Backpressure handling in FileSource APIs - Flink 1.16

2023-05-24 Thread Kamal Mittal
Hello Shammon, Can you please point out the classes where the slow-down logic for "FileSource" is placed? I just want to understand it better and try it at my end by running various perf runs, and also apply changes in my application if any. Rgds, Kamal On Wed, May 24, 2023 at 11:41 AM

Re: Reply: Conditions for a Flink window to trigger computation

2023-05-24 Thread lxk
Hello, you can start with the official documentation's introduction to event time and watermarks: https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/concepts/time/ If you sent multiple records but they all carry the same data, the watermark does not advance and the window will not fire. On 2023-05-25 10:00:36, "小昌同学" wrote: >Yes, I sent a lot of data and found that the window still did not fire > > >| | >小昌同学 >| >| >ccc0606fight...@163.com >| > Original message >| From |

Reply: Conditions for a Flink window to trigger computation

2023-05-24 Thread 小昌同学
Yes, I sent a lot of data and found that the window still did not fire | | 小昌同学 | | ccc0606fight...@163.com | Original message | From | yidan zhao | | Date | May 25, 2023 09:59 | | To | | | Subject | Re: Conditions for a Flink window to trigger computation | If you only sent one record, the watermark will not advance and the window computation will not be triggered. You need more data. 小昌同学 wrote on Thu, May 25, 2023 at 09:32: Teachers, I would like to ask about when a Flink event-time window fires;

Re: Conditions for a Flink window to trigger computation

2023-05-24 Thread yidan zhao
If you only sent one record, the watermark will not advance and the window computation will not be triggered. You need more data. 小昌同学 wrote on Thu, May 25, 2023 at 09:32: > > Teachers, I would like to ask about when a Flink event-time window fires; > The window I use is TumblingEventTimeWindows(Time.minutes(1L)), the timestamps come from System.currentTimeMillis(), and the watermark is 2 seconds, > but after I sent one record, even after 5 minutes the window never triggered a computation. Could you please help me find the problem in the program: > The relevant code and sample data are as follows: > | >

Conditions for a Flink window to trigger computation

2023-05-24 Thread 小昌同学
Teachers, I would like to ask about when a Flink event-time window fires. The window I use is TumblingEventTimeWindows(Time.minutes(1L)), the timestamps come from System.currentTimeMillis(), and the watermark is 2 seconds, but after I sent one record, even after 5 minutes the window never triggered a computation. Could you please help me find the problem in the program? The relevant code and sample data are as follows: | package job; import bean.MidInfo3; import bean.Result; import bean2.BaseInfo2; import
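A minimal sketch of the event-time setup the replies describe, with a hypothetical Event POJO standing in for the poster's bean classes; the field names, placeholder source, key, and sum aggregation are invented for illustration and are not the poster's actual job:

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventTimeWindowSketch {

    // Hypothetical event type standing in for the poster's bean classes.
    public static class Event {
        public String key;
        public long ts;    // event timestamp in epoch millis
        public long value;

        public Event() {}

        public Event(String key, long ts, long value) {
            this.key = key;
            this.ts = ts;
            this.value = value;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder bounded source. The second record's timestamp lies past the end of the
        // first 1-minute window plus the 2-second out-of-orderness, so it pushes the watermark
        // far enough for the [00:00, 01:00) window to fire. In an unbounded job, a single
        // record (or many records with the same timestamp) never advances the watermark;
        // this bounded test source additionally emits a final watermark at end of input.
        DataStream<Event> events = env.fromElements(
                new Event("k", 1_000L, 1L),
                new Event("k", 63_000L, 1L));

        events
            // The watermark trails the largest seen event timestamp by 2 seconds.
            .assignTimestampsAndWatermarks(
                WatermarkStrategy.<Event>forBoundedOutOfOrderness(Duration.ofSeconds(2))
                    .withTimestampAssigner((event, recordTs) -> event.ts))
            .keyBy(e -> e.key)
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            .sum("value")
            .print();

        env.execute("event-time window sketch");
    }
}
```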

Question about how to set watermarks with the Table API or SQL?

2023-05-24 Thread ZhaoShuKang
Hello everyone, I have recently been working on querying Hive from Flink and need windows to process the data, but while writing the code I could not set a watermark. The official documentation lists three ways to set event time with Table API & SQL: 1. define it in DDL; 2. define it during the DataStream-to-Table conversion; 3. define it with a TableSource. However, I am using HiveCatalog to query Hive, and it seems none of these three approaches applies. So I would like to ask whether there is a way to mark a column of the Table directly as the event-time attribute and set a watermark on it. Also, my first version of the code converted the Table to a DataStream and then set the watermark and window, but the conversion step was very time-consuming, and in the source code
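Of the three approaches listed in the message, the DDL route is the most compact to illustrate. A minimal sketch assuming a table created through Flink DDL with a datagen connector; the table name, columns, and connector are placeholders, and this does not address the HiveCatalog-specific limitation being asked about:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DdlWatermarkSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // Approach 1 from the message: declare the event-time column and watermark in DDL.
        tEnv.executeSql(
            "CREATE TABLE orders (" +
            "  order_id STRING," +
            "  amount DOUBLE," +
            "  ts TIMESTAMP(3)," +
            "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND" +  // ts becomes the event-time attribute
            ") WITH (" +
            "  'connector' = 'datagen'" +
            ")");

        // ts can now drive event-time windows; this query runs continuously on the
        // unbounded datagen source and is only meant to show the syntax.
        tEnv.executeSql(
            "SELECT window_start, window_end, SUM(amount) AS total " +
            "FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTE)) " +
            "GROUP BY window_start, window_end")
            .print();
    }
}
```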

Web UI doesn't show up in Flink on YARN (Flink 1.17)

2023-05-24 Thread tan yao
Hi all, I found a strange thing with Flink 1.17 deployed on YARN (CDH 6.x): the Flink web UI does not show up from the YARN "ApplicationMaster" web link, even when the JobManager IP is typed directly into the browser. When I run the WordCount application from the Flink 1.17 examples and click the YARN "ApplicationMaster" link

Re: Why can't I run more than 19 tasks?

2023-05-24 Thread Shammon FY
Hi Hemi, There may be two reasons that I can think of: 1. The number of connections exceeds the MySQL limit; you can check the options in my.cnf for your MySQL server and increase the max connections. 2. Connection timeout for the MySQL client; you can try adding 'autoReconnect=true' to the connection
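A small sketch of where the two suggestions would land for a plain JDBC client; the host, database, and credentials are placeholders, and the exact place to set the URL depends on which MySQL connector the job actually uses:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlConnectionCheckSketch {
    public static void main(String[] args) throws Exception {
        // Suggestion 2: append autoReconnect=true to the JDBC URL the MySQL client uses.
        String url = "jdbc:mysql://mysql-host:3306/mydb?autoReconnect=true";

        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             // Suggestion 1: check the server-side limit; raise max_connections in my.cnf if needed.
             ResultSet rs = stmt.executeQuery("SHOW VARIABLES LIKE 'max_connections'")) {
            while (rs.next()) {
                System.out.println(rs.getString("Variable_name") + " = " + rs.getString("Value"));
            }
        }
    }
}
```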

Why can't I run more than 19 tasks?

2023-05-24 Thread Hemi Grs
Hey everybody, I have a problem with my Apache Flink job: I am synchronizing from MySQL to Elasticsearch, but it seems that I can't run more than 19 tasks. It gave me this error: -- Caused by: org.apache.flink.util.FlinkRuntimeException: org.apache.flink.util.FlinkRuntimeException:

Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-24 Thread Mason Chen
Hi Hatem, The reason for setting different client ids is due to Kafka client metrics conflicts; the issue is documented here: https://nightlies.apache.org/flink/flink-docs-stable/docs/connectors/datastream/kafka/#kafka-consumer-metrics. I think that the warning log is benign if you are
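A minimal sketch of giving each Kafka source in a job its own client id prefix via the KafkaSource builder; the broker address, topics, and group id are placeholders:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;

public class KafkaClientIdPrefixSketch {
    public static void main(String[] args) {
        // Give each KafkaSource in the job its own client id prefix so the per-client
        // Kafka metrics that Flink registers do not collide.
        KafkaSource<String> sourceA = KafkaSource.<String>builder()
            .setBootstrapServers("broker:9092")
            .setTopics("topic-a")
            .setGroupId("my-consumer-group")
            .setClientIdPrefix("my-job-topic-a")
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

        KafkaSource<String> sourceB = KafkaSource.<String>builder()
            .setBootstrapServers("broker:9092")
            .setTopics("topic-b")
            .setGroupId("my-consumer-group")
            .setClientIdPrefix("my-job-topic-b")
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();
    }
}
```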

Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-24 Thread Hatem Mostafa
Hello Martijn, Yes, checkpointing is enabled and the offsets are committed without a problem. I think I might have figured out the answer to my second question based on my understanding of this code

First Flink Embedded Stateful Functions taking long to get invoked

2023-05-24 Thread Chinthakrindi, Rakesh
Hi team, We are exploring Flink Stateful Functions for one of our use cases. As part of a feasibility test, we are doing load testing to determine the time spent by the Flink app orchestrating the functions (E2E request latency -

Re: Kafka Quotas & Consumer Group Client ID (Flink 1.15)

2023-05-24 Thread Martijn Visser
Hi Hatem, Could it be that you don't have checkpointing enabled? Flink only commits its offset when a checkpoint has been completed successfully, as explained on https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/kafka/#consumer-offset-committing Best regards,
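A minimal sketch of enabling checkpointing so the Kafka source commits its offsets when a checkpoint completes; the interval and the placeholder pipeline are arbitrary:

```java
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EnableCheckpointingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka offsets are only committed back to the broker when a checkpoint completes,
        // so without this the consumer group shows no committed offsets.
        env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE); // checkpoint every 60s

        // Placeholder pipeline; the real job would build its KafkaSource here.
        env.fromElements(1, 2, 3).print();

        env.execute("enable checkpointing sketch");
    }
}
```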

Re: Reading KafkaSource state from a savepoint using the State Processor API

2023-05-24 Thread Hang Ruan
Hi, Charles, I usually read the state in debug mode. I always set a breakpoint at the return statement in `SavepointReader#read`. Then I can find the state I need in the field `SavepointMetadataV2 savepointMetadata`. Finally I can deserialize the state bytes with
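A minimal sketch of loading a savepoint with the State Processor API, which is where the breakpoint described above would be hit; the savepoint path, operator uid, and reader function are made-up placeholders:

```java
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.state.api.SavepointReader;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SavepointReadSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A breakpoint on the return statement inside SavepointReader#read (as described above)
        // exposes the SavepointMetadataV2 field for inspection in the debugger.
        // The savepoint path below is a placeholder.
        SavepointReader savepoint = SavepointReader.read(
            env, "file:///tmp/savepoints/savepoint-xxxx", new HashMapStateBackend());

        // From here, state can be read with a matching reader function, e.g. (hypothetical uid
        // and reader): savepoint.readKeyedState("my-operator-uid", new MyKeyedStateReaderFunction());
    }
}
```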

Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.5.0 released

2023-05-24 Thread huang huang
Unsubscribe Maximilian Michels wrote on Tue, May 23, 2023 at 10:12: > Niceee. Thanks for managing the release, Gyula! > > -Max > > On Wed, May 17, 2023 at 8:25 PM Márton Balassi > wrote: > > > > Thanks, awesome! :-) > > > > On Wed, May 17, 2023 at 2:24 PM Gyula Fóra wrote: > >> > >> The Apache Flink community is

Unsubscribe

2023-05-24 Thread 梁猛
Unsubscribe | | 梁猛 | | cdt...@163.com |

Re: Backpressure handling in FileSource APIs - Flink 1.16

2023-05-24 Thread Kamal Mittal
Thanks, Shammon, for the clarification. On Wed, May 24, 2023 at 11:01 AM Shammon FY wrote: > Hi Kamal, > > The source will slow down when there is backpressure in the Flink job, you > can refer to docs [1] and [2] to get more detailed information about > the backpressure mechanism. > > Currently there's