Re: Read configmap data in FlinkDeployment

2024-03-18 Thread Yaroslav Tkachenko
Hi Suyash, You can expose your configmap values as environment variables using the *podTemplate* parameter (see: https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.8/docs/custom-resource/reference/). This can be configured individually for TaskManager, JobManager or for
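
For illustration, a minimal FlinkDeployment sketch of this approach (the ConfigMap name "my-flink-config" is hypothetical; the main container should keep the name flink-main-container so the operator can merge the template):

    apiVersion: flink.apache.org/v1beta1
    kind: FlinkDeployment
    metadata:
      name: example
    spec:
      podTemplate:
        spec:
          containers:
            - name: flink-main-container
              envFrom:
                - configMapRef:
                    name: my-flink-config   # hypothetical ConfigMap holding the variables

The same podTemplate block can instead be placed under the jobManager or taskManager sections to scope it to a single component.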

Read configmap data in FlinkDeployment

2024-03-18 Thread Suyash Jaiswal
Hi All, I was trying to deploy Apache Flink in my EKS cluster. I can deploy it successfully, but now I want to expose some variables via the ConfigMap; these variables are used by my sessionjob. I need help with the YAML structure for how/where to expose the ConfigMap variables to make them visible

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Zakelly Lan
Congratulations! Thanks Lincoln, Yun, Martijn and Jing for driving this release. Thanks everyone involved. Best, Zakelly On Mon, Mar 18, 2024 at 5:05 PM weijie guo wrote: > Congratulations! > > Thanks release managers and all the contributors involved. > > Best regards, > > Weijie > > >

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread weijie guo
Congratulations! Thanks release managers and all the contributors involved. Best regards, Weijie Leonard Xu wrote on Mon, Mar 18, 2024 at 16:45: > Congratulations, thanks release managers and all involved for the great > work! > > > Best, > Leonard > > > On Mar 18, 2024 at 4:32 PM, Jingsong Li wrote: > > > >

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Leonard Xu
Congratulations, thanks release managers and all involved for the great work! Best, Leonard > On Mar 18, 2024 at 4:32 PM, Jingsong Li wrote: > > Congratulations! > > On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote: >> >> Congratulations, thanks for the great work! >> >> Best, >>

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Jark Wu
Congrats! Thanks Lincoln, Jing, Yun and Martijn for driving this release. Thanks to all who were involved in this release! Best, Jark On Mon, 18 Mar 2024 at 16:31, Rui Fan <1996fan...@gmail.com> wrote: > Congratulations, thanks for the great work! > > Best, > Rui > > On Mon, Mar 18, 2024 at 4:26 PM Lincoln

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Jingsong Li
Congratulations! On Mon, Mar 18, 2024 at 4:30 PM Rui Fan <1996fan...@gmail.com> wrote: > > Congratulations, thanks for the great work! > > Best, > Rui > > On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee wrote: >> >> The Apache Flink community is very happy to announce the release of Apache >> Flink

Re: [ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Rui Fan
Congratulations, thanks for the great work! Best, Rui On Mon, Mar 18, 2024 at 4:26 PM Lincoln Lee wrote: > The Apache Flink community is very happy to announce the release of Apache > Flink 1.19.0, which is the first release for the Apache Flink 1.19 series. > > Apache Flink® is an open-source

[ANNOUNCE] Apache Flink 1.19.0 released

2024-03-18 Thread Lincoln Lee
The Apache Flink community is very happy to announce the release of Apache Flink 1.19.0, which is the first release for the Apache Flink 1.19 series. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming

Re: Unaligned checkpoint blocked by long Async operation

2024-03-17 Thread Zakelly Lan
I agree. Also create https://issues.apache.org/jira/browse/FLINK-34704 for tracking and further discussion. Best, Zakelly On Fri, Mar 15, 2024 at 2:59 PM Gyula Fóra wrote: > Posting this to dev as well... > > Thanks Zakelly, > Sounds like a solution could be to add a new different version of

Re:Help with using multiple Windows in the Table API

2024-03-17 Thread Xuyang
Hi, Nick. Can you try `cascading window aggregation` here[1] if it meets your needs? [1] https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/queries/window-agg/#cascading-window-aggregation -- Best! Xuyang At 2024-03-16 06:21:52, "Nick Hecht" wrote:
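
As a rough sketch of the cascading pattern (table and column names here are made up; the key point is that the inner window's window_time is grouped on and re-used as the time attribute of the outer window):

    -- 1-minute pre-aggregation (Bid, bidtime, price are hypothetical)
    CREATE VIEW window1 AS
    SELECT window_time AS rowtime, SUM(price) AS partial_price
    FROM TABLE(TUMBLE(TABLE Bid, DESCRIPTOR(bidtime), INTERVAL '1' MINUTES))
    GROUP BY window_start, window_end, window_time;

    -- roll the 1-minute results up into 5-minute windows
    SELECT window_start, window_end, SUM(partial_price) AS total_price
    FROM TABLE(TUMBLE(TABLE window1, DESCRIPTOR(rowtime), INTERVAL '5' MINUTES))
    GROUP BY window_start, window_end;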

Re: FlinkSource to read iceberg table in Batch mode

2024-03-16 Thread Péter Váry
Hi Chetas, How sure are you that your job has no other unbounded source? I have seen this working in several cases, but if you could help with providing a short executable example, I would like to take a look. Thanks, Peter On Wed, Mar 13, 2024, 20:55 Chetas Joshi wrote: > Hello, > > I am

Help with using multiple Windows in the Table API

2024-03-15 Thread Nick Hecht
Hello, I'm working on a pyflink application (1.15 deployed to aws). I have made a few applications and I feel like I'm making good progress, but I have a different problem that I think requires that I have multiple stages each with their own window. I'm not sure how I can properly pass time into

Re: Unaligned checkpoint blocked by long Async operation

2024-03-15 Thread Gyula Fóra
Posting this to dev as well... Thanks Zakelly, Sounds like a solution could be to add a new different version of yield that would actually yield to the checkpoint barrier too. That way operator implementations could decide whether any state modification may or may not have happened and can

Re: Unaligned checkpoint blocked by long Async operation

2024-03-14 Thread Zakelly Lan
Hi Gyula, Processing checkpoint halfway through `processElement` is problematic. The current element will not be included in the input in-flight data, and we cannot assume it has taken effect on the state by user code. So the best way is to treat `processElement` as an 'atomic' operation. I guess

Re: Urgent: when will [FLINK-34170] be fixed?

2024-03-14 Thread Benchao Li
FLINK-34170 is only a UI display issue and does not affect actual execution. The problem of pushed-down filters not taking effect for JDBC Connector lookup tables has already been fixed in FLINK-33365; the latest JDBC Connector release includes this fix, so please give it a try. casel.chen wrote on Fri, Mar 15, 2024 at 10:39: > We recently hit the same problem described in FLINK-34170 when developing a Flink SQL job with Flink 1.17.1 that joins a lookup table on a composite primary key. When, and in which version, will this major issue be fixed? Thanks! > > select xxx from

Urgent: when will [FLINK-34170] be fixed?

2024-03-14 Thread casel.chen
We recently hit the same problem described in FLINK-34170 when developing a Flink SQL job with Flink 1.17.1 that joins a lookup table on a composite primary key. When, and in which version, will this major issue be fixed? Thanks! select xxx from kafka_table as kt left join phoenix_table FOR SYSTEM_TIME AS OF phoenix_table.proctime as pt on kt.trans_id=pt.trans_id and pt.trans_date = DATE_FORMAT(CURRENT_TIMESTAMP,'MMdd');

Re: Question around manually setting Flink jobId

2024-03-14 Thread Asimansu Bera
Hello Venkat, There are a few ways to get the JobID from the client side. The JobID is alphanumeric, e.g. 9eec4d17246b5ff965a43082818a3336. When you submit the job using the flink command line client, the JobID is returned as "Job has been submitted with JobID 9eec4d17246b5ff965a43082818a3336". 1. using below
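
If the job is submitted programmatically rather than via the CLI, one additional option (a minimal sketch, not from the original mail) is the JobClient returned by executeAsync():

    import org.apache.flink.core.execution.JobClient;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // ... build the pipeline ...
    JobClient client = env.executeAsync("my-job");   // non-blocking submission
    System.out.println("Submitted with JobID: " + client.getJobID());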

Re: ARM flink docker image

2024-03-14 Thread Sharil Shafie
The docker image is built for both the linux/amd64 and linux/arm64/v8 platforms. The image you'll pull is based on the platform or the flag you set. Regards. On Fri, 15 Mar 2024, 2:01 am Yang LI, wrote: > Dear Flink Community, > > Do you know if we have somewhere a tested ARM based flink docker

Re: Question around manually setting Flink jobId

2024-03-14 Thread Venkatakrishnan Sowrirajan
Junrui, Thanks for your answer to the above questions. Allison and I work together on Flink. One of the main questions is: is there an easy way to get the Flink "JobID" from the Flink client side? Without the "JobID", users have no way to access the Flink HistoryServer other than searching through

Broadcast state not getting filled with all the data in processElement

2024-03-14 Thread Nabil Hadji
I'm using the Flink 1.18.1 API on Java 11 and I'm trying to use a BroadcastProcessFunction to filter a Products DataStream with an authorized-brands DataStream as the broadcast side. My first products DataStream contains different products that have a brand field, and my second brands DataStream contains only

ARM flink docker image

2024-03-14 Thread Yang LI
Dear Flink Community, Do you know if we have a tested ARM-based flink docker image somewhere? I think we can already run locally on an ARM macbook. But we don't have an ARM-specific docker image yet. Regards, Yang LI

Re: Unaligned checkpoint blocked by long Async operation

2024-03-14 Thread Gyula Fóra
Thank you for the detailed analysis Zakelly. I think we should consider whether yield should process checkpoint barriers because this puts quite a serious limitation on the unaligned checkpoints in these cases. Do you know what is the reason behind the current priority setting? Is there a problem

Re: Re: Question about join conditions in the lookup execution plan when joining a dimension table in Flink SQL

2024-03-14 Thread Jane Chan
Hi iasiuide, thanks for the question. To answer the last question first: "Some predicates on the dimension table end up as join conditions and some do not — is there a rule behind this?" > The ON condition of a lookup join goes through a series of rewrites during optimization; here is a brief summary of the parts that affect lookup and where. 1. In the logical phase, FlinkFilterJoinRule splits the ON condition into single-side predicates (left/right table) and two-side predicates. **Single-side filters are pushed down before the join node whenever possible** (which means an extra Filter node may be generated);

Re: Unaligned checkpoint blocked by long Async operation

2024-03-14 Thread Zakelly Lan
Hi Gyula, Well I tried your example in local mini-cluster, and it seems the source can take checkpoints but it will block in the following AsyncWaitOperator. IIUC, the unaligned checkpoint barrier should wait until the current `processElement` finishes its execution. In your example, the element

flink k8s operator chk config interval bug.inoperative

2024-03-14 Thread kcz
kcz 573693...@qq.com

Unaligned checkpoint blocked by long Async operation

2024-03-14 Thread Gyula Fóra
Hey all! I encountered a strange and unexpected behaviour when trying to use unaligned checkpoints with AsyncIO. If the async operation queue is full and backpressures the pipeline completely, then unaligned checkpoints cannot be completed. To me this sounds counterintuitive because one of the

Re: Question about parallelism and partition count when writing to Kafka from Flink

2024-03-14 Thread 熊柱
Unsubscribe. On 2024-03-13 15:25:27, "chenyu_opensource" wrote: > Hello: > When Flink writes data to Kafka (Kafka as the sink) and the topic partition count (set to 60) is smaller than the configured parallelism (set to 300), do the tasks write to these partitions in a round-robin way, and does this affect write throughput (is there an overhead from iterating over partitions)? > In that case, if the topic partition count is increased (to 200, or directly to 300), will write throughput improve noticeably? > > Is there related source code I can look at? > Looking forward to a reply, best regards, thanks!

Re: Question around manually setting Flink jobId

2024-03-13 Thread Junrui Lee
Hi Allison, The PIPELINE_FIXED_JOB_ID configuration option is not intended for public use. IIUC, the only way to manually specify the jobId is submitting a job through the JAR RUN REST API, where you can provide the jobId in the request body (
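
A hedged sketch of such a request (the jar id, host, and job id are placeholders; check the exact field names against the REST API reference for your Flink version):

    curl -X POST http://localhost:8081/jars/<jar-id>/run \
      -H "Content-Type: application/json" \
      -d '{"entryClass": "com.example.MyJob", "jobId": "9eec4d17246b5ff965a43082818a3336"}'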

Question around manually setting Flink jobId

2024-03-13 Thread Allison Chang via user
Hi, I was wondering if there is any way to manually set the jobID for the jobGraph. I noticed that there is a configuration for PIPELINE_FIXED_JOB_ID, but there doesn't seem to be a way to set it via config with the StreamingJobGraphGenerator.java. Would appreciate any assistance if anyone has

FlinkSource to read iceberg table in Batch mode

2024-03-13 Thread Chetas Joshi
Hello, I am using iceberg-flink-runtime lib (1.17-1.4.0) and running the following code to read an iceberg table in BATCH mode. var source = FlinkSource .forRowData() .streaming(false) .env(execEnv) .tableLoader(tableLoader) .limit((long) operation.getLimit())

Unsubscribe

2024-03-13 Thread 李一飞
Unsubscribe

Re: How to query the full content of a CREATE TABLE statement

2024-03-13 Thread Yubin Li
The screenshot in my previous mail was incomplete [image: Screenshot 2024-03-13 103802.png] Yubin Li wrote on Wed, Mar 13, 2024 at 17:44: > Use the SHOW CREATE TABLE statement > [image: Screenshot 2024-03-13 103802.png] > > ha.fen...@aisino.com wrote on Tue, Mar 12, 2024 at 15:37: > >> For example >> CREATE TABLE Orders_in_kafka ( >> -- add a watermark definition >> WATERMARK FOR

Re: How to query the full content of a CREATE TABLE statement

2024-03-13 Thread Yubin Li
Use the SHOW CREATE TABLE statement [image: Screenshot 2024-03-13 103802.png] ha.fen...@aisino.com wrote on Tue, Mar 12, 2024 at 15:37: > For example > CREATE TABLE Orders_in_kafka ( > -- add a watermark definition > WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND > ) WITH ( > 'connector' = 'kafka', > ... > ) >
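
In other words, for the table in this thread the statement would simply be:

    SHOW CREATE TABLE Orders_in_kafka;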

Re: Question about parallelism and partition count when writing to Kafka from Flink

2024-03-13 Thread Zhanghao Chen
Hi, which partition a record is written to depends on the Kafka sink's Partitioner [1]. By default Kafka's Default Partitioner is used, which implements a so-called sticky partitioning strategy: records with a key are routed by hashing the key, while records without a key are written to a randomly chosen partition that the producer "sticks" to for a while to improve batching; once the batch is written it switches to another random partition, balancing batching efficiency against even distribution. See [2] for details. So under the default configuration, the iteration overhead and reduced batching you describe do not occur; once the Kafka

Re: Flink Batch Execution Mode

2024-03-13 Thread irakli.keshel...@sony.com
Hi Feng, I'm using flink-connector-kafka 3.0.1-1.17. I see that 1.17 is affected, but the ticket is marked as fixed so I'm not sure if that is actually the issue. Best, Irakli From: Feng Jin Sent: 12 March 2024 18:28 To: Keshelava, Irakli Cc:

Re:Flink 1.18 with Java 17 production version release

2024-03-13 Thread Xuyang
Hi, Meng. I think you can follow this jira[1] and ping the creator about the latest progress. [1] https://issues.apache.org/jira/browse/FLINK-34491 -- Best! Xuyang At 2024-03-13 04:02:09, "Meng, Ping via user" wrote: Hi, The latest Flink 1.18.1 with Java 17 support is in beta

Re: How can a Flink cluster write its logs directly into Elasticsearch?

2024-03-13 Thread Jiabao Sun
A relatively simple approach is to run a Filebeat process that collects jobmanager.log and taskmanager.log. Best, Jiabao kellygeorg...@163.com wrote on Wed, Mar 13, 2024 at 15:30: > Is there a convenient and quick solution? > > >

Re: What is the difference between executing a single INSERT statement and executing multiple INSERT statements at once

2024-03-13 Thread Xuyang
Hi, fengqi. The StatementSet approach above produces only one job, and that job can reuse the source (topology: 1 source -> 2 calc_sinks), so the source only needs to be read once. The approach below, calling executeSql twice, produces two jobs that are completely independent. -- Best! Xuyang At 2024-03-13 12:26:05, "ha.fen...@aisino.com" wrote: >StatementSet stmtSet = tEnv.createStatementSet();
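
A minimal sketch of the StatementSet pattern being compared here (source_t, sink_a and sink_b are hypothetical tables registered beforehand):

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.StatementSet;
    import org.apache.flink.table.api.TableEnvironment;

    TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
    StatementSet stmtSet = tEnv.createStatementSet();
    // both INSERTs are optimized into a single job that can read the source once
    stmtSet.addInsertSql("INSERT INTO sink_a SELECT * FROM source_t");
    stmtSet.addInsertSql("INSERT INTO sink_b SELECT * FROM source_t");
    stmtSet.execute();

Calling tEnv.executeSql("INSERT INTO ...") twice instead submits two independent jobs, each reading the source separately.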

How can a Flink cluster write its logs directly into Elasticsearch?

2024-03-13 Thread kellygeorg...@163.com
Is there a convenient and quick solution?

Question about parallelism and partition count when writing to Kafka from Flink

2024-03-13 Thread chenyu_opensource
Hello: When Flink writes data to Kafka (Kafka as the sink) and the topic partition count (set to 60) is smaller than the configured parallelism (set to 300), do the tasks write to these partitions in a round-robin way, and does this affect write throughput (is there an overhead from iterating over partitions)? In that case, if the topic partition count is increased (to 200, or directly to 300), will write throughput improve noticeably? Is there related source code I can look at? Looking forward to a reply, best regards, thanks!

Unsubscribe

2024-03-12 Thread 18846086541
Unsubscribe | | 郝文强 | | 18846086...@163.com |

Re: Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Sachin Mittal
Hi Hang, I have checked this in my fat jar and the same class is not packaged in my jar. I have also searched about this issue in our mail archives too and the same issue was posted a few months back too. https://www.mail-archive.com/user@flink.apache.org/msg52035.html The solution was to

Reply: How to query the full content of a CREATE TABLE statement

2024-03-12 Thread Yubin Li
Use the statement SHOW CREATE TABLE Orders_in_kafka. ha.fengqi wrote: > For example > CREATE TABLE Orders_in_kafka ( > -- add a watermark definition > WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND > ) WITH ( > 'connector' = 'kafka', > ... > ) > LIKE Orders_in_file ( > EXCLUDING ALL > INCLUDING GENERATED >

Re: Flink SQL query using a UDTAGG

2024-03-12 Thread Junrui Lee
Hi Pouria, Table aggregate functions are not currently supported in SQL, they have been introduced in the Table API as per FLIP-29: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739. Best, Junrui Pouria Pirzadeh 于2024年3月13日周三 02:06写道: > Hi, > I am using the SQL api on
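
For reference, the Table API shape of a UDTAGG call looks roughly like this (Top2 stands for a user-defined TableAggregateFunction you implement yourself; table and column names are illustrative):

    import static org.apache.flink.table.api.Expressions.$;
    import static org.apache.flink.table.api.Expressions.call;

    tEnv.createTemporarySystemFunction("Top2", Top2.class);   // your UDTAGG implementation
    tEnv.from("Orders")
        .groupBy($("userId"))
        .flatAggregate(call("Top2", $("amount")).as("v", "rank"))
        .select($("userId"), $("v"), $("rank"))
        .execute()
        .print();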

Re: Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Hang Ruan
Hi, Sachin. I use the command `jar -tf flink-dist-1.18.0.jar| grep OutputTag` to make sure that this class is packaged correctly. I think you should check your own jar to make sure this class is not packaged in your jar. Best, Hang Sachin Mittal 于2024年3月12日周二 20:29写道: > I miss wrote. It’s

High latency in reading Iceberg tables using Flink table api

2024-03-12 Thread Chetas Joshi
Hello all, I am using the flink-iceberg-runtime lib to read an iceberg table into a Flink datastream. I am using Glue as the catalog. I use the flink table API to build and query an iceberg table and then use toDataStream to convert it into a DataStream. Here is the code Table table =

Flink 1.18 with Java 17 production version release

2024-03-12 Thread Meng, Ping via user
Hi, The latest Flink 1.18.1 with Java 17 support is in beta mode and users can report issues. Is there a planned release date for the production version? Do you have a roadmap for the production version? Thank you, Angela Meng

Flink SQL query using a UDTAGG

2024-03-12 Thread Pouria Pirzadeh
Hi, I am using the SQL api on Flink 1.18 and I am trying to write a SQL query which uses a 'user-defined table aggregate function' (UDTAGG). However, the documentation [1] only includes a Table API example

Re: Flink Batch Execution Mode

2024-03-12 Thread Feng Jin
Hi Irakli What version of flink-connector-kafka are you using? You may have encountered a bug [1] in the old version that prevents the source task from entering the finished state. [1]. https://issues.apache.org/jira/browse/FLINK-31319 Best, Feng On Tue, Mar 12, 2024 at 7:21 PM

Re:Read data from elasticsearch using Java flink

2024-03-12 Thread Xuyang
Hi, Fidea. Currently, elasticsearch is not supported to be used as a source. You can see the jira[1] for more details. You can also cherry pick this pr[2] to your own branch and build a custom elasticsearch connector to use it directly. [1] https://issues.apache.org/jira/browse/FLINK-25568 [2]

Re: Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Sachin Mittal
Hi Hang, Once I exclude flink-core from the fat jar I get this error: I believe org.apache.flink.util.OutputTag is part of flink-core itself. Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/util/OutputTag at java.base/java.lang.Class.forName0(Native Method)

Read data from elasticsearch using Java flink

2024-03-12 Thread Fidea Lidea
Hi , I am trying to read data from elasticsearch & store in a stream. Could you please share a few examples to *read*/get all data from Elasticsearch using java. Thanks,

Re: Flink performance

2024-03-12 Thread Robin Moffatt via user
It would be useful if you shared what you've found already, or could give a bit more detail about what it is that you're looking for. Numbers on their own don't really tell a full picture. Here are a few links that I found for you that might help: *

Reply: How to query the full content of a CREATE TABLE statement

2024-03-12 Thread Yubin Li
Use the statement SHOW CREATE TABLE Orders_in_kafka. Original message | From | ha.fen...@aisino.com | | Sent | 2024-03-12 15:37 | | To | user-zh | | Subject | How to query the full content of a CREATE TABLE statement | For example CREATE TABLE Orders_in_kafka ( -- add a watermark definition WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND ) WITH (

Re: Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Sachin Mittal
Ok. Actually it’s version 1.18. I will try to remove flink-core from the fat jar. On Tue, 12 Mar 2024 at 1:51 PM, Hang Ruan wrote: > Hi, Sachin. > > This error occurs when there is class conflict. There is no need to > package flink-core in your own jar. It is already contained in flink-dist. >

Re: Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Sachin Mittal
I miss wrote. It’s version 1.18. This is latest and works locally but not on aws emr and I get class not found exception. On Tue, 12 Mar 2024 at 1:25 PM, Zhanghao Chen wrote: > Hi Sachin, > > Flink 1.8 series have already been out of support, have you tried with a > newer version of Flink?

Re: Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Hang Ruan
Hi, Sachin. This error occurs when there is class conflict. There is no need to package flink-core in your own jar. It is already contained in flink-dist. And Flink version 1.8 is too old. It is better to update your flink version. Best, Hang Sachin Mittal 于2024年3月12日周二 16:04写道: > Hi, > We

Re: (no subject)

2024-03-12 Thread Xuannan Su
Send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe from the user-zh@flink.apache.org mailing list; see [1][2] for managing your mailing list subscriptions. Best, Xuannan [1] https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8 [2] https://flink.apache.org/community.html#mailing-lists On Tue, Mar 12, 2024 at 2:04

Re: Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Zhanghao Chen
Hi Sachin, Flink 1.8 series have already been out of support, have you tried with a newer version of Flink? From: Sachin Mittal Sent: Tuesday, March 12, 2024 14:48 To: user@flink.apache.org Subject: Facing ClassNotFoundException:

Flink Batch Execution Mode

2024-03-12 Thread irakli.keshel...@sony.com
Hello, I have a Flink job that is running in the Batch mode. The source for the job is a Kafka topic which has limited number of events. I can see that the job starts running fine and consumes the events, but never makes it past the first task and becomes idle. The Kafka source is defined to
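
For context, a bounded Kafka source for batch execution typically looks something like the sketch below (broker, topic and the surrounding env are placeholders, not taken from the original mail):

    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.kafka.source.KafkaSource;
    import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
    import org.apache.flink.streaming.api.datastream.DataStream;

    KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("broker:9092")
        .setTopics("events")
        .setStartingOffsets(OffsetsInitializer.earliest())
        .setBounded(OffsetsInitializer.latest())        // source finishes at the current end offsets
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();
    DataStream<String> stream =
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");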

RE: Flink performance

2024-03-12 Thread Kamal Mittal via user
Hello Community, Please share info. for below query. Rgds, Kamal From: Kamal Mittal via user Sent: Monday, March 11, 2024 1:18 PM To: user@flink.apache.org Subject: Flink performance Hello, Can you please point me to documentation if any such available where flink talks about or documented

Facing ClassNotFoundException: org.apache.flink.api.common.ExecutionConfig on EMR

2024-03-12 Thread Sachin Mittal
Hi, We have installed a flink cluster version 1.8.0 on AWS EMR. However when we submit a job we get the following error: (Do note that when we submit the same job on a local instance of Flink 1.8.1 it is working fine. The fat jar we submit has all the flink dependencies from 1.8.0 including the

Re: Unsubscribe

2024-03-12 Thread Hang Ruan
Send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe from emails from the user-zh-unsubscr...@flink.apache.org mailing list; see [1][2] for managing your mailing list subscriptions. Best, Hang [1] https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8 [2] https://flink.apache.org/community.html#mailing-lists willluzheng

Re: Unsubscribe

2024-03-12 Thread Hang Ruan
Send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe from the user-zh@flink.apache.org mailing list; see [1][2] for managing your mailing list subscriptions. Best, Hang [1] https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8 [2] https://flink.apache.org/community.html#mailing-lists 熊柱 <18428358...@163.com>

Reply: Unsubscribe

2024-03-12 Thread willluzheng
Unsubscribe. Original message | From | 王阳 | | Sent | 2024-03-12 13:49 | | To | user-zh@flink.apache.org | | Subject | Unsubscribe | Unsubscribe

Unsubscribe

2024-03-12 Thread 王阳
Unsubscribe

(no subject)

2024-03-12 Thread willluzheng
Unsubscribe

Unsubscribe

2024-03-11 Thread 王阳
Unsubscribe

Unsubscribe

2024-03-11 Thread 熊柱
Unsubscribe

Reply: Flink operator HA job intermittently reports "unable to update ConfigMapLock"

2024-03-11 Thread kellygeorg...@163.com
Could an expert give some pointers??? Waiting online. Original message | From | kellygeorg...@163.com | | Date | 2024-03-11 20:29 | | To | user-zh | | Cc | | | Subject | Flink operator HA job intermittently reports "unable to update ConfigMapLock" | The JobManager error is shown below; what could be the cause? Exception occurred while renewing lock: Unable to update ConfigMapLock Caused

Flink operator HA job intermittently reports "unable to update ConfigMapLock"

2024-03-11 Thread kellygeorg...@163.com
The JobManager error is shown below; what could be the cause? Exception occurred while renewing lock: Unable to update ConfigMapLock Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [replace] for kind: [ConfigMap] with name: [flink task xx- configmap] in namespace: [default] Caused by:

Re: TTL in pyflink does not seem to work

2024-03-11 Thread Ivan Petrarka
Thanks! We've created an issue for that: https://issues.apache.org/jira/browse/FLINK-34625 Yep, planning to use timers as a workaround for now On Mar 10, 2024 at 02:59 +0400, David Anderson , wrote: > My guess is that this only fails when pyflink is used with the heap state > backend, in which

Flink performance

2024-03-11 Thread Kamal Mittal via user
Hello, Can you please point me to documentation if any such available where flink talks about or documented performance numbers w.r.t certain use cases? Rgds, Kamal

Re: Unsubscribe

2024-03-10 Thread Hang Ruan
Send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe from emails from the user-zh-unsubscr...@flink.apache.org mailing list; see [1][2] for managing your mailing list subscriptions. Best, Hang [1] https://flink.apache.org/zh/community/#%e9%82%ae%e4%bb%b6%e5%88%97%e8%a1%a8 [2] https://flink.apache.org/community.html#mailing-lists 王新隆 wrote on Mon, Mar 11, 2024

Re: Entering a busy loop when adding a new sink to the graph + checkpoint enabled

2024-03-10 Thread Hang Ruan
Hi, Xuyang & Daniel. I have checked this part of code. I think it is an expected behavior. As marked in code comments, this loop makes sure that the transactions before this checkpoint id are re-created. The situation Daniel mentioned will happen only when all checkpoint between 1 and 2

Unsubscribe

2024-03-10 Thread 王新隆
Unsubscribe

Re:Entering a busy loop when adding a new sink to the graph + checkpoint enabled

2024-03-10 Thread Xuyang
Hi, Danny. When the problem occurs, can you use flame graph to confirm whether the loop in this code is causing the busyness? Since I'm not particularly familiar with kafka connector, I can't give you an accurate reply. I think Hang Ruan is an expert in this field :). Hi, Ruan Hang. Can you

Re: Can the username/password for the Flink SQL JDBC connector be encrypted/hidden?

2024-03-10 Thread gongzhongqiang
Hi, 东树. Hiding sensitive information in SQL needs to be handled by an external data platform. For example, StreamPark's variable management lets you maintain configuration values in advance and reference them when writing SQL; when the platform submits the SQL to Flink, it parses the SQL and substitutes the variables. Best, Zhongqiang Gong 杨东树 wrote on Sun, Mar 10, 2024 at 21:50: > Hello all, > Given database username/password security concerns, how can the database credentials be encrypted/hidden when using the Flink SQL JDBC connector? For example, the password in the following common SQL: > CREATE TABLE wordcount_sink ( >

Re: Can the username/password for the Flink SQL JDBC connector be encrypted/hidden?

2024-03-10 Thread Feng Jin
1. The JDBC connector itself does not currently support encryption; as I understand it, you can encrypt/decrypt the SQL text before submitting it, or do some variable substitution to hide the password. 2. You could also create a JDBC catalog in advance so the DDL never exposes the password. Best, Feng On Sun, Mar 10, 2024 at 9:50 PM 杨东树 wrote: > Hello all, > Given database username/password security concerns, how can the database credentials be encrypted/hidden when using the Flink SQL JDBC connector? For example, the password in the following common SQL: > CREATE TABLE
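
A rough sketch of the second suggestion (catalog name and connection details are placeholders; the exact options depend on the JDBC catalog support in your Flink version):

    CREATE CATALOG my_jdbc_catalog WITH (
      'type' = 'jdbc',
      'default-database' = 'flink',
      'username' = 'root',
      'password' = '******',
      'base-url' = 'jdbc:mysql://localhost:3306'
    );
    -- later DDL/DML can reference my_jdbc_catalog.flink.<table> without repeating the credentials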

Re:Re: Schema Evolution & Json Schemas

2024-03-10 Thread Jensen
Unsubscribe At 2024-02-26 20:55:19, "Salva Alcántara" wrote: Awesome Andrew, thanks a lot for the info! On Sun, Feb 25, 2024 at 4:37 PM Andrew Otto wrote: > the following code generator Oh, and FWIW we avoid code generation and POJOs, and instead rely on Flink's Row or RowData

Can the username/password for the Flink SQL JDBC connector be encrypted/hidden?

2024-03-10 Thread 杨东树
Hello all, Given database username/password security concerns, how can the database credentials be encrypted/hidden when using the Flink SQL JDBC connector? For example, the password in the following common SQL: CREATE TABLE wordcount_sink ( word String, cnt BIGINT, primary key (word) not enforced ) WITH ( 'connector' = 'jdbc', 'url' = 'jdbc:mysql://localhost:3306/flink', 'username' = 'root',

Re: TTL in pyflink does not seem to work

2024-03-09 Thread David Anderson
My guess is that this only fails when pyflink is used with the heap state backend, in which case one possible workaround is to use the RocksDB state backend instead. Another workaround would be to rely on timers in the process function, and clear the state yourself. David On Fri, Mar 8, 2024 at

Re: Re: Running Flink SQL in production

2024-03-08 Thread Robin Moffatt via user
That makes sense, thank you. I found FLIP-316 [1] and will keep an eye on it too. Thanks, Robin. [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-316%3A+Support+application+mode+for+SQL+Gateway On Fri, 8 Mar 2024 at 13:56, Zhanghao Chen wrote: > Hi Robin, > > It's better to use

Re: Re: Running Flink SQL in production

2024-03-08 Thread Zhanghao Chen
Hi Robin, It's better to use Application mode [1] for mission-critical long-running SQL jobs as it provides better isolation, you can utilize the table API to package a jar as suggested by Feng to do so. Neither SQL client nor SQL gateway supports submitting SQL in Application mode for now,

Re: Reply: How can a Flink DataStream job obtain its job lineage?

2024-03-08 Thread Zhanghao Chen
It is in fact feasible. You can modify the source code of StreamExecutionEnvironment directly so that every job registers your customized listener by default, and then expose that information in some way. In FLIP-314 [1], we plan to provide such an interface natively in Flink so you can register your own listener to obtain lineage information; it has not been released yet, so you can build it yourself for now. [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-314:+Support+Customized+Job+Lineage+Listener

Reply: How can a Flink DataStream job obtain its job lineage?

2024-03-08 Thread 阿华田
We want to modify the source code so that for any job submitted to our real-time platform, the lineage information is obtained when the DAG is initialized. Registering via StreamExecutionEnvironment can only be written inside the job itself, which does not meet our needs. | | 阿华田 | | a15733178...@163.com | Signature customized by NetEase Mail Master. On Mar 8, 2024 at 18:23, Zhanghao Chen wrote: You can look at how OpenLineage integrates with Flink [1]: it registers a JobListener in the StreamExecutionEnvironment (through which you can get the JobClient and then the job id), and then from

Re: Reply: How can a Flink DataStream job obtain its job lineage?

2024-03-08 Thread Zhanghao Chen
You can look at how OpenLineage integrates with Flink [1]: it registers a JobListener in the StreamExecutionEnvironment (through which you can get the JobClient and then the job id). The transformation information can then be extracted from the execution environment for processing [2]. [1] https://openlineage.io/docs/integrations/flink/ [2]
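
A minimal sketch of registering such a listener yourself (the lineage extraction itself is omitted):

    import org.apache.flink.api.common.JobExecutionResult;
    import org.apache.flink.core.execution.JobClient;
    import org.apache.flink.core.execution.JobListener;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.registerJobListener(new JobListener() {
        @Override
        public void onJobSubmitted(JobClient jobClient, Throwable throwable) {
            if (jobClient != null) {
                // the job id is available here and can be attached to the lineage metadata
                System.out.println("Submitted job " + jobClient.getJobID());
            }
        }

        @Override
        public void onJobExecuted(JobExecutionResult result, Throwable throwable) {
            // called once the job finishes (attached execution)
        }
    });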

Reply: How can a Flink DataStream job obtain its job lineage?

2024-03-08 Thread 阿华田
"Transformation information can be obtained from the JobGraph" — can the JobGraph really expose the transformation information directly? We use reflection on SourceTransformation and SinkTransformation to get the connection information, but we cannot get the Flink job id there. Can the JobGraph provide the source/sink connection information as well as the Flink job id? | | 阿华田 | | a15733178...@163.com | Signature customized by NetEase Mail Master

Re: TTL in pyflink does not seem to work

2024-03-08 Thread lorenzo.affetti.ververica.com via user
Hello Ivan! Could you please create a JIRA issue out of this? That seems the proper place to discuss this. It seems like a bug, as the two versions of the code you posted look identical and the behavior should be consistent. On Mar 7, 2024 at 13:09 +0100, Ivan Petrarka , wrote: > Note, that in

Re: Reply: How can a Flink DataStream job obtain its job lineage?

2024-03-08 Thread Zhanghao Chen
The JobGraph has a field that is exactly the job id. Best, Zhanghao Chen From: 阿华田 Sent: Friday, March 8, 2024 14:14 To: user-zh@flink.apache.org Subject: Reply: How can a Flink DataStream job obtain its job lineage? After obtaining the Source or DorisSink information, how do we know which Flink job it came from? It seems the Flink job id cannot be obtained. | | 阿华田 | | a15733178...@163.com |

Reply: How can a Flink DataStream job obtain its job lineage?

2024-03-07 Thread 阿华田
After obtaining the Source or DorisSink information, how do we know which Flink job it came from? It seems the Flink job id cannot be obtained. | | 阿华田 | | a15733178...@163.com | Signature customized by NetEase Mail Master. On Feb 26, 2024 at 20:04, Feng Jin wrote: Transformation information can be obtained through the JobGraph, giving you the specific Source or Doris Sink; the properties inside can then be extracted via reflection. You can refer to the OpenLineage [1] implementation. 1.

Re: Flink Checkpoint & Offset Commit

2024-03-07 Thread Yanfei Lei
Hi Jacob, > I have multiple upstream sources to connect to depending on the business > model which are not Kafka. Based on criticality of the system and publisher > dependencies, we cannot switch to Kafka for these. Sounds like you want to implement some custom connectors, [1][2] may be
