Re: flink on yarn: job AM attempt fails after HDFS_DELEGATION_TOKEN is cleared

2022-02-10 Thread xieyi
One more piece of information: in the failure case, security.kerberos.login.keytab was correctly configured in the client's flink-conf.yaml. On 2022-02-11 at 15:18, xieyi wrote: Hello all: a question, please. Since Hadoop delegation tokens expire once they exceed the Max

Re: CDC using Query

2022-02-10 Thread mohan radhakrishnan
Thanks. I looked at it. Our primary DBs are Oracle and MySQL. Flink CDC Connector uses Debezium, I think. So Ververica doesn't have a Flink CDC connector for Oracle? On Mon, Feb 7, 2022 at 3:03 PM Leonard Xu wrote: > Hello, mohan > > 1. Does flink have any support to track any missed source Jdbc

flink on yarn: job AM attempt fails after HDFS_DELEGATION_TOKEN is cleared

2022-02-10 Thread xieyi
Hello all: a question, please. Since Hadoop delegation tokens expire and are cleared once they exceed the Max Lifetime (7 days by default), YARN mentions three strategies for long-running jobs to address this: https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnApplicationSecurity.md#securing-long-lived-yarn-services I would like to know whether flink on
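The strategy Flink documents for this situation is to ship a keytab with the job so the cluster can re-authenticate against Kerberos directly instead of depending on delegation tokens that expire. A minimal flink-conf.yaml fragment (the keytab path and principal below are placeholders, not values from this thread):

```yaml
# flink-conf.yaml — Kerberos keytab login, so long-running YARN jobs
# re-authenticate themselves rather than relying on HDFS delegation
# tokens, which are cleared after the token max lifetime (7 days by default).
security.kerberos.login.use-ticket-cache: false
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM
```

With this configuration, Flink distributes the keytab to the YARN containers so each component can log in independently; whether a given Flink version also renews HDFS delegation tokens from the keytab is exactly the question this thread is asking.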

Illegal reflective access by org.apache.flink.api.java.ClosureCleaner

2022-02-10 Thread Антон
Hello, what could be the reason for a warning like this: WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.apache.flink.api.java.ClosureCleaner (file:/var/flink/flink-1.13.2/lib/flink-dist_2.12-1.13.2.jar) to field java.lang.String.value WARNING:
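This warning comes from the Java 9+ module system: ClosureCleaner uses reflection on JDK internals (here, the field java.lang.String.value) when cleaning closures for serialization, which newer JDKs flag. It is generally harmless, but if the noise is unwanted, one approach (an assumption about your setup, not a verified fix for it) is to open the package via Flink's JVM options:

```yaml
# flink-conf.yaml — open java.lang to unnamed modules so ClosureCleaner's
# reflective field access no longer triggers the JDK 9+ warning.
env.java.opts: "--add-opens=java.base/java.lang=ALL-UNNAMED"
```

On Java 8 the warning does not appear at all, since the module system that produces it was introduced in Java 9.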

There Is a Delay While Over Aggregation Sending Results

2022-02-10 Thread wang guanglei
Hey Flink Community, I am using FlinkSQL Over Aggregation to calculate the number of UUIDs per client IP during the past hour. The Flink SQL I am using is something like below: SELECT
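The preview cuts the query off, so here is a hypothetical sketch of the kind of OVER aggregation described; the table and column names (clicks, client_ip, uuid, event_time) are assumptions, not taken from the thread:

```sql
-- Hypothetical sketch: per-row count of events for the same client IP
-- over the preceding hour. event_time must be a declared event-time
-- attribute with a watermark for this to run in streaming mode.
SELECT
  client_ip,
  COUNT(uuid) OVER (
    PARTITION BY client_ip
    ORDER BY event_time
    RANGE BETWEEN INTERVAL '1' HOUR PRECEDING AND CURRENT ROW
  ) AS uuid_cnt
FROM clicks;
```

Note that Flink's streaming OVER windows require ORDER BY on a time attribute, and results are emitted per input row as the watermark advances — which is one place a perceived "delay" can come from. Counting *distinct* UUIDs rather than all rows may need a different formulation, as DISTINCT support inside OVER windows is limited.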

Re: [External] : Re: Use TwoPhaseCommitSinkFunction or StatefulSink+TwoPhaseCommittingSink to achieve EXACTLY_ONCE sink?

2022-02-10 Thread Fuyao Li
Hello Yun, Thanks for the quick response. This is really helpful. I have confirmed with Oracle Streaming Service (OSS) that they currently don't support the EXACTLY_ONCE semantic; only AT_LEAST_ONCE works. They suggest adding a deduplication mechanism at the sink to mitigate the issue.
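The deduplication the OSS team suggests can be reduced to a small idea: under AT_LEAST_ONCE, replays deliver the same record more than once, so the sink remembers which records it has already seen. A minimal stdlib sketch, assuming every record carries a unique event ID (in a real Flink job this set would live in keyed state, ideally with a TTL, rather than an unbounded in-memory set):

```java
import java.util.HashSet;
import java.util.Set;

/** Hypothetical helper illustrating at-least-once deduplication by event ID. */
public class DedupFilter {
    private final Set<String> seen = new HashSet<>();

    /** Returns true the first time an ID is observed, false for replays. */
    public boolean isFirstDelivery(String eventId) {
        return seen.add(eventId); // Set.add is false if the ID was already present
    }

    public static void main(String[] args) {
        DedupFilter f = new DedupFilter();
        System.out.println(f.isFirstDelivery("evt-1")); // true: first delivery
        System.out.println(f.isFirstDelivery("evt-1")); // false: replay, dropped
    }
}
```

The important design point is bounding the memory: keep only IDs young enough to still be replayed (via state TTL or a retention window), otherwise the set grows without limit.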

Re: question on dataSource.collect() on reading states from a savepoint file

2022-02-10 Thread Antonio Si
Thanks Bastien. I will check it out. Antonio. On Thu, Feb 10, 2022 at 11:59 AM bastien dine wrote: > I haven't used s3 with Flink, but according to this doc : > https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/filesystems/s3/ > You can set up s3 pretty easily and use

Re: question on dataSource.collect() on reading states from a savepoint file

2022-02-10 Thread bastien dine
I haven't used s3 with Flink, but according to this doc: https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/filesystems/s3/ you can set up s3 pretty easily and use it with s3://path/to/your/file in a write sink. The page talks about DataStream, but it should work with

Re: question on dataSource.collect() on reading states from a savepoint file

2022-02-10 Thread Antonio Si
Thanks Bastien. Can you point to an example of using a sink, as we are planning to write to S3? Thanks again for your help. Antonio. On Thu, Feb 10, 2022 at 11:49 AM bastien dine wrote: > Hello Antonio, > > .collect() method should be used with caution as it's collecting the > DataSet (multiple

Re: question on dataSource.collect() on reading states from a savepoint file

2022-02-10 Thread bastien dine
Hello Antonio, the .collect() method should be used with caution, as it collects the DataSet (multiple partitions on multiple TMs) into a single list on the JM (so in memory). Unless you have a lot of RAM, you cannot use it this way, and you probably should not. I recommend you use a sink to print
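A hedged sketch of what bastien is suggesting: write the dataset out with a distributed sink instead of collecting it onto the JobManager. The reader function, state type, and bucket path below are placeholders, and the s3:// scheme assumes the Flink S3 filesystem plugin is configured as per the linked doc:

```java
// Hedged sketch: instead of dataSource.collect() (which pulls every
// partition into the JobManager's memory and OOMs on ~11GB of state),
// dump the states with a parallel file sink.
DataSet<MyState> states =
    savepoint.readKeyedState("my-operator-uid", new MyStateReaderFunction());

states
    .map(s -> s.toString())                  // serialize each state entry
    .writeAsText("s3://my-bucket/states/",   // bucket/path is a placeholder
                 FileSystem.WriteMode.OVERWRITE);

env.execute("dump-savepoint-states");
```

Because the write happens in parallel on the TaskManagers, no single process ever needs to hold the full 11GB.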

question on dataSource.collect() on reading states from a savepoint file

2022-02-10 Thread Antonio Si
Hi, I am using the stateful processing api to read the states from a savepoint file. It works fine when the state size is small, but when the state size is larger, around 11GB, I am getting an OOM. I think it happens when it is doing a dataSource.collect() to obtain the states. The stackTrace is

Re: Does Flink support serving HTTP requests and returning JSON data?

2022-02-10 Thread yckkcy
Hi, I thought of an asynchronous approach; I'm not sure whether it meets your needs, but for reference: 1. On the receiving side, I agree with Caizhi Weng that you still need a separate HTTP server, service A: it puts the request parameters into a message queue and at the same time acknowledges to the caller that the request was received. 2. Flink consumes from the message queue, parses the parameters, and applies the business logic. 3. For the sink side, consider extending the RichSinkFunction class and sending the parsed result back by calling the other party's HTTP service; just POST the data in the invoke method. The sink side can reference this code: public class HttpResultSink extends RichSinkFunction { private
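The quoted code is cut off by the digest; a hedged sketch of what such a sink might look like, using only java.net.HttpURLConnection (the endpoint URL is a placeholder, and retries/error handling are omitted for brevity):

```java
// Hedged sketch of step 3: a sink that POSTs each result back to the
// caller's HTTP service from invoke(). Not the original author's code.
public class HttpResultSink extends RichSinkFunction<String> {
    private transient URL endpoint; // initialized per task in open()

    @Override
    public void open(Configuration parameters) throws Exception {
        endpoint = new URL("http://callback-host/result"); // placeholder URL
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream os = conn.getOutputStream()) {
            os.write(value.getBytes(StandardCharsets.UTF_8));
        }
        conn.getResponseCode(); // force the request; response body ignored here
        conn.disconnect();
    }
}
```

For high throughput, Flink's AsyncDataStream/async I/O pattern would avoid blocking the sink task on each HTTP round trip.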

Problem with kafka with key=None using python-kafka module

2022-02-10 Thread mrAlexTFB
Hello, I am following the example in the Python Walkthrough; I downloaded the zip file with the project skeleton. I'm having a problem when changing the key attribute in

Re: Issue with Flink UI for Flink 1.14.0

2022-02-10 Thread Guillaume Vauvert
Hi, this issue impacts all deployments with 2 or more JobManagers (HA mode), because in that case serialization is used (depending on which JobManager responds, the Leader or a Follower). It prevents: * usage of the Flink UI * usage of the Flink command "flink.sh list" * usage

Re: Issue with Flink UI for Flink 1.14.0

2022-02-10 Thread Roman Khachatryan
Hi, AFAIK there are no plans currently to release 1.14.4. The previous one (1.14.3) was released on Jan 20, so I'd expect 1.14.4 preparation to start in the next several weeks. Regards, Roman On Tue, Feb 8, 2022 at 7:31 PM Sweta Kalakuntla wrote: > I am facing the same issue, do we know when 1.14.4

Re: JSONKeyValueDeserializationSchema cannot be converted to ObjectNode>

2022-02-10 Thread Martijn Visser
Thanks for sharing the full solution, much appreciated! On Thu, 10 Feb 2022 at 09:07, HG wrote: > The complete solution for the record ( that others can benefit from it). > > KafkaSource source = KafkaSource.builder() > .setProperties(kafkaProps) >

Re: JSONKeyValueDeserializationSchema cannot be converted to ObjectNode>

2022-02-10 Thread HG
The complete solution, for the record (so that others can benefit from it). KafkaSource source = KafkaSource.builder() .setProperties(kafkaProps) .setProperty("ssl.truststore.type",trustStoreType) .setProperty("ssl.truststore.password",trustStorePassword)
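The preview cuts the builder off before the line that resolves the thread's error. The usual pattern (a hedged sketch from the 1.14 API, with the topic name as a placeholder) is to wrap JSONKeyValueDeserializationSchema, which is a KafkaDeserializationSchema, so that KafkaSource, which expects a KafkaRecordDeserializationSchema, can accept it:

```java
// Hedged sketch: KafkaRecordDeserializationSchema.of(...) adapts the
// record-level JSON key/value schema to the KafkaSource builder API.
KafkaSource<ObjectNode> source = KafkaSource.<ObjectNode>builder()
    .setProperties(kafkaProps)                       // props as in the thread
    .setTopics("my-topic")                           // placeholder topic
    .setStartingOffsets(OffsetsInitializer.earliest())
    .setDeserializer(KafkaRecordDeserializationSchema.of(
        new JSONKeyValueDeserializationSchema(false))) // false = no record metadata
    .build();
```

Without the .of(...) wrapper, the compiler reports that JSONKeyValueDeserializationSchema "cannot be converted to" the expected deserializer type — the error this thread's subject refers to.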