Re: Re: CustomKafkaDeserializationSchema is abstract

2021-09-13 Thread 赢峰
FlinkKafkaConsumer<...> consumer = new FlinkKafkaConsumer<>(topic, new CustomKafkaDeserializationSchema(), props); public class CustomKafkaDeserializationSchema implements KafkaDeserializationSchema<...> { @Override public boolean
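For context, a hedged sketch of what a concrete (non-abstract) implementation typically looks like, assuming Flink's `KafkaDeserializationSchema` API; the `Tuple2<String, String>` element type and class body are illustrative, not taken from the original post (the generic parameters there were lost in the archive):

```
// Sketch only: a concrete KafkaDeserializationSchema must implement all
// abstract methods (isEndOfStream, deserialize, getProducedType) before
// it can be instantiated with `new`.
public class CustomKafkaDeserializationSchema
        implements KafkaDeserializationSchema<Tuple2<String, String>> {

    @Override
    public boolean isEndOfStream(Tuple2<String, String> nextElement) {
        return false; // unbounded stream
    }

    @Override
    public Tuple2<String, String> deserialize(ConsumerRecord<byte[], byte[]> record) {
        String key = record.key() == null ? null : new String(record.key());
        String value = record.value() == null ? null : new String(record.value());
        return new Tuple2<>(key, value);
    }

    @Override
    public TypeInformation<Tuple2<String, String>> getProducedType() {
        return TypeInformation.of(new TypeHint<Tuple2<String, String>>() {});
    }
}
```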

Re: Unsubscribe

2021-09-13 Thread JasonLee
Hi, unsubscribe requests should be sent to user-zh-unsubscr...@flink.apache.org Best JasonLee On 2021-09-14 11:46, abel0130 wrote: Unsubscribe

Re: flink on native k8s: unable to view flame graphs

2021-09-13 Thread JasonLee
Hi, this configuration is disabled by default because it has some performance impact; for the specific configuration see the official documentation: https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/config/#advanced-options-for-the-rest-endpoint-and-client Best JasonLee On 2021-09-14 11:20, 赵旭晨 wrote: Unable to load requested file /jobs/d2fcac59f4a42ad17ceba8c5371862bb/...

Unsubscribe

2021-09-13 Thread abel0130
Unsubscribe

Re: flink on native k8s: unable to view flame graphs

2021-09-13 Thread johnjlong
Enable this parameter: rest.flamegraph.enabled: true johnjlong johnjl...@163.com On 2021-09-14 11:29, 赵旭晨 wrote: Unable to load requested file /jobs/d2fcac59f4a42ad17ceba8c5371862bb/... What is the cause of this? Image version: flink:1.13.2-scala_2.12-java8
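The parameter named above is a single Flink configuration option; a minimal flink-conf.yaml fragment might look like:

```
# flink-conf.yaml — enable flame graphs in the Flink web UI (Flink >= 1.13).
# Disabled by default because the sampling adds some overhead.
rest.flamegraph.enabled: true
```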

Unsubscribe

2021-09-13 Thread qq
Unsubscribe

Re: Table program cannot be compiled. This is a bug. Please file an issue

2021-09-13 Thread Caizhi Weng
Hi! This is because Java has a maximum method length of 64 KB. For Flink <= 1.13 please set table.generated-code.max-length to less than 65536 (~8192 is preferred) to limit the length of each generated method. If this doesn't help, we've (hopefully) completely fixed this issue in Flink 1.14 by
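The reply above names a single Table config option; a minimal flink-conf.yaml fragment using the suggested value:

```
# flink-conf.yaml — cap the size of each generated Java method so it stays
# well under the JVM's 64 KB bytecode-per-method limit (Flink <= 1.13).
table.generated-code.max-length: 8192
```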

flink on native k8s: unable to view flame graphs

2021-09-13 Thread 赵旭晨
Unable to load requested file /jobs/d2fcac59f4a42ad17ceba8c5371862bb/... What is the cause of this? Image version: flink:1.13.2-scala_2.12-java8

Table program cannot be compiled. This is a bug. Please file an issue

2021-09-13 Thread 张颖
I wrote a long SQL query, but when I explain its plan it fails with: org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue. at

Re: Error while fetching data from Apache Kafka

2021-09-13 Thread Caizhi Weng
Hi! This seems to be caused by some mismatching types between your source definition and your workflow. If possible, could you describe the schema of your Kafka source and paste your DataStream / Table / SQL code here? Dhiru wrote on Tue, 2021-09-14 at 3:49 AM: > *I am not sure when we try to receive data from

Re: CustomKafkaDeserializationSchema is abstract

2021-09-13 Thread Caizhi Weng
Hi! The images are not visible in the email; please check. Judging from the subject, did you write your own Kafka deserialization schema as an abstract class that cannot be instantiated with new directly? 赢峰 wrote on Mon, 2021-09-13 at 8:57 PM: > The error is as follows: > > The code is as follows: > >

Error while fetching data from Apache Kafka

2021-09-13 Thread Dhiru
I am not sure why, when we try to receive data from Apache Kafka, I get this error, but it works fine when I run it via Confluent Kafka: java.lang.ClassCastException: class java.lang.String cannot be cast to class scala.Product (java.lang.String is in module java.base of loader 'bootstrap';
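The failure mode described above can be reproduced without Kafka or Scala at all; a minimal, hypothetical Java sketch (class and method names are illustrative, not from the original thread):

```java
// Minimal reproduction of the reported failure mode: a record that is
// actually a java.lang.String being cast to an unrelated type at runtime.
// The cast compiles because the static type is Object; it fails only when
// the record is consumed, which is why such errors surface mid-pipeline.
public class CastFailureDemo {

    // Stands in for a deserializer that emits Strings while the
    // downstream operator expects some other type.
    static Object fetchRecord() {
        return "raw-record";
    }

    // Returns true when the bad cast throws ClassCastException.
    static boolean castFails() {
        Object record = fetchRecord();
        try {
            Integer typed = (Integer) record; // a String is not an Integer
            return typed == null;             // never reached
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(castFails()); // prints "true"
    }
}
```

The usual fix is to make the deserialization schema produce exactly the type the downstream operators declare, rather than relying on an unchecked cast.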

Flink Native Kubernetes - Configuration kubernetes.flink.log.dir not working

2021-09-13 Thread bat man
Hi, I am running a POC to evaluate Flink on native Kubernetes. I tried changing the default log location with the configuration kubernetes.flink.log.dir. However, the job in application mode fails after bringing up the task manager. This is the command I use - ./bin/flink run-application
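For reference, a hedged sketch of how this option is typically supplied on the command line (the cluster id, log path, and job jar are placeholders, not taken from the original command):

```
./bin/flink run-application \
    --target kubernetes-application \
    -Dkubernetes.cluster-id=my-flink-cluster \
    -Dkubernetes.flink.log.dir=/opt/flink/custom-log \
    local:///opt/flink/usrlib/my-job.jar
```

Note that the directory must exist and be writable inside the container image; otherwise the task manager can fail at startup, which matches the symptom described above.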

Re: JVM Metaspace capacity planning

2021-09-13 Thread Puneet Duggal
Hi, thank you for the quick reply. In my case I am using the DataStream API. Each job is a real-time processing engine that consumes data from Kafka and performs some processing on top of it before ingesting into a sink. The JVM Metaspace size was earlier set to around 256MB (the default), which I had to
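The Metaspace budget discussed above maps to a single Flink option; a minimal flink-conf.yaml fragment (the 512m value is illustrative, the default is 256m):

```
# flink-conf.yaml — raise the JVM Metaspace budget for task managers,
# e.g. when many jobs / classloaders run on the same task manager.
taskmanager.memory.jvm-metaspace.size: 512m
```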

Re: TaskManagers OOM'ing for Flink App with very large state only when restoring from checkpoint

2021-09-13 Thread Kevin Lam
Thanks for your replies Alexis and Guowei. We're using 1.13.1 version of Flink, and using the DataStream API. I'll try the savepoint, and take a look at that IO article, thank you. Please let me know if anything else comes to mind! On Mon, Sep 13, 2021 at 3:05 AM Alexis Sarda-Espinosa <

RocksDB state not cleaned up

2021-09-13 Thread tao xiao
Hi team We have a job that uses value state with RocksDB and TTL set to 1 day. The TTL update type is OnCreateAndWrite. We set the value state when the value state doesn't exist and we never update it again after the state is not empty. The key of the value state is timestamp. My understanding of
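The setup described above corresponds roughly to the following sketch, assuming Flink's StateTtlConfig API; variable names and the cleanup threshold are illustrative, not from the original post:

```
// Sketch of a value state with a 1-day TTL updated OnCreateAndWrite,
// with RocksDB compaction-filter cleanup enabled so that expired
// entries are actually dropped during compaction (without it, expired
// state may linger on disk until it is read or compacted away).
StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(1))
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
    .cleanupInRocksdbCompactFilter(1000) // re-read TTL from Flink every 1000 entries
    .build();

ValueStateDescriptor<Long> descriptor =
    new ValueStateDescriptor<>("last-seen", Long.class);
descriptor.enableTimeToLive(ttlConfig);
```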

CustomKafkaDeserializationSchema is abstrct

2021-09-13 Thread 赢峰
The error is as follows: The code is as follows:

Re: JVM Metaspace capacity planning

2021-09-13 Thread Caizhi Weng
Hi! Which API are you using? The datastream API or the Table / SQL API? If it is the Table / SQL API then some Java classes for some operators (for example aggregations, projection, filter, etc.) will be generated when compiling user code to executable Java code. These Java classes are new to the

JVM Metaspace capacity planning

2021-09-13 Thread Puneet Duggal
Hi, after going through multiple resources, I got the basic idea that JVM Metaspace is used by the Flink classloader to load class metadata, which is used to create objects on the heap. Also, this is a one-time activity, since all objects of a single class require a single class-metadata object in the JVM

Flink-Zookeeper Security

2021-09-13 Thread Beata Szymanowska
Hi! I am struggling to find an answer to the question of whether it is possible to connect Flink to a ZooKeeper cluster secured with a TLS certificate. All the Best, Beata Sz.

Re: CEP library support in Python

2021-09-13 Thread Pedro Silva
Hello Seth, thank you very much for your reply. I've taken a look at MATCH_RECOGNIZE, but I have the following doubt: can I implement a state machine that detects patterns with multiple end states? To give you a concrete example: I'm trying to count the number of *Jobs* that have been *cancelled*
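One way to express "multiple end states" in MATCH_RECOGNIZE is to define a single terminal pattern variable that matches any of the accepting statuses. A hedged Java/Table API sketch (the jobs table, its job_id/status/ts columns, and the status values are assumptions, not from the original thread):

```
// Sketch: a pattern that runs through RUNNING events and terminates on
// any of several end states, using an IN predicate in DEFINE.
tableEnv.executeSql(
    "SELECT * FROM jobs " +
    "MATCH_RECOGNIZE ( " +
    "  PARTITION BY job_id " +
    "  ORDER BY ts " +
    "  MEASURES LAST(END_STATE.status) AS final_status " +
    "  ONE ROW PER MATCH " +
    "  PATTERN (RUNNING+ END_STATE) " +
    "  DEFINE " +
    "    RUNNING AS RUNNING.status = 'RUNNING', " +
    "    END_STATE AS END_STATE.status IN ('CANCELLED', 'FAILED', 'FINISHED') " +
    ")");
```

Alternatively, separate pattern variables per end state with alternation in the PATTERN clause make the individual end states addressable in MEASURES.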

Re: Question about Cumulate Windows

2021-09-13 Thread Caizhi Weng
Hi! What you probably want is a tumble window; see https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/window-agg/ xiaocuyu wrote on Mon, 2021-09-13 at 11:56 AM: > Dear community members: > Hello! > Using Flink SQL, I have a requirement to aggregate all of the current day's data and emit the aggregated result at a fixed time; currently it feels like Cumulate >

Re: Job manager crash

2021-09-13 Thread houssem
Hello, here is some of the full GC log: OpenJDK 64-Bit Server VM (25.232-b09) for linux-amd64 JRE (1.8.0_232-b09), built on Oct 18 2019 15:04:46 by "jenkins" with gcc 4.8.2 20140120 (Red Hat 4.8.2-15) Memory: 4k page, physical 976560k(946672k free), swap 0k(0k free) CommandLine flags:

Re: TaskManagers OOM'ing for Flink App with very large state only when restoring from checkpoint

2021-09-13 Thread Alexis Sarda-Espinosa
I'm not very knowledgeable when it comes to Linux memory management, but do note that Linux (and by extension Kubernetes) takes disk IO into account when deciding whether a process is using more memory than it's allowed to, see e.g.

Re: Flink Task/Operator metrics renaming

2021-09-13 Thread Guowei Ma
Hi, Ashutosh As far as I know, there is no way to rename the system metrics. But would you like to share why you need to rename the metrics? Best, Guowei On Mon, Sep 13, 2021 at 2:29 PM Ashutosh Uttam wrote: > Hi team, > > We are using PrometheusReporter to expose Flink metrics to
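As a workaround on the Prometheus side (standard scrape configuration, not a Flink feature), metric names can be rewritten at scrape time; a hedged prometheus.yml fragment, where the job name, target, and metric name are placeholders:

```
# prometheus.yml — rewrite a Flink metric name at scrape time.
scrape_configs:
  - job_name: flink
    static_configs:
      - targets: ["flink-taskmanager:9249"]
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: "flink_taskmanager_job_task_numRecordsIn"
        target_label: __name__
        replacement: "my_records_in_total"
```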

Re: TaskManagers OOM'ing for Flink App with very large state only when restoring from checkpoint

2021-09-13 Thread Guowei Ma
Hi, Kevin 1. Could you give me some specific information, such as which version of Flink you are using, and whether it uses DataStream or SQL? 2. As far as I know, RocksDB puts state on disk, so in theory it will not keep consuming memory and cause OOM. So you can see if there are any

Flink Task/Operator metrics renaming

2021-09-13 Thread Ashutosh Uttam
Hi team, We are using PrometheusReporter to expose Flink metrics to Prometheus. Is there a possibility of renaming Task/Operator metrics like numRecordsIn, numRecordsOut, etc., and exposing them to Prometheus? Regards, Ashutosh

Re: Issue while creating Hive table from Kafka topic

2021-09-13 Thread Harshvardhan Shinde
Hi, I checked for the dependency using the Stack Overflow link you sent; the ByteArrayDeserializer class is present in my jar, but when I try to run it, I get the same error message. I also removed the kafka and kafka-client dependencies from my pom.xml file and added the jar from the link you