Re: Query on Flink SQL create DDL primary key for nested field

2023-10-30 Thread Xuyang
Hi, Flink SQL doesn't support an inline field of a struct type as a primary key. You can try raising an issue for this feature in the community[1]. As a quick workaround, you can first use the DataStream API to extract the 'id', and then convert to the Table API to use SQL. [1]
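A minimal sketch of the suggested workaround, under stated assumptions: the `EmployeeWrapper`/`Employee` POJOs, the inline source, and the view name are all hypothetical stand-ins for the real Kafka/Avro pipeline. The idea is to hoist the nested `id` to a top-level column in the DataStream API, then hand the stream to the Table API where `id` is an ordinary column.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class FlattenNestedIdSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Stand-in for the real Kafka/Avro source.
        DataStream<EmployeeWrapper> events = env.fromElements(new EmployeeWrapper());

        // Hoist the nested 'id' to a top-level field in the DataStream API...
        DataStream<Tuple2<String, String>> flat = events
                .map(e -> Tuple2.of(e.employee.id, e.employee.name))
                .returns(Types.TUPLE(Types.STRING, Types.STRING));

        // ...then switch to the Table API, where 'id' is now a plain column.
        Table t = tEnv.fromDataStream(flat).as("id", "name");
        tEnv.createTemporaryView("employee_flat", t);
    }

    // Hypothetical POJOs mirroring the nested payload shape.
    public static class EmployeeWrapper { public Employee employee = new Employee(); }
    public static class Employee { public String id = "123456"; public String name = "sampleName"; }
}
```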

Best practice way to conditionally discard a window and not serialize the results

2023-10-30 Thread Mark Petronic
I am reading stats from Kinesis, deserializing them into a stat POJO, and then doing something like this using an aggregating window with no ProcessWindowFunction defined: timestampedStats .keyBy(v -> v.groupKey())
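One pattern that fits the question, as a sketch (the `StatAggregator` aggregate, the `AggResult` type, the `isComplete()` predicate, and the window size are all assumptions, not from the thread): pass a ProcessWindowFunction as the second argument to `aggregate()` and simply not collect the result for windows you want to discard; nothing is serialized downstream for those windows.

```java
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Fragment: assumes timestampedStats, StatAggregator, and AggResult exist.
timestampedStats
    .keyBy(v -> v.groupKey())
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .aggregate(new StatAggregator(),
        new ProcessWindowFunction<AggResult, AggResult, String, TimeWindow>() {
            @Override
            public void process(String key, Context ctx,
                                Iterable<AggResult> results, Collector<AggResult> out) {
                AggResult r = results.iterator().next();
                if (r.isComplete()) {   // hypothetical keep/discard predicate
                    out.collect(r);     // only collected windows are serialized downstream
                }
            }
        });
```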

Re: [DISCUSS][FLINK-33240] Document deprecated options as well

2023-10-30 Thread Matthias Pohl via user
Thanks for your proposal, Zhanghao Chen. I think it adds more transparency to the configuration documentation. +1 from my side on the proposal On Wed, Oct 11, 2023 at 2:09 PM Zhanghao Chen wrote: > Hi Flink users and developers, > > Currently, Flink won't generate doc for the deprecated

Re: Metrics with labels

2023-10-30 Thread Lars Skjærven
Registering the counter is fine, e.g. in `open()`: lazy val responseCounter: Counter = getRuntimeContext .getMetricGroup .addGroup("response_code") .counter("myResponseCounter") then, in def asyncInvoke(), I can still only do responseCounter.inc(), but what I want is
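What the question seems to be after is usually done with the key/value overload `MetricGroup.addGroup(String key, String value)`, which most reporters export as a label/tag. A sketch in Java (one counter per observed response code, created lazily; the field and method names are assumptions):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Inside the RichAsyncFunction:
private transient Map<Integer, Counter> responseCounters;

@Override
public void open(Configuration parameters) {
    responseCounters = new HashMap<>();
}

private void countResponse(int code) {
    responseCounters.computeIfAbsent(code, c ->
            getRuntimeContext()
                .getMetricGroup()
                .addGroup("response_code", String.valueOf(c)) // key/value group -> label
                .counter("myResponseCounter"))
        .inc();
}
```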

Clear the State Backends in Flink

2023-10-30 Thread arjun s
Hi team, I'm interested in understanding if there is a method available for clearing the State Backends in Flink. If so, could you please provide guidance on how to accomplish this particular use case? Thanks and regards, Arjun S

Monitoring File Processing Progress in Flink Jobs

2023-10-30 Thread arjun s
Hi team, I'm also interested in finding out if there is Java code available to determine the extent to which a Flink job has processed files within a directory. Additionally, I'm curious about where the details of the processed files are stored within Flink. Thanks and regards, Arjun S

Re: Enhancing File Processing and Kafka Integration with Flink Jobs

2023-10-30 Thread arjun s
Hi team, I'm also interested in finding out if there is Java code available to determine the extent to which a Flink job has processed files within a directory. Additionally, I'm curious about where the details of the processed files are stored within Flink. Thanks and regards, Arjun S On Mon,

Checkpoints are not triggering when S3 is unavailable

2023-10-30 Thread Evgeniy Lyutikov
Hi team! I came across strange behavior in Flink 1.17.1. If the S3 storage becomes unavailable while a checkpoint is being built, the current checkpoint expires by timeout and new ones are not triggered. Triggering of new checkpoints resumes only after S3 is restored, and this can be
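For reference, these are the configuration options involved in the behavior described (a sketch with example values; they control the expiry timeout and how many expired/failed checkpoints the job tolerates before failing over — they do not by themselves explain checkpoints failing to re-trigger):

```yaml
execution.checkpointing.interval: 1min
execution.checkpointing.timeout: 10min
# Number of consecutive failed/expired checkpoints tolerated before the job fails:
execution.checkpointing.tolerable-failed-checkpoints: 3
```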

[ANNOUNCE] Apache Flink Kubernetes Operator 1.6.1 released

2023-10-30 Thread Rui Fan
The Apache Flink community is very happy to announce the release of Apache Flink Kubernetes Operator 1.6.1. Please check out the release blog post for an overview of the release: https://flink.apache.org/2023/10/27/apache-flink-kubernetes-operator-1.6.1-release-announcement/ The release is

Query on Flink SQL create DDL primary key for nested field

2023-10-30 Thread elakiya udhayanan
Hi team, I have a Kafka topic named employee which uses a Confluent Avro schema and emits payloads like the one below: { "employee": { "id": "123456", "name": "sampleName" } } I am using the upsert-kafka connector to consume the events from the above Kafka topic, as below, using the Flink SQL DDL
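A sketch of the DDL in question (table name, broker address, and formats are assumptions; the field names come from the payload). The `PRIMARY KEY` on the nested field is the part Flink SQL rejects, per the reply in this digest:

```sql
CREATE TABLE employee_t (
  employee ROW<id STRING, name STRING>,
  PRIMARY KEY (employee.id) NOT ENFORCED  -- not supported: nested field as PK
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'employee',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'raw',
  'value.format' = 'avro-confluent'
);
```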

Re: Re: flink 1.18.0: connecting to the sql gateway with hive beeline + jdbc driver fails

2023-10-30 Thread Benchao Li
The hiveserver2 endpoint turns the flink gateway into a hive server2: to the outside world it simply is a hive server2, and it can be used directly with existing tools that work with hive server2. But what you are actually using is the flink jdbc driver, which does not talk to hive server2; it talks to the flink gateway. So if you start the gateway in hive server2 mode, the driver no longer recognizes it. casel.chen wrote on Mon, Oct 30, 2023 at 14:36: > > Indeed, after not specifying hiveserver2 as the endpoint type, using the hive

Re: Re: Re: Re: Does flink sql not support SHOW CREATE CATALOG?

2023-10-30 Thread casel.chen
Thanks for the explanation. I checked, and there are currently two CatalogStore implementations: one in-memory and one based on the file system. How do I configure the file-system-based CatalogStore? Can the file live on object storage? And how does the flink sql client use this CatalogStore? Thanks! On 2023-10-30 10:28:34, "Xuyang" wrote: >Hi, my understanding is that CatalogStore was introduced so that catalogs can be better managed and registered and their metadata stored; for the motivation see FLIP-295[1].
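A sketch of the file-based CatalogStore configuration introduced with FLIP-295 in Flink 1.18 (the bucket path is a made-up example; the path can point at any Flink-supported FileSystem, which is how object storage would come into play):

```yaml
table.catalog-store.kind: file
table.catalog-store.file.path: s3://my-bucket/flink/catalog-store/  # example path
```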

Does the sql gateway in flink 1.18.0 support submitting jobs to run on k8s?

2023-10-30 Thread casel.chen
I'd like to ask: does the sql gateway in flink 1.18.0 currently only support submitting jobs to run locally? Can it support submitting jobs to yarn or k8s?

Re: Re: flink 1.18.0: connecting to the sql gateway with hive beeline + jdbc driver fails

2023-10-30 Thread casel.chen
Indeed, after not specifying hiveserver2 as the endpoint type, I connected with the hive beeline tool. Thanks! But I still have a question: the official docs say the hiveserver2 endpoint is provided for compatibility with the hive dialect, so in principle beeline should also be able to connect, since beeline natively supports connecting to hiveserver2. Here is the original text: HiveServer2 Endpoint is compatible with HiveServer2 wire protocol and allows users to interact (e.g. submit Hive SQL) with Flink SQL
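The two launch modes discussed in this thread can be sketched as follows (the hive-conf-dir path is a placeholder): the Flink JDBC driver talks to the default REST endpoint, while beeline needs the gateway started as a HiveServer2 endpoint.

```shell
# REST endpoint (the default) - what the Flink JDBC driver talks to:
./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=rest

# HiveServer2 endpoint - what beeline can connect to:
./bin/sql-gateway.sh start -Dsql-gateway.endpoint.type=hiveserver2 \
  -Dsql-gateway.endpoint.hiveserver2.catalog.hive-conf-dir=/path/to/hive/conf
```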