Hi all,
I am currently facing a problem with a pipeline (DataStream API) where I need
to split a GenericRecord into its fields and then aggregate all the values
of a particular field into 30-minute windows. Therefore, if I were to key only
by the field name, I would send all the values
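Roughly, a sketch of what I mean is below (the numeric cast and the sum aggregation are placeholders for my actual logic, and it assumes timestamps/watermarks are already assigned upstream):
```
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class PerFieldAggregation {

    // Explode every GenericRecord into (fieldName, value) pairs, key by the field
    // name, and sum each field's values over 30-minute event-time windows.
    static DataStream<Tuple2<String, Double>> aggregatePerField(DataStream<GenericRecord> records) {
        return records
            .flatMap((GenericRecord rec, Collector<Tuple2<String, Double>> out) -> {
                for (Schema.Field field : rec.getSchema().getFields()) {
                    Object value = rec.get(field.name());
                    if (value instanceof Number) {
                        out.collect(Tuple2.of(field.name(), ((Number) value).doubleValue()));
                    }
                }
            })
            .returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
            // Keying only by the field name funnels every value of that field
            // to a single subtask, which is exactly my concern above.
            .keyBy(t -> t.f0)
            .window(TumblingEventTimeWindows.of(Time.minutes(30)))
            .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1));
    }
}
```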
Hi,
> So if I want to measure sink latency, is it enough to just use context.currentProcessingTime() - context.timestamp()?
Yes. For the details you can refer to this internally implemented operator [1].
> In the new Sink
> V2, SinkWriter.Context no longer has a currentProcessingTime() method. Can I simply use System.currentTimeMillis()
> - context.timestamp() to get the sink latency?
That should work. The machine clocks of the different TMs may differ slightly, so it won't be perfectly accurate, but it should be good enough.
[1]
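A rough sketch of what that could look like in a Sink V2 writer (the class name and the metric handling are placeholders, not from any actual connector):
```
import java.io.IOException;
import org.apache.flink.api.connector.sink2.SinkWriter;

// Measures per-record sink latency with the TaskManager wall clock, since
// SinkWriter.Context no longer exposes currentProcessingTime().
public class MySinkWriter<T> implements SinkWriter<T> {

    @Override
    public void write(T element, Context context) throws IOException, InterruptedException {
        Long eventTs = context.timestamp(); // may be null if the record carries no timestamp
        if (eventTs != null) {
            long latencyMs = System.currentTimeMillis() - eventTs;
            // report latencyMs via a metric, e.g. a histogram registered when the writer is created
        }
        // ... actually emit the element to the external system here
    }

    @Override
    public void flush(boolean endOfInput) throws IOException, InterruptedException {}

    @Override
    public void close() throws Exception {}
}
```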
Good morning Rion,
Are you in session job mode or application mode? I’ve had some similar issues
(logback) lately and it turned out that I also needed to add the additional
dependencies (I guess JsonTemplateLayout is one of them) to the lib folder of
the deployment.
Kind regards
Dominik
Please send an email with any content to user-zh-unsubscr...@flink.apache.org to unsubscribe from emails from the
user-zh@flink.apache.org
mailing list.
Best,
Zhanghao Chen
From: 曹明勤
Sent: Thursday, February 22, 2024 9:42
To: user-zh@flink.apache.org
Subject: Unsubscribe
Unsubscribe
Unsubscribe
Hey Flinkers,
Recently I’ve been in the process of migrating a series of older Flink jobs to
use the official operator and have run into a snag on the logging front.
I’ve attempted to use the following configuration for the job:
```
logConfiguration:
  log4j-console.properties: |+
Hi,
I have been trying to write a temporal join in SQL against a rolling aggregate
view. However, it does not work and throws:
org.apache.flink.table.api.ValidationException: Event-Time Temporal Table Join
requires both primary key and row time attribute in versioned table, but no row
time
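In case it helps frame the question, this is the pattern I understand the error to be pointing at: the build side needs to be a versioned table/view carrying both a primary key and a row-time attribute, which a deduplication view provides. The tables and columns below (rates_history, orders, currency, update_time) are made up, and both sources are assumed to already be registered with watermarks:
```
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TemporalJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Versioned view: the deduplication pattern gives Flink an inferred primary key
        // (currency) and keeps the row-time attribute (update_time) of rates_history.
        tEnv.executeSql(
            "CREATE VIEW versioned_rates AS "
                + "SELECT currency, rate, update_time FROM ( "
                + "  SELECT *, ROW_NUMBER() OVER ( "
                + "    PARTITION BY currency ORDER BY update_time DESC) AS rn "
                + "  FROM rates_history "
                + ") WHERE rn = 1");

        // Event-time temporal join against the versioned view.
        tEnv.executeSql(
            "SELECT o.order_id, o.amount * r.rate AS converted "
                + "FROM orders AS o "
                + "LEFT JOIN versioned_rates FOR SYSTEM_TIME AS OF o.order_time AS r "
                + "ON o.currency = r.currency");
    }
}
```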
Hi Folks,
I am currently seeking full-time, non-consulting positions working with Flink
and Scala, specifically at the Principal or Staff level, in India or the USA.
I require an H-1B transfer and assistance with relocation from India; my
I-140 is approved.
Thanks & Regards
Sri
Hello
As per the OpenSearch connector documentation, OpensearchEmitter can be used to
perform requests of different types, e.g. IndexRequest, DeleteRequest, and
UpdateRequest.
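For context, the sort of emitter I have in mind looks roughly like this (the index name, the "op"/"id" fields, and the choice of element type are made up for illustration):
```
import java.util.Map;
import org.apache.flink.api.connector.sink2.SinkWriter;
import org.apache.flink.connector.opensearch.sink.OpensearchEmitter;
import org.apache.flink.connector.opensearch.sink.RequestIndexer;
import org.opensearch.action.delete.DeleteRequest;
import org.opensearch.action.index.IndexRequest;
import org.opensearch.action.update.UpdateRequest;

// Emits a different request type per element, based on a hypothetical "op" field.
public class MapEmitter implements OpensearchEmitter<Map<String, Object>> {

    @Override
    public void emit(Map<String, Object> doc, SinkWriter.Context context, RequestIndexer indexer) {
        String id = (String) doc.get("id");
        String op = (String) doc.getOrDefault("op", "index");
        switch (op) {
            case "delete":
                indexer.add(new DeleteRequest("my-index", id));
                break;
            case "update":
                indexer.add(new UpdateRequest("my-index", id).doc(doc));
                break;
            default:
                indexer.add(new IndexRequest("my-index").id(id).source(doc));
        }
    }
}
```
If I read the docs right, an emitter like this is then plugged in via OpensearchSinkBuilder#setEmitter.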
Thanks Thias and Zakelly,
I probably muddied the waters by saying that my use case was similar to
kvCache.
What I was calling "non-serializable state" is actually a Random Cut Forest
ML model that cannot be serialized by itself, but from which you can extract a
serializable state. That state is serializable, but
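Concretely, the pattern I'm describing looks something like the sketch below, where Model and ModelState (and createNew/fromState/extractState/scoreAndUpdate) stand in for the actual RCF classes and their (de)serialization hooks; only the extracted state ever goes into Flink state:
```
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class ScoringFunction extends KeyedProcessFunction<String, double[], Double> {

    // Live, non-serializable models, rebuilt on demand and never checkpointed.
    private transient Map<String, Model> models;
    // Extracted, serializable snapshot of the model; this is what Flink checkpoints.
    private transient ValueState<ModelState> savedState;

    @Override
    public void open(Configuration parameters) {
        models = new HashMap<>();
        savedState = getRuntimeContext().getState(
            new ValueStateDescriptor<>("model-state", ModelState.class));
    }

    @Override
    public void processElement(double[] point, Context ctx, Collector<Double> out) throws Exception {
        Model model = models.get(ctx.getCurrentKey());
        if (model == null) {
            ModelState snapshot = savedState.value();
            model = snapshot == null ? Model.createNew() : Model.fromState(snapshot);
            models.put(ctx.getCurrentKey(), model);
        }
        out.collect(model.scoreAndUpdate(point));
        // Push the extracted state back so the next checkpoint picks it up
        // (this could also be done less often, e.g. on a processing-time timer).
        savedState.update(model.extractState());
    }
}
```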
Good morning all,
Let me loop myself in …
1. Another, even more convenient, way to enable caching is to configure/assign
RocksDB to use more off-heap memory for its cache; you might also consider
enabling bloom filters (it all depends on how large your key space is
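For reference, a rough sketch of the knobs I mean, with placeholder values; they would normally go into the Flink configuration (flink-conf.yaml) rather than be set in code, so the programmatic form below is only meaningful for a local run:
```
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbTuningSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Give the TaskManager more managed (off-heap) memory, which RocksDB uses
        // for its block cache and write buffers. Placeholder value.
        conf.setString("taskmanager.memory.managed.fraction", "0.6");
        // Let RocksDB size itself from Flink's managed memory (default is true).
        conf.setString("state.backend.rocksdb.memory.managed", "true");
        // Bloom filters make negative point lookups much cheaper.
        conf.setString("state.backend.rocksdb.use-bloom-filter", "true");

        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment(conf);
        // ... build the job on env as usual
    }
}
```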