Thanks for the hint; however, I am not using the Prometheus push gateway.
Regards,
M.
From: Caizhi Weng
Sent: 28 December 2021 02:17:34
To: Geldenhuys, Morgan Karl
Cc: user@flink.apache.org
Subject: Re: How to reduce interval between Uptime Metric
B between A and A + INTERVAL '7' DAY
Personally, I feel A between B - INTERVAL '7' DAY and B is easier to understand.
See:
https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/joins/#interval-joins
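To spell the two equivalent conditions out, here is a sketch of the interval join in Flink SQL. The tables A and B, the event-time attribute ts, and the join key id are all hypothetical placeholders:

```sql
-- Match each B row with A rows that arrived up to 7 days earlier,
-- i.e. keep A's state for 7 days:
SELECT *
FROM A JOIN B ON A.id = B.id
WHERE B.ts BETWEEN A.ts AND A.ts + INTERVAL '7' DAY;

-- The same window written from B's point of view, as suggested above:
-- WHERE A.ts BETWEEN B.ts - INTERVAL '7' DAY AND B.ts
```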
On 2021/12/31 17:57, mayifan wrote:
My understanding is that uncommitted data is lost only when the Flink job is restarted after the Kafka transaction timeout has already expired; Kafka does not retain uncommitted transaction data indefinitely. Under normal circumstances, restarting Flink does not cause data loss.
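To make that concrete, the relevant knob is the producer-side `transaction.timeout.ms`: it should outlive the longest downtime after which you still want uncommitted transactions to be recoverable, and brokers cap it via `transaction.max.timeout.ms` (900000 ms, i.e. 15 minutes, by default). A minimal sketch in Java, with example values rather than recommendations:

```java
import java.util.Properties;

public class KafkaTxnTimeoutSketch {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        // transaction.timeout.ms must be >= the longest expected restart
        // window, and <= the broker's transaction.max.timeout.ms
        // (900000 ms by default). 15 minutes is just an example value.
        producerProps.setProperty("transaction.timeout.ms",
                String.valueOf(15 * 60 * 1000));
        // These properties would then be handed to Flink's Kafka sink.
        System.out.println(producerProps.getProperty("transaction.timeout.ms")); // prints "900000"
    }
}
```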
On 2021/12/31 11:31, zilong xiao wrote:
The official documentation says that data loss may occur when a Kafka transaction times out. Does that mean Flink cannot fully guarantee end-to-end exactly-once? Could someone from the community tell me whether my understanding is correct? I have always heard that Flink writing to Kafka can guarantee end-to-end exactly-once, so the description in the documentation confused me.
Documentation link:
Hi David,
Thanks a lot.
I almost get the point. When I use initializeState to restore the MapState,
the task cannot get a key at that moment, so I just get the key but not
the UK; when I use the MapState in processElement, a key is provided
implicitly, so I get the right UK and UV. But
Hi,
I am trying to run a Flink job on GCP with the source and
destination on Kinesis on AWS.
I have configured the access key on AWS to be able to connect.
I am running Flink 1.12.1
In Flink I use the following code (Scala 2.12.2):
val props = new Properties
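Since the snippet above is cut off, here is a hedged sketch of how such Properties are typically filled in for the Kinesis connector. The string keys below are what I recall the `AWSConfigConstants` constants (`AWS_REGION`, `AWS_CREDENTIALS_PROVIDER`, and the BASIC access/secret keys) resolving to; please verify them against your flink-connector-kinesis version. Region and credentials are placeholders:

```java
import java.util.Properties;

public class KinesisPropsSketch {
    public static void main(String[] args) {
        Properties consumerConfig = new Properties();
        // Assumed string values behind AWSConfigConstants.* --
        // double-check them against your connector version.
        consumerConfig.setProperty("aws.region", "eu-west-1"); // placeholder region
        consumerConfig.setProperty("aws.credentials.provider", "BASIC");
        consumerConfig.setProperty("aws.credentials.provider.basic.accesskeyid",
                "YOUR_ACCESS_KEY"); // placeholder
        consumerConfig.setProperty("aws.credentials.provider.basic.secretkey",
                "YOUR_SECRET_KEY"); // placeholder
        // consumerConfig would then be passed to the FlinkKinesisConsumer.
        System.out.println(consumerConfig.getProperty("aws.region")); // prints "eu-west-1"
    }
}
```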
I wanted to read the discussion from back then, but this link cannot be accessed.
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Move-Flink-ML-pipeline-API-and-library-code-to-a-separate-repository-named-flink-ml-tc49420.html
Hi all!
In a Flink SQL dual-stream JOIN, stream A arrives first and stream B arrives later. I need to keep stream A's state for 7 days and then join stream B against stream A.
Is the correct way to write it B between A and A + INTERVAL '7' DAY
or B between A - INTERVAL '7' DAY and A?
Looking forward to your replies!
Thanks a lot!
Hi Puneet,
are we talking about the `web.upload.dir` [1] ? Maybe others have a
better solution for your problem, but have you thought about configuring
an NFS or some other distributed file system as the JAR directory? In
this case it should be available to all JobManagers.
Regards,
Timo
Hi Siddhesh,
how to use a ProcessFunction is documented here:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/operators/process_function/
.process() is similar to .map() but with more Flink specific methods
available. Anyway, a simple map() should also do the job. But
Hi Yuval,
feel free to open an issue for this. Looks like a bug in our release
artifacts. We should definitely investigate how to solve this as the
ScalaDocs are crucial for the development experience.
Regards,
Timo
On 27.12.21 03:22, Zhipeng Zhang wrote:
Hi Yuval,
It seems that Scala