When defining a state variable in a Flink process function, such as private ValueState
vs; — does this field need to be marked transient, and why? More generally, when does a
variable in Flink code need the transient modifier, and when can it be left off? Any
guidance would be appreciated.
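A minimal sketch of the usual pattern (the class and names below are illustrative, not
from the question): the ValueState field is only a runtime handle obtained from the
RuntimeContext, so it is conventionally declared transient and initialized in open(),
after the function instance has been deserialized on the task manager. As a rule of
thumb, mark a field transient when it should not travel with the serialized function
(state handles, connections, caches) and leave plain serializable configuration values
non-transient.

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class CountingFunction extends RichFlatMapFunction<String, Long> {

    // Runtime handle only; never serialized with the function instance.
    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        // Obtain the handle once the function is running on the task manager.
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void flatMap(String value, Collector<Long> out) throws Exception {
        Long current = count.value();
        long next = (current == null ? 0L : current) + 1;
        count.update(next);
        out.collect(next);
    }
}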
Thanks Martijn, the documentation for Async IO also indicated the same,
and that's what prompted me to post this question here.
~
Karthik
On Mon, Jun 12, 2023 at 7:45 PM Martijn Visser
wrote:
> Hi Karthik,
>
> In my opinion, it makes more sense to use a sink to leverage Scylla over
> using
Hello,
I am using the Flink record stream format FileSource API as below to read Parquet
records.
FileSource.FileSourceBuilder source =
FileSource.forRecordStreamFormat(streamformat, path);
source.monitorContinuously(Duration.ofMillis(1));
I want to log / generate metrics for corrupt records and
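For context, a hedged sketch of how that builder is usually completed and attached to a
job (MyRecord and streamFormat stand in for whatever StreamFormat implementation is in
use here):

// Sketch only: continuously monitoring file source for a record StreamFormat.
FileSource<MyRecord> source =
        FileSource.forRecordStreamFormat(streamFormat, path)
                  .monitorContinuously(Duration.ofMillis(1))  // poll interval from the snippet above
                  .build();

DataStream<MyRecord> records = env.fromSource(
        source, WatermarkStrategy.noWatermarks(), "parquet-file-source");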
hi casel
1. You could consider Flink 1.15, which can use simplified operator names:
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/table/config/#table-exec-simplify-operator-name-enabled
2. Flink also provides a REST API for fetching instantaneous metrics directly, if you do not need historical metrics
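In case it is useful, a sketch of turning that option on programmatically for a SQL job
(assuming a StreamTableEnvironment named tEnv; it can equally be set in flink-conf.yaml):

// Sketch: simplified operator names keep generated metric names bounded.
// The option is available from Flink 1.15 on.
tEnv.getConfig().getConfiguration()
    .setBoolean("table.exec.simplify-operator-name-enabled", true);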
We run 200+ Flink SQL jobs in production. After wiring them up to Prometheus (which
scrapes the job metrics periodically), Prometheus blew through its 64 GB of memory
shortly after monitoring started. Investigation showed overly long metric names were the
cause. A Flink SQL job's metric names are generally composed of the job name plus the
operator name, and the operator name is assembled from the SQL text, so with many SELECT
fields or complex SQL the generated names easily become very long.
Is there a good way to solve this?
Thanks for the answer. For a Flink SQL job, how do I obtain the latency metric of each
operator in the job? Also, when collecting job metrics through Prometheus: because each
operator's name in a Flink SQL job is assembled from the SQL text, the names can get very
long, and pushing such operator metrics straight into Prometheus blows up its memory. Is
there a good way to deal with this?
On 2023-06-12 18:01:11, "Hangxiang Yu" wrote:
>[<source_id>.[<source_subtask_index>.]]<operator_id>.<operator_subtask_index>.latency
>This should meet the need? Different granularities can also be configured.
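For a SQL job the latency markers can be enabled purely through configuration; a sketch,
with illustrative interval and granularity values:

// Sketch: enable latency tracking before building the environment.
// metrics.latency.interval > 0 turns latency markers on;
// metrics.latency.granularity can be single, operator, or subtask.
Configuration conf = new Configuration();
conf.setString("metrics.latency.interval", "1000");
conf.setString("metrics.latency.granularity", "operator");
StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment(conf);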
Hi Yogesh,
I think you need to build the dynamic SQL statement in your service and
then submit the SQL to the Flink cluster.
Best,
Shammon FY
On Mon, Jun 12, 2023 at 9:15 PM Yogesh Rao wrote:
> Hi,
>
> Is there a way we can build dynamic SQL in Flink from the contents of a Map?
>
> Essentially
Hi,
I tried to install the apache-flink Python package but am getting an error message,
and I am unsure how to resolve this, since I am able to install other packages
like pandas without any issues. Any help would be greatly appreciated.
vagrant@devbox:~$ pip install apache-flink
Defaulting to user
Hi Karthik,
In my opinion, it makes more sense to use a sink to leverage Scylla over
using Async IO. The primary use case for Async IO is enrichment, not for
writing to a sink.
Best regards,
Martijn
On Mon, Jun 12, 2023 at 4:10 PM Karthik Deivasigamani
wrote:
> Thanks Martijn for your
Thanks Martijn for your response.
One thing I did not mention was that we are in the process of moving away
from Cassandra to Scylla and would like to use the Scylla Java Driver for
the following reason:
> The Scylla Java driver is shard aware and contains extensions for a
>
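If the shard-aware driver is the hard requirement, the sink route suggested above can
still use it. A rough sketch of a custom sink holding a CQL session (the driver calls
follow the DataStax-style 4.x API that the Scylla Java driver forks; MyRow and the CQL
text are placeholders):

public class ScyllaSink extends RichSinkFunction<MyRow> {
    // Non-serializable runtime objects: transient, created in open().
    private transient CqlSession session;
    private transient PreparedStatement insert;

    @Override
    public void open(Configuration parameters) {
        session = CqlSession.builder().build();  // contact points via driver config
        insert = session.prepare("INSERT INTO ks.events (id, payload) VALUES (?, ?)");
    }

    @Override
    public void invoke(MyRow row, Context context) {
        session.execute(insert.bind(row.id, row.payload));
    }

    @Override
    public void close() {
        if (session != null) {
            session.close();
        }
    }
}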
Hi,
Is there a way we can build dynamic SQL in Flink from the contents of a Map?
Essentially, trying to achieve something like the below:
StringBuilder builder = new StringBuilder("INSERT INTO sampleSink SELECT ");
builder.append("getColumnsFromMap(dataMap), ");
builder.append(" FROM
1. Like a regular Kafka client, so it depends on how you configure it.
2. Yes
3. It depends on the failure of course, but you could create something like
a Dead Letter Queue in case deserialization fails for your incoming message.
Best regards,
Martijn
On Sat, Jun 10, 2023 at 2:03 PM Anirban
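On point 3, one common shape for such a dead letter queue (everything named here —
rawBytes, RawEvent, deadLetterSink — is a placeholder, not an established API):
deserialize defensively and route failures to a side output instead of throwing.

// Sketch: bad records become side-output data rather than job failures.
final OutputTag<byte[]> dlq = new OutputTag<byte[]>("dlq") {};

SingleOutputStreamOperator<RawEvent> parsed = rawBytes
        .process(new ProcessFunction<byte[], RawEvent>() {
            @Override
            public void processElement(byte[] value, Context ctx, Collector<RawEvent> out) {
                try {
                    out.collect(RawEvent.parse(value));  // your real deserializer
                } catch (Exception e) {
                    ctx.output(dlq, value);              // keep the raw bad record
                }
            }
        });

parsed.getSideOutput(dlq).sinkTo(deadLetterSink);  // e.g. a KafkaSink to a DLQ topic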
Hi!
I think you forgot to upgrade the operator CRD (which contains the updated
enum values).
Please see:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/upgrade/#1-upgrading-the-crd
Cheers
Gyula
On Mon, 12 Jun 2023 at 13:38, Liting Liu (litiliu)
wrote:
Hi, I was trying to submit a Flink 1.17 job with flink-kubernetes-operator
version v1.5.0,
but encountered the below exception:
The FlinkDeployment "test-scale-z6t4cd" is invalid: spec.flinkVersion:
Unsupported value: "v1_17": supported values: "v1_13", "v1_14", "v1_15", "v1_16"
I think
Hi,
Why wouldn't you just use the Flink Kafka connector and the Flink Cassandra
connector for your use case?
Best regards,
Martijn
On Mon, Jun 12, 2023 at 12:03 PM Karthik Deivasigamani
wrote:
> Hi,
>I have a use case where I need to read messages from a Kafka topic,
> parse them and write
Hi,
I have a use case where I need to read messages from a Kafka topic,
parse them, and write them to a database (Cassandra). Since Cassandra supports
async APIs, I was considering using the Async IO operator for my writes. I do
not need exactly-once semantics for my use-case.
Is it okay to leverage the
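For completeness, the connector route suggested in the reply above looks roughly like
this for a tuple stream (host, keyspace, and query are made up):

// Sketch: the bundled Cassandra connector writing a Tuple2<String, Long> stream.
CassandraSink.addSink(resultStream)
        .setHost("cassandra.example.com")
        .setQuery("INSERT INTO ks.counts (word, cnt) VALUES (?, ?);")
        .build();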
[<source_id>.[<source_subtask_index>.]]<operator_id>.<operator_subtask_index>.latency
This should meet the need? Different granularities can also be configured.
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/ops/metrics/#io
On Mon, Jun 12, 2023 at 5:05 PM casel.chen wrote:
> I want to measure the latency of data as it flows through each operator of a Flink job. Can the community open-source version do this today?
--
Best,
Hangxiang.
I want to measure the latency of data as it flows through each operator of a Flink job. Can the community open-source version do this today?
Please send an email to user-unsubscr...@flink.apache.org to unsubscribe
Best,
Hang
Yu voidy wrote on Mon, Jun 12, 2023 at 11:39:
Mine is a Flink SQL job — how would I implement the approach you describe? I have seen
that Alibaba Cloud's realtime compute platform (VVR) can display job latency metrics,
and I would like to know how that is implemented.
On 2023-06-08 16:46:34, "17610775726" <17610775...@163.com> wrote:
>Hi
>
>
>All the monitoring you mention can be covered with metrics and alerting. As for the
>LatencyMarker you mention: since it does not take part in the time spent in an
>operator's internal computation logic, this metric is not exact; however, if the job is
>backpressured, the LatencyMarker