Hi,
Thanks for the reply.
I don't think I can use the IAM integration to avoid distributing keys to the
application, because my Flink application is running outside AWS EC2, on
native K8s cluster nodes, from which I am connecting to the S3 services
hosted on AWS.
If there is a procedure to still
Flink 1.16.1
MySQL 8.0.33
jdbc-3.1.0-1.16
I have a SQL statement:
INSERT INTO test_flink_res2 (id, name, address)
SELECT a.id, a.name, a.address
FROM test_flink_res1 a
LEFT JOIN test_flink_res2 b ON a.id = b.id
WHERE a.name = 'abc0.11317691217472489' AND b.id IS NULL;
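For reference, a left join filtered with `b.id IS NULL` is the standard anti-join idiom: keep the rows of `test_flink_res1` that have no matching id in `test_flink_res2`. A minimal Python sketch of those semantics (the rows here are made up for illustration):

```python
# Hypothetical table contents; only the anti-join semantics matter.
res1 = [
    {"id": 1, "name": "abc0.11317691217472489", "address": "x"},
    {"id": 2, "name": "abc0.11317691217472489", "address": "y"},
    {"id": 3, "name": "other", "address": "z"},
]
res2_ids = {2}  # ids already present in test_flink_res2

def anti_join_insert(rows, existing_ids, name_filter):
    """Rows matching the name filter whose id is absent from the target table."""
    return [r for r in rows
            if r["name"] == name_filter and r["id"] not in existing_ids]

selected = anti_join_insert(res1, res2_ids, "abc0.11317691217472489")
print([r["id"] for r in selected])  # only id 1 qualifies: [1]
```

Row 2 matches the name filter but already exists in `test_flink_res2`, so only row 1 would be inserted.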
Why does Flink SQL convert this statement
Hi,
> if for some reason there exists a checkpoint with the same name.
Could you give more details about your scenario here?
From your description, I guess this problem occurred when a job restarted;
was this restart triggered manually?
In the common restart process, the job will retrieve the
Hello Shammon/Team,
We are using the same Flink version, 1.14.2; the only change is that earlier we
were using FlinkKafkaConsumer and now we are moving to KafkaSource. I don't see
any difference in the job planner, but I see that KafkaSource is introducing more
latency while performing Flink Kafka
Hi, amenreet,
As Hangxiang said, we should use a new checkpoint dir if the new job has
the same jobId as the old one. Alternatively, do not use a fixed jobId,
and the checkpoint dirs will not conflict.
Best,
Hang
Hangxiang Yu wrote on Wed, May 10, 2023 at 10:35:
> Hi,
> I guess you used a fixed JOB_ID, and
Hi, Iris,
The metrics have already been calculated in Flink, so we only need to report
these metrics as gauges.
For example, say the metric `metricA` is a Flink counter that has increased from
1 to 2. The statsd gauge will now be 2. If we registered it as a statsd
counter, we would send 1 and 2 to the
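To make the difference concrete, here is a small Python sketch (function names are made up) of what happens when a cumulative Flink counter is forwarded to statsd as a gauge versus as a counter:

```python
def report_as_gauge(samples):
    """A statsd gauge keeps only the last reported value."""
    last = None
    for v in samples:
        last = v
    return last

def report_as_counter(samples):
    """A statsd counter treats every reported value as an increment and sums them."""
    total = 0
    for v in samples:
        total += v
    return total

# A Flink counter incremented from 1 to 2: Flink reports the cumulative
# value each reporting interval, so statsd receives 1, then 2.
samples = [1, 2]
print(report_as_gauge(samples))    # 2 -> the correct current value
print(report_as_counter(samples))  # 3 -> double-counted
```

This is why reporting cumulative counters as statsd counters would inflate the totals, and gauges are used instead.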
Hi,
I guess you used a fixed JOB_ID and configured the same checkpoint dir as
before?
And you may also have started the job without the previous state?
The new job cannot know anything about the previous checkpoints; that's why the
new job fails when it tries to generate a new checkpoint.
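As a rough illustration of the collision (assuming the standard Flink checkpoint layout of `<checkpoint-dir>/<job-id>/chk-<n>`; the paths below are hypothetical):

```python
import os

def checkpoint_path(checkpoint_dir, job_id, n):
    """Flink stores checkpoint n of a job under <dir>/<job-id>/chk-<n>."""
    return os.path.join(checkpoint_dir, job_id, f"chk-{n}")

# A restarted job with a fixed job id and the same checkpoint dir produces
# the same path the old job already used for its first checkpoint:
old_job = checkpoint_path("/checkpoints", "00000000000000000000000000000001", 1)
new_job = checkpoint_path("/checkpoints", "00000000000000000000000000000001", 1)
print(old_job == new_job)  # True -> the fresh job's chk-1 collides
```

With a new job id (or a new checkpoint dir), the paths no longer overlap and the conflict disappears.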
I'd like to suggest
Hi Madan,
Could you give the old and new versions of Flink and provide the job plan?
I think it will help the community find the root cause.
Best,
Shammon FY
On Wed, May 10, 2023 at 2:04 AM Madan D via user wrote:
> Hello Team,
>
> We have been using Flink Kafka consumer and recently we have
Hi
If you use CEP, you can union the two streams into one and then match the
different event types via subtype when defining the CEP Pattern, for example:
DataStream<Event> s1 = ...;
DataStream<Event> s2 = ...;
DataStream<Event> s = s1.union(s2);
Pattern<Event, ?> pattern = Pattern.<Event>begin("first")
    .subtype(E1.class)
    .where(...)
    .followedBy("second")
    .subtype(E2.class)
    .where(...);
If you use Flink
Hi, this should be the already-confirmed issue FLINK-31839, which has been fixed in 1.17.1; see:
https://issues.apache.org/jira/browse/FLINK-31839
On Sat, May 6, 2023 at 5:00 PM maker_d...@foxmail.com <
maker_d...@foxmail.com> wrote:
> Flink version: flink-1.17.0
> K8s application mode
>
> Delegation tokens are already disabled in flink-conf:
>
To unsubscribe from the user-zh@flink.apache.org mailing list, send an email
with any content to user-zh-unsubscr...@flink.apache.org; see [1].
[1] https://flink.apache.org/zh/community/
Best,
Yuxin
胡家发 <15802974...@163.com> wrote on Sun, May 7, 2023 at 22:14:
> Unsubscribe
Requirement: the business side implements a payment feature and needs to use the third-party payment platform's transaction data with Flink SQL to do real-time reconciliation, raising an alert for third-party payment platform transactions that have not arrived within 30 minutes.
How can this dual-stream real-time reconciliation scenario be implemented with Flink CEP SQL?
The examples found online are all based on a single stream, while the scenario above involves two streams: one is PlatformPaymentStream and the other is ThirdPartyPaymentStream.
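Setting the CEP SQL syntax aside, the matching logic itself can be sketched in batch form. A minimal Python simulation, assuming payments are matched by an order id and the 30-minute deadline from the question (event shapes and names are hypothetical):

```python
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)

def reconcile(platform_events, third_party_events):
    """Alert on platform payments with no third-party record within 30 minutes."""
    third_party = {e["order_id"]: e["ts"] for e in third_party_events}
    alerts = []
    for e in platform_events:
        ts = third_party.get(e["order_id"])
        if ts is None or ts - e["ts"] > TIMEOUT:
            alerts.append(e["order_id"])
    return alerts

platform = [
    {"order_id": "o1", "ts": datetime(2023, 5, 7, 12, 0)},
    {"order_id": "o2", "ts": datetime(2023, 5, 7, 12, 5)},
]
third = [{"order_id": "o1", "ts": datetime(2023, 5, 7, 12, 10)}]
print(reconcile(platform, third))  # ['o2'] -- no third-party record arrived
```

In streaming form, the same effect is usually achieved with an interval join or a timer that fires 30 minutes after the platform-side event if no match has been seen.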
Hello Team,
We have been using the Flink Kafka consumer and recently have been moving to
the Flink Kafka source to get more advanced features, but we have been observing
more rebalances right after data is consumed and moved to the next operator than
with the Flink Kafka consumer.
Can you please let us know what
Unsubscribe
The following is the content of the forwarded email
From: "胡家发" <15802974...@163.com>
To: user-zh
Date: 2023-05-07 22:13:55
Subject: Unsubscribe
Unsubscribe
Please send an email with any content to user-zh-unsubscr...@flink.apache.org
if you want to unsubscribe from the
user-zh@flink.apache.org mailing list; you can
refer to [1][2] for more details on managing your subscription.
Unsubscribe
> Could you please open a jira
Done: https://issues.apache.org/jira/browse/FLINK-32041
> PR (in case you fixed this already)
Haven't fixed it yet! But if I find time to do it I will!
Thanks!
On Tue, May 9, 2023 at 4:49 AM Tamir Sagi wrote:
> Hey,
>
> I also encountered something similar
Hi all,
Is there any way to prevent a restart of a Flink job, or to override the
checkpoint metadata, if for some reason a checkpoint with the same name
already exists? I get the following exception and my job restarts; I have been
trying to find a solution for a very long time but haven't found anything useful yet.
Hi Team,
I am deploying my job in application mode on Flink 1.16.0, but have been
constantly receiving this error for a long time:
2023-04-10 13:48:39,366 INFO org.apache.kafka.clients.NetworkClient
[] - [Consumer clientId=event-executor-client-1,
hi Anuj,
As Martijn said, IAM is the preferred option, but if you have no other way than
access keys, then environment variables are the better choice.
In that case the conf doesn't contain plain-text keys.
Just a side note: putting `s3a.access.key` into the Flink conf file is not
configuring Hadoop S3. The way
Hi Anuj,
You can't provide the values for S3 in job code, since the S3 filesystems
are loaded via plugins. Credentials must be stored in flink-conf.yaml. The
recommended method for setting up credentials is by using IAM, not via
Access Keys. See
Hi,
Thanks for the reply.
Yes, my Flink deployment is on K8s, but I am not using the Flink K8s operator.
If I understood correctly, even with an init-container the flink-conf.yaml
(inside the container) would ultimately contain unencrypted values for the access
tokens. We don't want to persist such sensitive
Hey folks, I'm trying to troubleshoot why counter metrics are appearing as gauges on
my end. Is it expected that the StatsdMetricsReporter reports gauges for
counters as well?
Looking at this one:
Hey,
I also encountered something similar, with a different error. I enabled HA with
RBAC.
org.apache.flink.kubernetes.shaded.io.fabric8.kubernetes.client.KubernetesClientException","message":"Failure
executing: GET at: https://172.20.0.1/api/v1/nodes. Message:
Forbidden!Configured service
Hi, Pritam,
I see Martijn has responded to the ticket.
The Kafka source (FLIP-27) commits offsets in two places: the Kafka consumer's
auto-commit, and a call to `consumer.commitAsync` when a checkpoint is completed.
- If checkpointing is enabled and commit.offsets.on.checkpoint = true, the
kafka connector commits
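As a rough illustration (not Flink code) of why externally observed Kafka lag looks inflated when offsets are committed only on checkpoints: between checkpoints the committed offset stands still while the consumed offset advances.

```python
def committed_offsets(consumed_per_tick, checkpoint_every):
    """Committed offset advances only when a checkpoint completes."""
    committed, history = 0, []
    for tick, consumed in enumerate(consumed_per_tick, start=1):
        if tick % checkpoint_every == 0:  # a checkpoint completes here
            committed = consumed
        history.append((consumed, committed))
    return history

# The consumer reads 10 records per tick; a checkpoint completes every 3 ticks.
for consumed, committed in committed_offsets([10, 20, 30, 40, 50, 60], 3):
    print(consumed, committed, "lag seen externally:", consumed - committed)
```

Between checkpoints the externally visible lag grows even though the job is keeping up, which matches the behaviour described in the question below.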
Hi Team,
I need to get the Kafka lag to prepare a graph, and it depends on the Kafka
committed offset. Flink updates the offsets only after checkpointing to
keep them consistent.
Default behaviour as per the docs:
If checkpointing is enabled but consumer.setCommitOffsetsOnCheckpoints is set to
false, then