Hi team,
We are using the JdbcSink from the flink-connector-jdbc artifact, version
3.1.0-1.17.
I want to know if it's possible to catch a thrown BatchUpdateException and
put its message on a DLQ.
Below is the use case:
The Flink job reads a packet from Kafka and writes it to Postgres using the
JdbcSink
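Not Flink-specific, but the routing the question asks for can be sketched in plain Java using the stdlib's `java.sql.BatchUpdateException`. Everything here is illustrative: the in-memory queue stands in for a DLQ topic, and `writeBatch` stands in for the JDBC batch that the JdbcSink executes internally.

```java
import java.sql.BatchUpdateException;
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class DlqRoutingSketch {
    // Hypothetical DLQ: in a real job this would be e.g. a Kafka topic.
    static final Queue<String> deadLetterQueue = new ArrayDeque<>();

    // Simulated batch write that always fails, standing in for the
    // JDBC batch execution inside the sink.
    static void writeBatch(List<String> batch) throws BatchUpdateException {
        throw new BatchUpdateException("duplicate key", "23505", new int[0]);
    }

    static void process(List<String> batch) {
        try {
            writeBatch(batch);
        } catch (BatchUpdateException e) {
            // Route every record of the failed batch, plus the error
            // message, to the DLQ instead of failing the job.
            for (String record : batch) {
                deadLetterQueue.add(record + " | " + e.getMessage());
            }
        }
    }

    public static void main(String[] args) {
        process(List.of("rec-1", "rec-2"));
        System.out.println(deadLetterQueue.size()); // prints 2
    }
}
```

The open question in the thread is where to place such a catch, since the JdbcSink executes the batch itself; the reply below suggests the Sink v2 post-commit hook for that.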
Hi Aian,
Which sink API are you using?
Have you tried the Sink v2 API [1]?
If you implement the WithPostCommitTopology interface [2], then you can
provide a follow-up step after the commits are finished. I have not tried it
yet, but I expect that the failed Committables are emitted as well, and
available
Hi Lijuan,
Please send an email to user-unsubscr...@flink.apache.org if you want to
unsubscribe from user@flink.apache.org.
Best,
Zakelly
On Thu, Oct 19, 2023 at 6:23 AM Hou, Lijuan via user
wrote:
>
> Hi team,
>
>
>
> Could you please remove this email from the subscription list? I have
Hi Alex,
AFAIK, emptyDir [1] can be used directly as a local disk, and an
emptyDir volume can be defined by referring to this pod template [2].
If you want to use local disks through a PV, you can first create a
StatefulSet and mount the PV through volume claim templates [3]; see the
example “Local Recovery Enabled”
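A minimal pod-template sketch along the lines of [2], adding an emptyDir volume for RocksDB to use as local disk (volume name and mount path are illustrative; `flink-main-container` is the container name the operator expects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: taskmanager-pod-template
spec:
  containers:
    - name: flink-main-container
      volumeMounts:
        - name: rocksdb-local
          mountPath: /rocksdb      # illustrative path
  volumes:
    - name: rocksdb-local
      emptyDir: {}
```

With this in place, `state.backend.rocksdb.localdir` would be pointed at the mount path. Note that emptyDir data lives only as long as the pod does.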
Hi team,
Could you please remove this email from the subscription list?
Thank you!
Best,
Minglei
I did see another email thread that mentions instructions on getting the
image from this link:
https://github.com/apache/flink-kubernetes-operator/pkgs/container/flink-kubernetes-operator/127962962?tag=3f0dc2e
On Wed, Oct 18, 2023 at 6:25 PM Tony Chen wrote:
> We're using the Helm chart to deploy
We're using the Helm chart to deploy the operator right now, and the image
that I'm using was downloaded from Docker Hub:
https://hub.docker.com/r/apache/flink-kubernetes-operator/tags. I wouldn't
be able to use the release-1.6 branch (
https://github.com/apache/flink-kubernetes-operator/commits/re
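If the goal is to run the ghcr.io build linked earlier in the thread instead of the Docker Hub image, the operator Helm chart's image values can be overridden. A sketch, where the tag comes from the link above and everything else is a standard chart value:

```yaml
# values override passed to `helm install/upgrade -f ...`
image:
  repository: ghcr.io/apache/flink-kubernetes-operator
  tag: 3f0dc2e
```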
Hi team,
Could you please remove this email from the subscription list? I have another
email (juliehou...@gmail.com) subscribed as well. I can use that email to
receive flink emails.
Thank you!
Best,
Lijuan
Hi Evgeniy,
Did you rollback your operator version? If yes, did you run into any issues?
I ran into the following exception in my flink-kubernetes-operator pod
while rolling back, and I was wondering if you encountered this.
2023-10-18 21:01:15,251 i.f.k.c.e.l.LeaderElector [ERROR] Exception
The recommended practice for RocksDB usage is to have local disks accessible to
it. The Kubernetes Operator doesn’t have fields related to creating disks for
RocksDB to use.
For instance, say I have maxParallelism=10 but parallelism=1. I have a
statically created PVC named “flink-rocksdb”. The
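A hedged sketch of what mounting that PVC in a TaskManager pod template could look like (the mount path is illustrative; the `claimName` is the statically created PVC named in the question):

```yaml
spec:
  containers:
    - name: flink-main-container
      volumeMounts:
        - name: rocksdb
          mountPath: /data/rocksdb   # illustrative path
  volumes:
    - name: rocksdb
      persistentVolumeClaim:
        claimName: flink-rocksdb     # statically created PVC from the question
```

`state.backend.rocksdb.localdir` would then point at the mount path. One caveat: a single statically named PVC cannot generally be shared by multiple parallel TaskManager pods, which is why the earlier reply suggests volume claim templates on a StatefulSet for the parallel case.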
Hi!
Not sure if it’s the same but could you try picking up the fix from the
release branch and confirming that it solves the problem?
If it does we may consider a quick bug fix release.
Cheers
Gyula
On Wed, 18 Oct 2023 at 18:09, Tony Chen wrote:
> Hi Flink Community,
>
> Most of the Flink appl
Hi,
We have a use case where we need to ensure that data reaches all
endpoints/sinks one by one, or to split the flow if any of them fails. Here is
a schema of the use case:
Current job: Source -> filters/maps/process -> sink1
                                           \-> sink2
                                           \-> sink3
Desired job: Source -> filters/maps/proce
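The split-on-failure behaviour sketched above can be illustrated in plain Java (not Flink API): each record is delivered to every sink in order, and a record that fails at any sink is diverted to a side collection instead of stopping the pipeline. All names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class FanOutSketch {
    // Deliver each record to every sink in order; records that fail at
    // any sink are collected separately (the "split flow").
    static <T> List<T> deliver(List<T> records, List<Consumer<T>> sinks) {
        List<T> failed = new ArrayList<>();
        for (T record : records) {
            try {
                for (Consumer<T> sink : sinks) {
                    sink.accept(record);   // sink1, then sink2, then sink3
                }
            } catch (RuntimeException e) {
                failed.add(record);        // divert on first failure
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<String> sink1 = new ArrayList<>();
        Consumer<String> ok = sink1::add;
        Consumer<String> flaky =
            r -> { if (r.contains("bad")) throw new RuntimeException("boom"); };
        List<String> failed =
            deliver(List.of("a", "bad-1", "b"), List.of(ok, flaky));
        System.out.println(failed);  // prints [bad-1]
    }
}
```

In an actual Flink job the same shape would come from attaching several sinks to one stream, or routing failures via side outputs; the sketch only shows the ordering and splitting logic.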
Hi Flink Community,
Most of the Flink applications run on 1.14 at my company. After upgrading
the Flink Operator to 1.6, we've seen many jobmanager pods show
"JobManagerDeploymentStatus: MISSING".
Here are some logs from the operator pod on one of our Flink applications:
2023-10-18 02:02:
Hi, Patrick:
We have encountered the same issue: the TaskManager's memory consumption
increases almost monotonically.
I'll try to describe what we have observed and our solution. You can check
if it would solve the problem.
We have observed that
1. Jobs with the RocksDB state backend would fail after
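The reply is cut off before its observations and solution. For context only, one commonly used set of knobs for bounding RocksDB's native memory in this situation looks like the following; the values are illustrative and this is not necessarily the fix the author goes on to describe:

```yaml
# flink-conf.yaml (illustrative values)
state.backend: rocksdb
# Keep RocksDB's block cache and memtables inside Flink's managed
# memory budget instead of letting them grow per column family:
state.backend.rocksdb.memory.managed: true
taskmanager.memory.managed.fraction: 0.4
taskmanager.memory.process.size: 4g
```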