To: Teunissen, F.G.J. (Fred) <Fred.Teunissen@ing.com>
Cc: dev@flink.apache.org
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi Fred,
ah yes, I think I understand the issue. The KafkaSink always creates a
KafkaCommitter even if you are not using EXACTLY_ONCE. It's an unfortunate
limitation of our Sink design.
When I implemented the chan
> ...ckchannel(BackchannelImpl.java:96)
> ~[flink-sql-connector-kafka-4.0.0-2.0.jar:4.0.0-2.0]
> at
> org.apache.flink.connector.kafka.sink.internal.BackchannelFactory.getBackchannel(BackchannelFactory.java:110)
> ~[flink-sql-connector-kafka-4.0.0-2.0.jar:4.0.0-2.0]
> ... 18 more
>
>
https://github.com/apache/flink-connector-kafka/pull/156
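For concreteness, a minimal sketch of the setup under discussion: an
at-least-once KafkaSink with no transactionalIdPrefix configured. Topic,
server, and class names are placeholders, not taken from this thread.

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AtLeastOnceSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092") // placeholder
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic") // placeholder
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                // At-least-once: no setTransactionalIdPrefix(...) call, yet
                // the sink topology still instantiates a committer, which is
                // the limitation described above.
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();

        env.fromData("a", "b", "c").sinkTo(sink);
        env.execute("at-least-once-sink-sketch");
    }
}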
Cc: ar...@apache.org
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi Fred,
I see. It looks like this check was added in
https://issues.apache.org/jira/browse/FLINK-37282
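Judging from the stack trace earlier in the thread, the check presumably has
roughly this shape. This is a hypothetical reconstruction only; the real
check lives in BackchannelFactory.getBackchannel and will differ in detail.

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch, not the actual connector source.
public class DuplicatePrefixCheckSketch {
    private final Set<String> registeredPrefixes = new HashSet<>();

    public synchronized void register(String transactionalIdPrefix) {
        // Two sinks in the same job that resolve to the same prefix
        // (as with the two INSERTs described below) would trip this.
        if (!registeredPrefixes.add(transactionalIdPrefix)) {
            throw new IllegalStateException(
                    "Duplicate transactionalIdPrefix: " + transactionalIdPrefix);
        }
    }
}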
From: Teunissen, F.G.J. (Fred)
Date: Monday, 19 May 2025 at 17:33
To: dev@flink.apache.org
Subject: [EXTERNAL] Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0
Kafka Sinks
Hi David,
Depending on the Flink version, we use a different Kafka connector:
* flink:2.0.0 -> flink-connector-kafka:4.0.0-2.0
Our sinks use the at-least-once delivery guarantee, so according
to the docs, the transactionalIdPrefix should not be required.
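A minimal sketch of such a sink table declaration follows. All names and the
JSON format are placeholder assumptions; the point is that there is no
'sink.transactional-id-prefix' option, since the delivery guarantee is only
at-least-once.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SinkTableSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // At-least-once sink table; no transactional id prefix configured.
        tEnv.executeSql(
                "CREATE TABLE out_table (\n"
                + "  id STRING,\n"
                + "  payload STRING\n"
                + ") WITH (\n"
                + "  'connector' = 'kafka',\n"
                + "  'topic' = 'output-topic',\n"
                + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                + "  'format' = 'json',\n"
                + "  'sink.delivery-guarantee' = 'at-least-once'\n"
                + ")");
    }
}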
kind regards,
Fred
From: David Radley
Date: Monday, 19 May 2025 at 17:57
To: dev@flink.apache.org
Subject: Re: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi,
I had a quick look
Subject: Issue with Duplicate transactionalIdPrefix in Flink 2.0 Kafka Sinks
Hi everyone,
I'm encountering an issue with Flink 2.0 when using the Table API. In previous
versions (1.19/1.20), I was able to create a Flink job with the following setup
(a sketch of this setup is below):
* One Kafka topic-based input table
* One Kafka topic-based output table
* One statement set with two insert statements
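A minimal sketch of this setup; all table, topic, and column names are
placeholder assumptions, not taken from the actual job.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TwoInsertStatementSetSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Kafka topic-based input table.
        tEnv.executeSql(
                "CREATE TABLE in_table (id STRING, payload STRING) WITH (\n"
                + "  'connector' = 'kafka',\n"
                + "  'topic' = 'input-topic',\n"
                + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                + "  'properties.group.id' = 'sketch-group',\n"
                + "  'scan.startup.mode' = 'earliest-offset',\n"
                + "  'format' = 'json'\n"
                + ")");

        // Kafka topic-based output table, at-least-once, no prefix set.
        tEnv.executeSql(
                "CREATE TABLE out_table (id STRING, payload STRING) WITH (\n"
                + "  'connector' = 'kafka',\n"
                + "  'topic' = 'output-topic',\n"
                + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                + "  'format' = 'json',\n"
                + "  'sink.delivery-guarantee' = 'at-least-once'\n"
                + ")");

        // One statement set with two INSERTs; each INSERT compiles to its
        // own Kafka sink instance in the same job graph.
        tEnv.createStatementSet()
                .addInsertSql("INSERT INTO out_table SELECT id, payload "
                        + "FROM in_table WHERE id LIKE 'a%'")
                .addInsertSql("INSERT INTO out_table SELECT id, payload "
                        + "FROM in_table WHERE id NOT LIKE 'a%'")
                .execute();
    }
}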