hongxu han created FLINK-35374:
--
Summary: Flink 1.14 kafka connector Demo Error
Key: FLINK-35374
URL: https://issues.apache.org/jira/browse/FLINK-35374
Project: Flink
Issue Type: Bug
Martijn Visser created FLINK-35109:
--
Summary: Drop support for Flink 1.17 and 1.18 in Flink Kafka
connector
Key: FLINK-35109
URL: https://issues.apache.org/jira/browse/FLINK-35109
Project: Flink
yufeng.sun created FLINK-35034:
--
Summary: codegen compile error raised when using the Kafka connector and
protobuf format
Key: FLINK-35034
URL: https://issues.apache.org/jira/browse/FLINK-35034
Project: Flink
Jiabao Sun created FLINK-35011:
--
Summary: The change in visibility of MockDeserializationSchema
causes a compilation failure in the Kafka connector
Key: FLINK-35011
URL: https://issues.apache.org/jira/browse/FLINK-35011
Martijn Visser created FLINK-35008:
--
Summary: Bump org.apache.commons:commons-compress from 1.25.0 to
1.26.0 for Flink Kafka connector
Key: FLINK-35008
URL: https://issues.apache.org/jira/browse/FLINK-35008
Martijn Visser created FLINK-35007:
--
Summary: Update Flink Kafka connector to support 1.19 and test
1.20-SNAPSHOT
Key: FLINK-35007
URL: https://issues.apache.org/jira/browse/FLINK-35007
Project
yansuopeng created FLINK-34995:
--
Summary: flink kafka connector source stuck when partition leader
invalid
Key: FLINK-34995
URL: https://issues.apache.org/jira/browse/FLINK-34995
Project: Flink
Zhenqiu Huang created FLINK-34466:
-
Summary: Implement Lineage Interface in Kafka Connector
Key: FLINK-34466
URL: https://issues.apache.org/jira/browse/FLINK-34466
Project: Flink
Issue Type
Martijn Visser created FLINK-34320:
--
Summary: Flink Kafka connector tests time out
Key: FLINK-34320
URL: https://issues.apache.org/jira/browse/FLINK-34320
Project: Flink
Issue Type: Bug
with what was agreed upon for external connector versioning [1].
[1]
https://cwiki.apache.org/confluence/display/FLINK/Externalized+Connector+development
Best,
Mason
On Thu, Jan 25, 2024 at 2:16 PM Martijn Visser
wrote:
> Hi everyone,
>
> The latest version of the Flink Kafka connector
Hi everyone,
The latest version of the Flink Kafka connector that's available is
currently v3.0.2, which is compatible with both Flink 1.17 and Flink 1.18.
I would like to propose to create a release which is either v3.1, or v4.0
(see below), with compatibility for Flink 1.17 and Flink 1.18
Martijn Visser created FLINK-34154:
--
Summary: Bump org.apache.zookeeper:zookeeper from 3.5.9 to 3.7.2
for Kafka connector
Key: FLINK-34154
URL: https://issues.apache.org/jira/browse/FLINK-34154
Martijn Visser created FLINK-34149:
--
Summary: Flink Kafka connector can't compile against 1.19-SNAPSHOT
Key: FLINK-34149
URL: https://issues.apache.org/jira/browse/FLINK-34149
Project: Flink
Mason Chen created FLINK-34127:
--
Summary: Kafka connector repo runs a duplicate of
`IntegrationTests` framework tests
Key: FLINK-34127
URL: https://issues.apache.org/jira/browse/FLINK-34127
Project
Qingsheng Ren created FLINK-33512:
-
Summary: Update download link in doc of Kafka connector
Key: FLINK-33512
URL: https://issues.apache.org/jira/browse/FLINK-33512
Project: Flink
Issue Type
Timo Walther created FLINK-33497:
Summary: Update the Kafka connector to support DISTRIBUTED BY
clause
Key: FLINK-33497
URL: https://issues.apache.org/jira/browse/FLINK-33497
Project: Flink
Darcy Lin created FLINK-33484:
-
Summary: Flink Kafka Connector Offset Lag Issue with Transactional
Data and Read Committed Isolation Level
Key: FLINK-33484
URL: https://issues.apache.org/jira/browse/FLINK-33484
On Sun, 29 Oct 2023 at 08:02, Leonard Xu wrote:
> +1 (binding)
>
> - Verified signatures
> - Verified hashsums
> - Checked Github release tag
> - Built from source code succeeded
> - Checked release notes
> - Reviewed the web PR
>
> Best,
> Leonard
On Oct 29, 2023 at 11:34 AM, mystic lama wrote:
> +1 (non-binding)
>
> - verified signatures
> - build with Java 8 and Java 11 - build success
>
> Minor observation
> - RAT check flagged that README.md is missing ASL
Pavel Khokhlov created FLINK-33401:
--
Summary: Kafka connector has broken version
Key: FLINK-33401
URL: https://issues.apache.org/jira/browse/FLINK-33401
Project: Flink
Issue Type: Bug
On Fri, 27 Oct 2023 at 23:40, Xianxun Ye wrote:
+1 (non-binding)
- Started a local Flink 1.18 cluster, read and wrote with Kafka and Upsert
Kafka connector successfully to Kafka 2.2 cluster
One minor question: should we update the dependency manual of these two
documentations[1][2]?
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka/#dependencies
, successfully read and
wrote with the Kafka connector to Confluent Cloud with AVRO and Schema
Registry enabled
On Thu, Oct 26, 2023 at 5:09 AM Qingsheng Ren wrote:
>
> +1 (binding)
>
> - Verified signature and checksum
> - Verified that no binary exists in the source archive
> - Built fro
- Nothing suspicious in LICENSE and NOTICE file
- Reviewed web PR
Thanks for the effort, Gordon!
Best,
Qingsheng
On Thu, Oct 26, 2023 at 5:13 AM Tzu-Li (Gordon) Tai
wrote:
> Hi everyone,
>
> Please review and vote on release candidate #1 for version 3.0.1 of the
> Apache Flink Kaf
Hi everyone,
Please review and vote on release candidate #1 for version 3.0.1 of the
Apache Flink Kafka Connector, as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)
This release contains important changes for the following:
- Supports
Zhanghao Chen created FLINK-33265:
-
Summary: Support source parallelism setting for Kafka connector
Key: FLINK-33265
URL: https://issues.apache.org/jira/browse/FLINK-33265
Project: Flink
Hi David,
I didn't see this message, but I did go over the Flink repo yesterday
and closed off all PRs that were relevant to the Kafka connector.
Removing the label won't help much: if a user classifies a Jira ticket
as Connector/Kafka but opens a PR in the Flink repo, the label will be
re-added
Qingsheng Ren created FLINK-33219:
-
Summary: Kafka connector has architecture test violation against
Flink 1.18
Key: FLINK-33219
URL: https://issues.apache.org/jira/browse/FLINK-33219
Project: Flink
Jing Ge created FLINK-33191:
---
Summary: Kafka Connector should directly depend on 3rd-party libs
instead of flink-shaded repo
Key: FLINK-33191
URL: https://issues.apache.org/jira/browse/FLINK-33191
Project
Hi,
I was looking at the PR backlog in the Flink repository and realised that there
are 51 hits on the search
https://github.com/apache/flink/pulls?q=is%3Apr+is%3Aopen+kafka-connector.
And 25 hits on
https://github.com/apache/flink/pulls?q=is%3Apr+is%3Aopen+kafka-connector+label%3Acomponent
Aarsh Shah created FLINK-33124:
--
Summary: Kafka Connector not working for table
Key: FLINK-33124
URL: https://issues.apache.org/jira/browse/FLINK-33124
Project: Flink
Issue Type: Bug
Martijn Visser created FLINK-33104:
--
Summary: Nightly run for Flink Kafka connector fails
Key: FLINK-33104
URL: https://issues.apache.org/jira/browse/FLINK-33104
Project: Flink
Issue Type
Martijn Visser created FLINK-33017:
--
Summary: Nightly run for Flink Kafka connector fails
Key: FLINK-33017
URL: https://issues.apache.org/jira/browse/FLINK-33017
Project: Flink
Issue Type
Xiaojian Sun created FLINK-32743:
Summary: Flink kafka connector source can directly parse data
collected from kafka-connect
Key: FLINK-32743
URL: https://issues.apache.org/jira/browse/FLINK-32743
Matthias Pohl created FLINK-32582:
-
Summary: Move TypeSerializerUpgradeTestBase from Kafka connector
into flink-connector-common
Key: FLINK-32582
URL: https://issues.apache.org/jira/browse/FLINK-32582
Thank you for this!
On Tue, Jun 27, 2023 at 8:57 PM Mason Chen wrote:
> Hi all,
>
> I would like to inform you that we have removed the Kafka connector code
> from the Flink main repo. This should reduce the developer confusion of
> which repo to submit PRs.
>
> Regarding a
Hi all,
I would like to inform you that we have removed the Kafka connector code
from the Flink main repo. This should reduce developer confusion about
which repo to submit PRs to.
Regarding a few nuances, we have kept the Confluent avro format in the main
repo. This is because the format
Mason Chen created FLINK-32451:
--
Summary: Refactor Confluent Schema Registry E2E Tests to remove
Kafka connector dependency
Key: FLINK-32451
URL: https://issues.apache.org/jira/browse/FLINK-32451
Chesnay Schepler created FLINK-32327:
Summary: Python Kafka connector runs into strange
NullPointerException
Key: FLINK-32327
URL: https://issues.apache.org/jira/browse/FLINK-32327
Project: Flink
Hi Jark, John,
Thank you for the discussion! I will proceed with completing the patch that
adds exactly-once to upsert-kafka connector.
Best,
Alexander
On Wed, Apr 12, 2023 at 12:21 AM Jark Wu wrote:
> Hi John,
>
> Thank you for your valuable input. It sounds reasonable to me.
> >> > Exactly once can't recognize duplicated records and drop duplications.
> >> > That means duplicated records are written into topics even if
> >> > exactly-once mode is enabled.
>> together with idempotent
>> producers prevent duplicated records[1], at least in the cases when
>> upstream does not produce them intentionally and across checkpoints.
>>
>> Could you please elaborate or point me to the docs that explain the reason
>> for duplicated records
for duplicated records upstream and across checkpoints? I am relatively new
to Flink and not aware of it. According to the kafka connector
documentation, it does support exactly once semantics by configuring
'sink.delivery-guarantee'='exactly-once'[2]. It is not clear to me why we
can't make upsert-kafka configurable i
Do we have to move Debezium _now_?
There is no hard dependency between debezium-json and the Kafka
connector, nor does the format depend on Kafka afaict.
So is this only about the e2e test that uses debezium-json + kafka
connector?
If so, then I would suggest to put debezium-json issue aside
We talked with @PatrickRen <https://github.com/PatrickRen> offline; we didn't
have a suitable way to fix it before, and we will solve it this week.
On Sat, Mar 25, 2023 at 13:13, Shammon FY wrote:
> Thanks Jing and Gordon, I have closed the pr
> https://github.com/apache/flink/pull/21965 and will open a new one for
> kafka connector
Hello Flink community,
I would like to discuss if it is worth adding EXACTLY_ONCE delivery
semantics to upsert-kafka connector. According to upsert-kafka docs[1] and
ReducingUpsertSink javadoc[2], the connector is correct even with duplicate
records under AT_LEAST_ONCE because the records
Thanks Jing and Gordon, I have closed the pr
https://github.com/apache/flink/pull/21965 and will open a new one for
kafka connector
Best,
shammon FY
On Saturday, March 25, 2023, Ran Tao wrote:
> Thank you Gordon and all the people who have worked on the externalized
> kafka implementation
Thanks Jing! I missed https://github.com/apache/flink/pull/21965 indeed.
Please let us know if anything else was overlooked.
On Fri, Mar 24, 2023 at 8:13 AM Jing Ge wrote:
> Thanks Gordon for driving this! There is another PR related to Kafka
> connector: https://github.com/apache/flink/pull/21965
Thanks Gordon for driving this! There is another PR related to Kafka
connector: https://github.com/apache/flink/pull/21965
Best regards,
Jing
On Fri, Mar 24, 2023 at 4:06 PM Tzu-Li (Gordon) Tai
wrote:
> Hi all,
>
> Now that Flink 1.17 has been released, and given that we've alrea
Hi all,
Now that Flink 1.17 has been released, and given that we've already synced
the latest Kafka connector code up to Flink 1.17 to the
apache/flink-connector-kafka repo (thanks to Mason and Martijn for most of
the effort!), we're now in the final step of completely removing the Kafka
Ruibin Xing created FLINK-31483:
---
Summary: Implement Split Deletion Support in Flink Kafka Connector
Key: FLINK-31483
URL: https://issues.apache.org/jira/browse/FLINK-31483
Project: Flink
Hi Krish,
Thanks for reaching out! I've assigned the ticket to you and also granted
you access to create a FLIP.
Best regards,
Martijn
On Sat, Dec 17, 2022 at 9:52 PM Krish Narukulla wrote:
> Hi Team,
>
> I have written a connector for confluent kafka for protobuf and tested it
>
Hi Team,
I have written a connector for confluent kafka for protobuf and tested it
internally.
Could you please assign Jira:
https://issues.apache.org/jira/browse/FLINK-29731 to me? Also give me
access to create FLIP if needed.
My Jira and Confluence username is *krisnaru*.
Thanks
Krish
Hi Mason,
great, thanks a lot for working on this. Will greatly speed up CI of the
core repository once this is finalized.
Cheers
Konstantin
Am Mi., 7. Dez. 2022 um 07:23 Uhr schrieb Mason Chen :
> Hi all,
>
> I've finished the first pass on externalizing the Kafka connec
Hi all,
I've finished the first pass on externalizing the Kafka connector under the
release-1.16 branch in this
https://github.com/apache/flink-connector-kafka/pull/1. The docs,
connectors, and e2e tests have been relocated and CI on my fork has
been running green. Thanks for everyone else's work
Lukas Mahl created FLINK-30218:
--
Summary: [Kafka Connector] java.lang.OutOfMemoryError: Metaspace
Key: FLINK-30218
URL: https://issues.apache.org/jira/browse/FLINK-30218
Project: Flink
Issue
Martijn Visser created FLINK-30052:
--
Summary: Move existing Kafka connector code from Flink repo to
dedicated Kafka repo
Key: FLINK-30052
URL: https://issues.apache.org/jira/browse/FLINK-30052
Martijn Visser created FLINK-30051:
--
Summary: Create repository for Kafka connector
Key: FLINK-30051
URL: https://issues.apache.org/jira/browse/FLINK-30051
Project: Flink
Issue Type: Sub
Chesnay Schepler created FLINK-29977:
Summary: Kafka connector not compatible with kafka-clients 3.3x
Key: FLINK-29977
URL: https://issues.apache.org/jira/browse/FLINK-29977
Project: Flink
Mingliang Liu created FLINK-29920:
-
Summary: Minor reformat Kafka connector documentation
Key: FLINK-29920
URL: https://issues.apache.org/jira/browse/FLINK-29920
Project: Flink
Issue Type
Hi,
You can use the SQL API to parse or write the header in the Kafka record[1]
if you are using Flink SQL.
Best,
Shengkai
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka/#available-metadata
On Thu, Oct 13, 2022 at 02:21, Yaroslav Tkachenko wrote:
> Hi,
>
> You can
Hi,
You can implement a custom KafkaRecordDeserializationSchema (example
https://docs.immerok.cloud/docs/cookbook/reading-apache-kafka-headers-with-apache-flink/#the-custom-deserializer)
and just avoid emitting the record if the header value matches what you
need.
On Wed, Oct 12, 2022 at 11:04
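A minimal sketch of the custom-deserializer approach described above, assuming Flink's KafkaRecordDeserializationSchema from the externalized Kafka connector; the header key "event-type" and expected value "orders" are hypothetical placeholders, not from the thread:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

/** Emits only records whose "event-type" header matches; everything else is skipped. */
public class HeaderFilteringDeserializer implements KafkaRecordDeserializationSchema<String> {
    // Hypothetical header key/value; replace with whatever the producer actually sets.
    private static final byte[] EXPECTED = "orders".getBytes(StandardCharsets.UTF_8);

    @Override
    public void deserialize(ConsumerRecord<byte[], byte[]> record, Collector<String> out)
            throws IOException {
        Header header = record.headers().lastHeader("event-type");
        if (header != null && Arrays.equals(header.value(), EXPECTED)) {
            out.collect(new String(record.value(), StandardCharsets.UTF_8));
        }
        // Records without a matching header are simply not emitted.
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return TypeInformation.of(String.class);
    }
}
```

An instance would be passed via KafkaSource.builder().setDeserializer(...); since nothing is collected for non-matching records, they are dropped before entering the pipeline.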
I have some Flink applications that read streams from Kafka. The
producer-side code has now introduced some additional information in Kafka
headers while producing records.
Now I need to change my consumer-side logic to process the records if the
header contains a specific value, if the header
Thanks!
On Sun, Oct 9, 2022, 08:45 Qingsheng Ren wrote:
> Hi Sriram,
>
> A short answer: the interval of polling is adjusted “dynamically” (by
> blocking the KafkaConsumer#poll call) according to the traffic of data.
>
> I think this line [1] is what you are looking for.
>
> Basically
Hi Sriram,
A short answer: the interval of polling is adjusted “dynamically” (by blocking
the KafkaConsumer#poll call) according to the traffic of data.
I think this line [1] is what you are looking for.
Basically KafkaSource fires KafkaPartitionSplitReader.fetch calls repeatedly in
a loop,
Hi Everyone,
I am trying to understand how Flink works in realtime with Kafka. Since
Kafka works on polling, what will be the minimal time for Flink to poll
Kafka?
Any explanation or documentation will be helpful.
Thanks,
Sriram G
Nathan created FLINK-28622:
--
Summary: Can't restore a flink job that uses Table API and Kafka
connector with savepoint
Key: FLINK-28622
URL: https://issues.apache.org/jira/browse/FLINK-28622
Project: Flink
Leo zhang created FLINK-28475:
-
Summary: kafka connector won't stop when the stopping offset is
zero
Key: FLINK-28475
URL: https://issues.apache.org/jira/browse/FLINK-28475
Project: Flink
Issue
Arseniy Tashoyan created FLINK-28266:
Summary: Kafka connector fails: Invalid negative offset
Key: FLINK-28266
URL: https://issues.apache.org/jira/browse/FLINK-28266
Project: Flink
Issue
SunShun created FLINK-28069:
---
Summary: Cannot attach SSL JKS file for Kafka connector
Key: FLINK-28069
URL: https://issues.apache.org/jira/browse/FLINK-28069
Project: Flink
Issue Type: Bug
liuwei created FLINK-27730:
--
Summary: Kafka connector document code sink has an error
Key: FLINK-27730
URL: https://issues.apache.org/jira/browse/FLINK-27730
Project: Flink
Issue Type: Bug
Spongebob created FLINK-27436:
-
Summary: option `properties.group.id` is not effective in kafka
connector for Flink SQL
Key: FLINK-27436
URL: https://issues.apache.org/jira/browse/FLINK-27436
Project
Qingsheng Ren created FLINK-26928:
-
Summary: Remove unnecessary Docker network creation in Kafka
connector tests
Key: FLINK-26928
URL: https://issues.apache.org/jira/browse/FLINK-26928
Project: Flink
Qingsheng Ren created FLINK-26409:
-
Summary: Remove meaningless Kafka connector test case
KafkaConsumerTestBase.runBrokerFailureTest
Key: FLINK-26409
URL: https://issues.apache.org/jira/browse/FLINK-26409
Ragu Krishnamurthy created FLINK-26379:
--
Summary: First message produced via Flink Kafka Connector is slow
Key: FLINK-26379
URL: https://issues.apache.org/jira/browse/FLINK-26379
Project: Flink
Alexander Preuss created FLINK-26195:
Summary: Kafka connector tests are mixing JUnit4 and JUnit5
Key: FLINK-26195
URL: https://issues.apache.org/jira/browse/FLINK-26195
Project: Flink
Yun Gao created FLINK-26115:
---
Summary: Multiple Kafka connector tests failed due to The topic
metadata failed to propagate to Kafka broker
Key: FLINK-26115
URL: https://issues.apache.org/jira/browse/FLINK-26115
Jing Ge created FLINK-25701:
---
Summary: Add API annotation to some Kafka connector core classes
and interface
Key: FLINK-25701
URL: https://issues.apache.org/jira/browse/FLINK-25701
Project: Flink
Hi,
There is already an on-going issue about it. (
https://issues.apache.org/jira/browse/FLINK-24456)
Best,
hang
On Mon, Jan 10, 2022 at 10:06, 聂荧屏 wrote:
> hello
>
>
> Is there any plan to develop batch mode of Flink SQL Kafka connector?
>
> I would like to use kafka connector for d
Hello,
Is there any plan to develop a batch mode for the Flink SQL Kafka connector?
I would like to use the Kafka connector for daily/hourly/minute-by-minute
statistics, but it currently only supports streaming mode, and the Kafka
parameters only support setting a start offset, not an end offset.
jingzi created FLINK-25344:
--
Summary: flink kafka connector Property group.id is required when
using committed offset for offsets initializer
Key: FLINK-25344
URL: https://issues.apache.org/jira/browse/FLINK-25344
Yuan Zhu created FLINK-25336:
Summary: Kafka connector compatible problem in Flink sql
Key: FLINK-25336
URL: https://issues.apache.org/jira/browse/FLINK-25336
Project: Flink
Issue Type: Bug
Fabian Paul created FLINK-25222:
---
Summary: Remove NetworkFailureProxy used for Kafka connector tests
Key: FLINK-25222
URL: https://issues.apache.org/jira/browse/FLINK-25222
Project: Flink
Varun Yeligar created FLINK-25106:
-
Summary: Support tombstone messages in FLINK's "kafka" connector
Key: FLINK-25106
URL: https://issues.apache.org/jira/browse/FLINK-25106
Proj