Andrew Schofield created KAFKA-17550:
Summary: Add INCONSISTENT_GROUP_TYPE to ConsumerGroupDescribe and
kafka-consumer-groups.sh
Key: KAFKA-17550
URL: https://issues.apache.org/jira/browse/KAFKA-17550
[
https://issues.apache.org/jira/browse/KAFKA-17230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jun Rao resolved KAFKA-17230.
-
Fix Version/s: 3.9.0
Resolution: Fixed
merged the PR to trunk
> Kafka consumer client doesn't report node request-latency metrics
Apoorv Mittal created KAFKA-17230:
-
Summary: Kafka consumer client doesn't report node request-latency
metrics
Key: KAFKA-17230
URL: https://issues.apache.org/jira/browse/KAFKA-17230
Project:
Omnia Ibrahim created KAFKA-17218:
-
Summary: kafka-consumer-groups fails to describe all group if one
group has been consuming from a deleted topic
Key: KAFKA-17218
URL: https://issues.apache.org/jira/browse
Bumping up this thread since I cannot find it in the mail archive.
On Wed, 22 May 2024 at 18:09, Harsh Panchal
wrote:
> Hi,
>
> I would like to propose a change in the kafka-consumer-perf-test tool to
> support perf testing specific partitions.
>
> kafka-consumer-perf-test
Hi,
I would like to propose a change in the kafka-consumer-perf-test tool to
support perf testing specific partitions.
kafka-consumer-perf-test is a great tool to quickly check raw consumer
performance. Currently, it subscribes to all the partitions and gives
overall cluster performance, however
Harsh Panchal created KAFKA-16810:
-
Summary: Improve kafka-consumer-perf-test to benchmark single
partition
Key: KAFKA-16810
URL: https://issues.apache.org/jira/browse/KAFKA-16810
Project: Kafka
Soby Chacko created KAFKA-16309:
---
Summary: Enabling virtual threads in the Kafka Consumer
Key: KAFKA-16309
URL: https://issues.apache.org/jira/browse/KAFKA-16309
Project: Kafka
Issue Type
Bruno Cadonna created KAFKA-16285:
-
Summary: Make group metadata available when a new assignment is
set in async Kafka consumer
Key: KAFKA-16285
URL: https://issues.apache.org/jira/browse/KAFKA-16285
Lucas Brutschy created KAFKA-16248:
--
Summary: Kafka consumer should cache leader offset ranges
Key: KAFKA-16248
URL: https://issues.apache.org/jira/browse/KAFKA-16248
Project: Kafka
Issue
Subject: RE: Kafka consumer group crashing and not able to consume once service
is up
Hi Philip,
If you are expecting logs from the subscriber from our service, please do let me know.
Thanks,
Santhosh Aditya
From: Marigowda, Santhosh Aditya
Sent
Subject: RE: Kafka consumer group crashing and not able to consume once service
is up
Hi Philip, please find the service logs (subscriber logs). We don't see any
log-related issues with the consumer.
Kafka consumer configuration
kafka-consumer = {
server= "10.221.10
> *Subject:* RE: Kafka consumer group crashing and not able to consume once
> service is up
>
>
>
> Hi Philip,
>
> Thanks for your queries, Please
Hi Santhosh,
Your problem statement confuses me a bit (apologies). You mentioned "if one
of the kafka consumer(Service)" - Do you have a single member consumer
group? Could you elaborate on the setup a bit? Did you also mean after
restarting the "service", the service was not
From: Matthias J. Sax <mj...@apache.org>
Sent: Wednesday, January 31, 2024 12:13 AM
To: dev@kafka.apache.org
Subject: Re: Kafka consumer group crashing and not able to consume once service
is up
I am not sure if I can follow completely. From the figures you sh
Subject: Re: Kafka consumer group crashing and not able to consume once service
is up
I am not sure if I can follow completely. From the figures you show, you have
a topic with 4 partitions, and 4 consumer groups. Thus, each consumer group
should read all 4 partitions, but the
with our problem.
In our POC, if one of the Kafka consumers (services) shuts down or crashes,
then after the service is restarted none of the messages are consumed
by the crashed service.
Other services are consuming without any issues.
One service crash/shutdown
If we rename the Kafka con
Dear Kafka developers,
I submitted https://github.com/apache/kafka/pull/13914 to fix a long
standing problem that the Kafka consumer on the JVM is not usable from
asynchronous runtimes such as Kotlin co-routines and ZIO.
Your review is much appreciated.
Kind regards,
Erik.
--
Erik van
Armand created KAFKA-14878:
--
Summary: Reduce assignment data size to improve Kafka Consumer
scalability
Key: KAFKA-14878
URL: https://issues.apache.org/jira/browse/KAFKA-14878
Project: Kafka
Issue
Colin Shaw created KAFKA-14739:
--
Summary: Kafka consumer reading messages out of order after a
rebalance
Key: KAFKA-14739
URL: https://issues.apache.org/jira/browse/KAFKA-14739
Project: Kafka
Feiyan Yu created KAFKA-14626:
-
Summary: Kafka Consumer Coordinator does not cleanup all metrics
Key: KAFKA-14626
URL: https://issues.apache.org/jira/browse/KAFKA-14626
Project: Kafka
Issue Type
Chetan created KAFKA-14366:
--
Summary: Kafka consumer rebalance issue, offsets points back to
very old committed offset
Key: KAFKA-14366
URL: https://issues.apache.org/jira/browse/KAFKA-14366
Project: Kafka
[
https://issues.apache.org/jira/browse/KAFKA-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Luke Chen resolved KAFKA-14161.
---
Resolution: Not A Problem
> kafka-consumer-group.sh --list not list all consumer groups
yandufeng created KAFKA-14161:
-
Summary: kafka-consumer-group.sh --list not list all consumer
groups
Key: KAFKA-14161
URL: https://issues.apache.org/jira/browse/KAFKA-14161
Project: Kafka
Issue
Mikhail Filatov created KAFKA-13961:
---
Summary: GKE kafka-consumer-groups returns incorrect hosts
Key: KAFKA-13961
URL: https://issues.apache.org/jira/browse/KAFKA-13961
Project: Kafka
some partitions over others. Our messages differ in size, and we have set
max bytes per message and an overall partition fetch size too.
How can we ensure round-robin processing in the Kafka consumer across the
partitions assigned to it? Can you please help?
--
thanks,
Muthuswamy.S
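The round-robin concern above can be separated from the client itself: however records arrive, the application can interleave per-partition buffers one record at a time so a large partition cannot starve the others. Below is a minimal sketch of that interleaving logic only; the partition ids and record values are made up, and this is not the consumer's actual fetch algorithm.

```python
from collections import deque
from itertools import cycle

def round_robin_drain(partition_buffers):
    """Interleave records from per-partition buffers one at a time,
    so no single partition monopolizes processing."""
    queues = {p: deque(records) for p, records in partition_buffers.items()}
    order = cycle(sorted(queues))
    out = []
    remaining = sum(len(q) for q in queues.values())
    while remaining:
        p = next(order)
        if queues[p]:
            out.append((p, queues[p].popleft()))
            remaining -= 1
    return out

# Partition 0 is "hot" but partition 1 still gets served early:
drained = round_robin_drain({0: ["a", "b", "c"], 1: ["x"]})
```

The same fairness can often be approximated in the client by tuning max.partition.fetch.bytes downward, but interleaving in application code makes the behaviour explicit.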
Hello Luke
I have built a new Kafka environment with Kafka 2.8.0.
A new consumer set up against this environment is throwing the
error below. The old consumers for the same applications in the same
2.8.0 environment are working fine.
Could you please advise?
2021-11-02 12:25:24 D
Hi,
Which version of kafka client are you using?
I can't find this error message in the source code.
Googling the error message suggests it comes from Kafka v0.9.
Could you try v3.0.0 and see if the issue still exists?
Thank you.
Luke
On Thu, Oct 28, 2021 at 11:15 PM Kafka L
Dear Kafka Experts
We have set up a consumer group.id = YYY.
But when I tried to connect to the Kafka instance, I got this error message. I
am sure this consumer group id does not exist in Kafka. We use the plaintext
protocol to connect to Kafka 2.8.0. Please suggest how to resolve this
issue.
D
Ignacio Acuna created KAFKA-12926:
-
Summary: ConsumerGroupCommand's java.lang.NullPointerException at
negative offsets while running kafka-consumer-groups.sh
Key: KAFKA-12926
URL: https://issues.apache.org
Hey kafka-dev,
I created KIP-748 as a proposal to add broker count metrics to the Quorum
Controller.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-748%3A+Add+Broker+Count+Metrics#KIP748:AddBrokerCountMetrics
Best,
Ryan Dielhenn
Prakash Patel created KAFKA-12821:
-
Summary: Kafka consumer skip reading from single partition
Key: KAFKA-12821
URL: https://issues.apache.org/jira/browse/KAFKA-12821
Project: Kafka
Issue
second rolling
> bounce in which
> you remove the CooperativeStickyAssignor and downgrade the member to 2.3
> again. It's just
> the inverted upgrade path.
>
> Hope that helps,
> Sophie
>
> On Tue, May 11, 2021 at 7:36 AM Vipul Goyal
> wrote:
>
> > Hi Tea
Vipul Goyal wrote:
> Hi Team,
>
> I need some guidance related to Kafka Consumer Incremental Rebalance
> Protocol.
>
> I was following the below KIP and understood the upgrade path, but am a bit
> confused by the downgrade procedure.
>
> *KIP:*
>
> https://cwiki.apac
Hi Team,
I need some guidance related to Kafka Consumer Incremental Rebalance
Protocol.
I was following the below KIP and understood the upgrade path, but am a bit
confused by the downgrade procedure.
*KIP:*
https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental
radai rosenblatt created KAFKA-12605:
Summary: kafka consumer churns through buffer memory iterating
over records
Key: KAFKA-12605
URL: https://issues.apache.org/jira/browse/KAFKA-12605
Project
[
https://issues.apache.org/jira/browse/KAFKA-12428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Guozhang Wang resolved KAFKA-12428.
---
Resolution: Duplicate
> Add a last-heartbeat-seconds-ago metric to Kafka Consumer
donglei created KAFKA-12531:
---
Summary: kafka-consumer-groups list got
Key: KAFKA-12531
URL: https://issues.apache.org/jira/browse/KAFKA-12531
Project: Kafka
Issue Type: Bug
Affects Versions
Guozhang Wang created KAFKA-12428:
-
Summary: Add a last-heartbeat-seconds-ago metric to Kafka Consumer
Key: KAFKA-12428
URL: https://issues.apache.org/jira/browse/KAFKA-12428
Project: Kafka
Brian Wyka created KAFKA-10708:
--
Summary: Add "group-id" Tag to Kafka Consumer Metrics
Key: KAFKA-10708
URL: https://issues.apache.org/jira/browse/KAFKA-10708
Project: Kafka
Russell Sayers created KAFKA-10685:
--
Summary: --to-datetime passed to kafka-consumer-groups getting
interpreted as a timezone
Key: KAFKA-10685
URL: https://issues.apache.org/jira/browse/KAFKA-10685
Hi Team,
Currently I'm implementing a Spring Boot @KafkaListener. I need to know which
exceptions may be raised while trying to consume a record from a Kafka topic,
and how to commit the current offset after the record has been successfully
processed. In between, I have to handle the exceptions.
Kin
Guillaume Bort created KAFKA-10422:
--
Summary: Provide a `timesForOffsets` operation in kafka consumer
Key: KAFKA-10422
URL: https://issues.apache.org/jira/browse/KAFKA-10422
Project: Kafka
[
https://issues.apache.org/jira/browse/KAFKA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ismael Juma reopened KAFKA-10134:
-
> High CPU issue during rebalance in Kafka consumer after upgrading to 2.5
[
https://issues.apache.org/jira/browse/KAFKA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Guozhang Wang resolved KAFKA-10134.
---
Resolution: Fixed
> High CPU issue during rebalance in Kafka consumer after upgrading to 2.5
Sean Guo created KAFKA-10134:
Summary: High CPU issue during rebalance in Kafka consumer after
upgrading to 2.5
Key: KAFKA-10134
URL: https://issues.apache.org/jira/browse/KAFKA-10134
Project: Kafka
Raman Gupta created KAFKA-10007:
---
Summary: Kafka consumer offset reset despite recent group activity
Key: KAFKA-10007
URL: https://issues.apache.org/jira/browse/KAFKA-10007
Project: Kafka
[
https://issues.apache.org/jira/browse/KAFKA-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Matthias J. Sax resolved KAFKA-6822.
Resolution: Abandoned
> Kafka Consumer 0.10.2.1 can not normally read data from Ka
startjava created KAFKA-9902:
Summary: java client api can not completely take out the
kafka-consumer-groups.sh output of information
Key: KAFKA-9902
URL: https://issues.apache.org/jira/browse/KAFKA-9902
Tom Bentley created KAFKA-9775:
--
Summary: IllegalFormatConversionException from
kafka-consumer-perf-test.sh
Key: KAFKA-9775
URL: https://issues.apache.org/jira/browse/KAFKA-9775
Project: Kafka
Colin McCabe created KAFKA-9761:
---
Summary: kafka-consumer-groups tool overrides admin client
defaults with a short 5 s timeout
Key: KAFKA-9761
URL: https://issues.apache.org/jira/browse/KAFKA-9761
li xiangyuan created KAFKA-9646:
---
Summary: kafka consumer cause high cpu usage
Key: KAFKA-9646
URL: https://issues.apache.org/jira/browse/KAFKA-9646
Project: Kafka
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/KAFKA-9306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sanjana Kaundinya resolved KAFKA-9306.
--
Fix Version/s: 2.4.1
Resolution: Fixed
> Kafka Consumer does not clean up all metrics after shutdown
[
https://issues.apache.org/jira/browse/KAFKA-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sophie Blee-Goldman resolved KAFKA-5868.
Fix Version/s: 2.5.0
Resolution: Fixed
> Kafka Consumer Rebalancing ta
> Consumer lag is a useful metric to monitor how many records are queued to
> be processed. We can look at individual lag per partition or we may
percentiles, but one aggregate that does not make sense is to sum
latency across many partitions, because partitions are typically always
consumed in parallel fashion.
>
> 2. Some existing solutions already expose the consumer group lag in time.
> See
>
> https://www.lightbend.com/
> identify hot partitions, caused by an insufficient producing partitioning
> strategy. We may want to monitor a sum of lag across all partitions so we
> have a
> have a sense as to our total backlog of messages to consume. Lag in
> offsets is useful when you have a good understanding of your messages and
> processing characteristics, but it doesn't tell us how far behind *in time*
> we are. This is known as wait time in queueing theory, or more informally
> it's referred to as latency.
> Incremental Rebalance Protocol for Kafka Consumer
> -
>
> Key: KAFKA-8179
> URL: https://issues.apache.org/jira/browse/KAFKA-8179
> Project: Kafka
> Issue Type: Improvement
>
/monitor-kafka-consumer-group-latency-with-kafka-lag-exporter
for
an example. *The KIP should reference existing solutions and suggest the
benefits of using the native solution that you propose*.
3. If a message was produced a long time ago, and a new consumer group has
been created, then the
> a consumer. The latency of records in a partition correlates with lag,
> but a larger lag doesn't necessarily mean a larger latency. For example, a
> topic consumed by two separate application consumer groups A
Colin McCabe created KAFKA-9306:
---
Summary: Kafka Consumer does not clean up all metrics after
shutdown
Key: KAFKA-9306
URL: https://issues.apache.org/jira/browse/KAFKA-9306
Project: Kafka
> many consumer group members to handle the load quickly enough, but since its
> processing time is slower, it takes longer to process each message per
> partition. Meanwhile, Application B is a consumer which performs a simple ETL
> operation to land streaming data in another system, such as HDFS. It may have
> similar lag to Application A, but because it has a faster processing
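The Application A/B contrast above can be made concrete with the standard queueing-theory estimate: wait time is roughly backlog divided by drain rate. The function name and the numbers below are illustrative only, not anything defined by KIP-489.

```python
def estimated_latency_seconds(lag_offsets, consume_rate_per_sec):
    """Rough wait time for the oldest queued record: a backlog of
    lag_offsets records drained at consume_rate_per_sec records/s."""
    if consume_rate_per_sec <= 0:
        return float("inf")  # the backlog is not shrinking at all
    return lag_offsets / consume_rate_per_sec

# Same lag, very different latency:
slow_app = estimated_latency_seconds(1000, 100)   # slower processor
fast_app = estimated_latency_seconds(1000, 1000)  # fast ETL-style consumer
```

With an identical lag of 1000 offsets, the slower application is ten times further behind in time, which is exactly why lag alone understates the problem.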
Hi all,
I was recently discussing deserialization errors and how they can be
handled in Kafka consumers.
I believe the current code still throws an exception if deserialization
fails, which stops consumption unless you seek past that record.
This creates an issue for a JDBC sink connector that us
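One way to sketch the skip-past-the-bad-record idea, independent of any Kafka client API, is to wrap the deserializer so a poison record yields an error marker instead of an exception. safe_deserialize and the sample byte strings are hypothetical; a real setup would wire this into the configured value deserializer or route failures to a dead-letter topic.

```python
import json

def safe_deserialize(raw_bytes, deserializer=json.loads):
    """Return (value, None) on success or (None, error) on failure,
    so the poll loop never dies on a malformed record."""
    try:
        return deserializer(raw_bytes), None
    except Exception as exc:
        return None, exc

# The poll loop can then skip (or dead-letter) bad records and keep going:
records = [b'{"id": 1}', b'not-json', b'{"id": 2}']
good, bad = [], []
for raw in records:
    value, err = safe_deserialize(raw)
    if err is None:
        good.append(value)
    else:
        bad.append(raw)  # candidate for a dead-letter topic
```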
Andrew Olson created KAFKA-9233:
---
Summary: Kafka consumer throws undocumented IllegalStateException
Key: KAFKA-9233
URL: https://issues.apache.org/jira/browse/KAFKA-9233
Project: Kafka
Issue
Hello folks,
One last update on the KIP: we've added a section with a list of newly
added metrics corresponding to consumer rebalance events as part of this
proposal as well, detailed list can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Increm
linking12 created KAFKA-8898:
Summary: if there is no message for poll, kafka consumer apply
memory
Key: KAFKA-8898
URL: https://issues.apache.org/jira/browse/KAFKA-8898
Project: Kafka
Issue
I want to test the throughput and latency of the Kafka consumer through the
application, but there is no test entry point. How can I test and collect
throughput and latency data?
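Absent a built-in test entry point, throughput can be measured around the poll loop itself. A minimal sketch under invented names: poll_fn stands in for a real consumer's poll() call, and the canned batches replace real records.

```python
import time

def consume_and_measure(poll_fn, target_records, clock=time.perf_counter):
    """Drain at least target_records via poll_fn() (a callable returning a
    list of records per call, like a consumer poll) and report throughput."""
    start = clock()
    consumed = 0
    while consumed < target_records:
        consumed += len(poll_fn())
    elapsed = clock() - start
    rate = consumed / elapsed if elapsed > 0 else float("inf")
    return {"records": consumed, "seconds": elapsed, "records_per_sec": rate}

# Demo with canned batches standing in for real poll() results:
batches = iter([["r"] * 4, ["r"] * 6])
stats = consume_and_measure(lambda: next(batches), target_records=10)
```

End-to-end latency additionally needs a produce-side timestamp on each record, which this sketch deliberately leaves out.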
Veerabhadra Rao Mallavarapu created KAFKA-8883:
--
Summary: Kafka Consumer continuously displaying info message
Seeking to offset 0 for partition
Key: KAFKA-8883
URL: https://issues.apache.org/jira
> available in consumers.
>
> Thanks,
> Harsha
>
> On Wed, May 22, 2019, at 12:24 AM, Liquan Pei wrote:
>> +1 (
Hi,
Maybe it would be helpful if you can attach the logs for your consumers
when you notice they stopped consuming from some of your partitions
On Tue, 23 Jul 2019 at 16:56, Sergey Fedorov wrote:
> Hello. I was using Kafka 2.1.1 and facing a problem where our consumers
> sometimes intermittentl
Hello. I was using Kafka 2.1.1 and facing a problem where our consumers
sometimes intermittently stop consuming from one or two of the partitions. My
config
Pavel Rogovoy created KAFKA-8697:
Summary: Kafka consumer group auto removal
Key: KAFKA-8697
URL: https://issues.apache.org/jira/browse/KAFKA-8697
Project: Kafka
Issue Type: Improvement
significantly less.
If the Kafka Consumer reported a latency metric it would be easier to build
Service Level Agreements (SLAs) based on non-functional requirements of the
streaming system. For example, the system must never have a latency of
greater than 10 minutes. This SLA could be used in monitoring a
Hi kafka-dev,
I've created KIP-489 as a proposal for adding latency metrics to the Kafka
Consumer in a similar way as record-lag metrics are implemented.
https://cwiki.apache.org/confluence/display/KAFKA/489%3A+Kafka+Consumer+Record+Latency+Metric
Regards,
Sean
--
Principal Eng
Sean Glover created KAFKA-8656:
--
Summary: Kafka Consumer Record Latency Metric
Key: KAFKA-8656
URL: https://issues.apache.org/jira/browse/KAFKA-8656
Project: Kafka
Issue Type: New Feature
Gaurav created KAFKA-8506:
-
Summary: Kafka Consumer broker never stops on connection failure
Key: KAFKA-8506
URL: https://issues.apache.org/jira/browse/KAFKA-8506
Project: Kafka
Issue Type: Bug
Richard Yu created KAFKA-8431:
-
Summary: Add a onTimeoutExpired callback to Kafka Consumer
Key: KAFKA-8431
URL: https://issues.apache.org/jira/browse/KAFKA-8431
Project: Kafka
Issue Type
+1 (non-binding)
On Tue, May 21, 2019 at 11:34 PM Boyang Chen wrote:
> Thank you Guozhang for all the hard work.
>
> +1 (non-binding)
>
>
> From: Guozhang Wang
> Sent: Wednesday, May 22, 2019 1:32 AM
> To: dev
> Subject: [VO
Thank you Guozhang for all the hard work.
+1 (non-binding)
From: Guozhang Wang
Sent: Wednesday, May 22, 2019 1:32 AM
To: dev
Subject: [VOTE] KIP-429: Kafka Consumer Incremental Rebalance Protocol
Hello folks,
I'd like to start the voting for KIP-429
Hello folks,
I'd like to start the voting for KIP-429 now, details can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-429%3A+Kafka+Consumer+Incremental+Rebalance+Protocol#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-RebalanceCallbackErrorHandling
And the on-goin
Hi Kafka developers,
*The question:* How can I randomly fetch an old chunk of messages with a
given range definition of [partition, start offset, end offset]. Hopefully
ranges from multiple partitions at once (one range for each partition).
This needs to be supported in a concurrent environment to
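Whatever client API ends up serving such range fetches (assign plus seek in the Java consumer, for instance), the per-range bookkeeping can be kept separate from the fetching. The class below is a hypothetical sketch of that bookkeeping only, with end offsets treated as exclusive and per-partition in-order delivery assumed, as Kafka provides.

```python
class RangeFetchTracker:
    """Track completion of [partition, start_offset, end_offset) ranges;
    a real client would pair this with assign() and seek()."""

    def __init__(self, ranges):
        # ranges: {partition: (start_offset, end_offset)}, end exclusive
        self.ranges = dict(ranges)
        self.next_needed = {p: start for p, (start, _end) in self.ranges.items()}

    def accept(self, partition, offset):
        """Return True if this record falls inside the requested range.
        Assumes in-order delivery per partition, as Kafka provides."""
        start, end = self.ranges.get(partition, (0, 0))
        if start <= offset < end:
            self.next_needed[partition] = max(self.next_needed[partition],
                                              offset + 1)
            return True
        return False

    def done(self):
        """True once every requested range has been fully covered."""
        return all(self.next_needed[p] >= end
                   for p, (_start, end) in self.ranges.items())

tracker = RangeFetchTracker({0: (5, 8), 1: (0, 1)})
```

For the concurrent case, one tracker per worker (one range per partition each) keeps the workers independent.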
Hi everyone:
I found that KafkaConsumer will not refresh its topic metadata when some unused
topics are deleted, which leads to continuous UNKNOWN_TOPIC_PARTITION warnings.
In the source code, KafkaProducer removes unused topics after an expiry time,
but KafkaConsumer does not. I know it may be a design t
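The producer-side behaviour described here (dropping topics from metadata once they go unused for an expiry interval) can be mimicked in application code. A sketch only: the class name, the 5-minute default, and the fake-clock demo are all invented, not the client's actual internals.

```python
import time

class TopicExpiryTracker:
    """Drop topics not touched within expiry_secs from the tracked set,
    so metadata for deleted or unused topics stops being refreshed."""

    def __init__(self, expiry_secs=300.0, clock=time.monotonic):
        self.expiry_secs = expiry_secs
        self.clock = clock
        self.last_used = {}

    def touch(self, topic):
        """Record that the topic was just fetched from or subscribed to."""
        self.last_used[topic] = self.clock()

    def active_topics(self):
        """Prune expired topics and return the ones still worth refreshing."""
        now = self.clock()
        self.last_used = {t: ts for t, ts in self.last_used.items()
                          if now - ts <= self.expiry_secs}
        return set(self.last_used)

# Demo with a fake clock so expiry is deterministic:
now = [0.0]
tracker = TopicExpiryTracker(expiry_secs=10.0, clock=lambda: now[0])
tracker.touch("orders")
tracker.touch("deleted-topic")
now[0] = 12.0
tracker.touch("orders")
```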
Guozhang Wang created KAFKA-8179:
Summary: Incremental Rebalance Protocol for Kafka Consumer
Key: KAFKA-8179
URL: https://issues.apache.org/jira/browse/KAFKA-8179
Project: Kafka
Issue Type
://issues.apache.org/jira/browse/KAFKA-5154 now.
> Two instances of kafka consumer reading the same partition within a consumer
> group
> --
>
> Key: KAFKA-6681
> URL: https://issues.apac
Shengnan YU created KAFKA-8100:
--
Summary: If delete expired topic, kafka consumer will keep
flushing unknown_topic warning in log
Key: KAFKA-8100
URL: https://issues.apache.org/jira/browse/KAFKA-8100
Boyang Chen created KAFKA-7995:
--
Summary: Augment singleton protocol type to list for Kafka
Consumer
Key: KAFKA-7995
URL: https://issues.apache.org/jira/browse/KAFKA-7995
Project: Kafka
@Team,
I am trying to connect to and consume from a remote Kafka. I am able to get the
number of partitions, but when I try to consume I am not getting any data,
whether I start consuming from the beginning or from the latest offset. We have
multiple topics in this Kafka cluster, and there are consumers running for other
I implemented an application using @KafkaListener. All consumed events
are kept in Redis for some time. With the help of the Spring scheduler,
these events are spread across multiple threads and processed by some
business logic.
Can you suggest how to commit the events to Kafka after process
leibo created KAFKA-7684:
Summary: kafka consumer SchemaException occurred: Error reading
field 'brokers':
Key: KAFKA-7684
URL: https://issues.apache.org/jira/browse/KAFKA-7684
Proj