elivery. I previously thought the idempotent producer could be a
solution, but I later found that the idempotent producer only guarantees
ordering when Kafka is retrying a producer batch internally.
Thanks and regards,
William
Greg Harris wrote on Tue, Mar 12, 2024 at 00:50:
> Hi William,
>
> From your descrip
, and a metadata
refresh does not trigger a rebalance.
-Matthias
On 3/10/24 5:56 PM, Venkatesh Nagarajan wrote:
Hi all,
A Kafka Streams application sometimes stops consuming events during load
testing. Please find below the details:
Details of the app:
* Kafka Streams Version: 3.5.1
ble, or would you want that record
to also be rejected?
This sounds like a use-case for transactional producers [1] utilizing
Exactly Once delivery. You can start a transaction, emit records, have
them persisted in Kafka, perform some computation afterwards, and then
decide whether to commit or ab
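The begin/commit-or-abort flow described above can be sketched with a minimal transactional producer configuration. This is an illustrative sketch, not the poster's code: the bootstrap address and transactional.id are placeholder values, and the transaction calls from the kafka-clients API are outlined in comments so the sketch stays self-contained.

```java
import java.util.Properties;

public class TxnProducerSketch {
    // Minimal configuration for a transactional (exactly-once) producer.
    // "localhost:9092" and "my-app-txn-1" are placeholder values.
    public static Properties txnProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Enables transactions; must be stable across restarts so that
        // zombie producer instances can be fenced.
        props.put("transactional.id", "my-app-txn-1");
        return props;
    }

    public static void main(String[] args) {
        Properties props = txnProps();
        System.out.println(props.getProperty("transactional.id"));
        // With a real KafkaProducer (kafka-clients dependency) the flow is:
        //   producer.initTransactions();
        //   producer.beginTransaction();
        //   producer.send(record);          // records staged in the txn
        //   ...perform the computation...
        //   producer.commitTransaction();   // or abortTransaction() to reject
    }
}
```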
Hi Haruki,
Thanks for your answer.
> I still don't get why you need this behavior though
The reason is that I have to ensure message ordering per partition strictly.
Once there is an exception in the producer callback, it indicates that the
exception is not a retryable exception (from kafka produce
Hi.
> I immediately stop sending more new records and stop the kafka
producer, but some extra records were still sent
I still don't get why you need this behavior though, as long as you set
max.in.flight.requests.per.connection to greater than 1, it's impossible to
avoid this beca
Hi all,
I am facing a problem: when I detect an exception in the kafka producer
callback, I immediately stop sending new records and stop the kafka
producer, but some extra records were still sent.
I found a way to resolve this issue: setting
max.in.flight.requests.per.connection to 1 and closing
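The workaround described above can be expressed as a small producer configuration. A sketch with a placeholder bootstrap address, not the poster's actual settings:

```java
import java.util.Properties;

public class StrictOrderingSketch {
    // Producer settings that preserve strict per-partition ordering even
    // when batches are retried, at the cost of request pipelining.
    public static Properties strictOrderingProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // One in-flight request per connection: a failed batch can never
        // be overtaken by a later batch on the same partition.
        props.put("max.in.flight.requests.per.connection", "1");
        // Idempotence avoids duplicates introduced by internal retries.
        props.put("enable.idempotence", "true");
        props.put("acks", "all");
        return props;
    }
}
```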
Hi All,
I'm working on setting up RBAC for Apache Kafka using Ranger. Right now,
I'm facing an authorization issue while testing the console producer script
in Kafka. I need help in properly configuring Kafka with Ranger. Below are
the steps I performed.
- I successfully installed the ranger
Hi all,
A Kafka Streams application sometimes stops consuming events during load
testing. Please find below the details:
Details of the app:
* Kafka Streams Version: 3.5.1
* Kafka: AWS MSK v3.6.0
* Consumes events from 6 topics
* Calls APIs to enrich events
* Sometimes
>
> I'm working on setting up RBAC for Apache Kafka using Ranger. Right now,
> I'm facing an authorization issue while testing the console producer script
> in Kafka. I need help in properly configuring Kafka with Ranger. Below are
> the steps I performed.
>
>
>- I su
> Thanks for your question. It certainly isn't clear from the original
> > > KIP-298, the attached discussion, or the follow-up KIP-610 as to why
> > > the situation is asymmetric.
> > >
> > > The reason as I understand it is: Source connectors are re
Mar 5, 2024 at 5:49 PM Greg Harris
> wrote:
>
> > Hi Yeikel,
> >
> > Thanks for your question. It certainly isn't clear from the original
> > KIP-298, the attached discussion, or the follow-up KIP-610 as to why
> > the situation is asymmetric.
> >
>
> The reason as I understand it is: Source connectors are responsible
> for importing data to Kafka. If an error occurs during this process,
> then writing useful information to a dead letter queue about the
> failure is at least as difficult as importing the record correctly.
>
>
Hi Yeikel,
Thanks for your question. It certainly isn't clear from the original
KIP-298, the attached discussion, or the follow-up KIP-610 as to why
the situation is asymmetric.
The reason as I understand it is: Source connectors are responsible
for importing data to Kafka. If an error occurs
to
either fail the connector or employ logging to track source failures.
It seems that for now, I'll need to apply the transformations as a sink and
possibly reinsert them back to Kafka for downstream consumption, but that
sounds unnecessary
[1]https://cwiki.apache.org/confluence/plugins/servlet
Hey there folks!
My team is working on migrating application code that handles wire
formatting of messages: serialization, deserialization, encryption, and
signature. We are moving to custom Ser/Des classes implementing this
interface
https://kafka.apache.org/24/javadoc/org/apache/kafka/common
e docs in different branches. Really appreciate
> your hard work on this one.
>
> Thank you all contributors! Your contributions are what make the Apache
> Kafka community awesome <3
>
> There are many impactful changes in this release but the one closest to my
> heart is https:/
Thank you Stanislav for running the release, especially fixing the whole
mess with out of sync site docs in different branches. Really appreciate
your hard work on this one.
Thank you all contributors! Your contributions are what make the Apache
Kafka community awesome <3
There are many impact
Stanislav Kozlovski wrote:
> > The Apache Kafka community is pleased to announce the release of
> > Apache Kafka 3.7.0
> >
> > This is a minor release that includes new features, fixes, and
> > improvements from 296 JIRAs
> >
> > An overview of the release and its notab
Thanks Stan and all contributors for the release!
Best,
Bruno
On 2/27/24 7:01 PM, Stanislav Kozlovski wrote:
The Apache Kafka community is pleased to announce the release of
Apache Kafka 3.7.0
This is a minor release that includes new features, fixes, and
improvements from 296 JIRAs
Thanks for running this release, Stanislav! And thanks to all the
contributors who helped implement all the bug fixes and new features we got
to put out this time around.
On Tue, Feb 27, 2024, 13:03 Stanislav Kozlovski <
stanislavkozlov...@apache.org> wrote:
> The Apache Kafka
How do I replace producer() with the latest kafka client?
The Apache Kafka community is pleased to announce the release of
Apache Kafka 3.7.0
This is a minor release that includes new features, fixes, and
improvements from 296 JIRAs
An overview of the release and its notable changes can be found in the
release blog post:
https://kafka.apache.org/blog
Case closed, behaviour is actually as expected. - The source topic contains
multiplied data that gets propagated into the join just as it should. I'm
leveraging a stream processor for deduplication now.
Best wishes
Karsten
Vikram Singh wrote on Fri, Feb 23, 2024 at 12:13:
> +Ajit Kharpude
>
+Ajit Kharpude
On Fri, Feb 23, 2024 at 1:14 PM Karsten Stöckmann <
karsten.stoeckm...@gmail.com> wrote:
> Hi,
>
> I am observing somewhat unexpected (from my point of view) behaviour
> during re-key / re-partition operations in order to prepare a
> KTable-KTable join.
>
> Assume two
Hi,
I am observing somewhat unexpected (from my point of view) behaviour
during re-key / re-partition operations in order to prepare a
KTable-KTable join.
Assume two (simplified) source data structures from two respective topics:
class User {
Long id; // PK
String name;
}
class Attribute
Hi Luke
Sure, I will create a ticket after creating a JIRA account.
Cheers.
From: Luke Chen
Date: Wednesday, 21 February 2024 at 8:59 pm
To: users@kafka.apache.org
Subject: Re: Possible bug on Kafka documentation
Hi Federico,
Thanks for reporting the issue.
We've fixed that in v3.5 and later in this PR:
https://github.com/apache/kafka/pull/13115.
But we didn't update for the older versions of docs.
Are you willing to file a PR to kafka-site repo to fix that? Or create a
JIRA issue for it?
Thanks.
Luke
In the documentation from version 3.1 to version 3.4, it looks like the
retries explanation has a bug related to the
max.in.flight.requests.per.connection parameter and possible message reordering.
https://kafka.apache.org/31/documentation.html#producerconfigs_retries
Hi Apache Kafka Community,
Assume that your test environment only opens the http(s) service to the
outside world, so the local kafka consumers or producers cannot directly
access multiple kafka servers in the test environment through the kafka API.
Is there a similar plug-in to set up a tunnel from the local to the test
environment
Hi Zhifeng,
I granted you permissions in Jira. You should now be able to assign
issues to yourself.
Thanks,
Mickael
On Fri, Feb 16, 2024 at 9:37 AM Chen Zhifeng wrote:
>
> Hi Apache Kafka Community
>
> Following the guide
> <https://cwiki.apache.org/confluence/display/KAFK
Hi Apache Kafka Community
Following the guide
<https://cwiki.apache.org/confluence/display/KAFKA/Reporting+Issues+in+Apache+Kafka>
to
join the Kafka developer community and contribute to Kafka.
Kafka Jira ID: ericzhifengchen
Who am I: I'm a Software Engineer at Uber technology, working on
Hi,
is anyone here familiar with Quarkus Kafka Streams applications? If so
- is there a way to control output topic configuration when streaming
aggregate data into a sink like so:
KTable aggregate = ...;
aggregate.toStream().to("topic", );
-> Can I programmatically (or by appli
https://cwiki.apache.org/confluence/display/KAFKA/Time+Based+Release+Plan#TimeBasedReleasePlan-WhatIsOurEOLPolicy?
On 2/11/24 8:08 PM, Sahil Sharma D wrote:
Hi team,
Can you please share the EOS date for Kafka Version 3.5.1?
Regards,
Sahil
Hi team,
Can you please share the EOS date for Kafka Version 3.5.1?
Regards,
Sahil
Schematic diagram attached!
qiaoy...@urbackyard.cn
From: 乔咏
Sent: 2024-02-03 18:21
To: users
Subject: Access kafka service through http(s) tunnel
Hello everyone,
Assume that your test environment only opens the http(s) service to the outside
world, so the local kafka consumers or producers
Hello everyone,
Assume that your test environment only opens the http(s) service to the outside
world, so the local kafka consumers or producers cannot directly access
multiple kafka servers in the test environment through the kafka API. Is there
a similar plug-in to set up a tunnel from
d whether kafka streams can solve this efficiently.
Since the number of files is unbounded, how would kafka manage intermediate
topics for the groupBy operation? How many partitions will it use, etc.? I
can't find these details in the docs. Also, let's say a chunk has a flag
that indicates EOF. How to
discussion with Partner Manager.
> On 01/24/2024 11:50 PM +08 Dharin Shah wrote:
>
>
> Hi Karsten,
>
> Before delving deeper into Kafka Streams, it's worth considering if direct
> aggregation in the database might be a more straightforward solution,
> unless there's a co
t reflect in KStreams
> (in fact, those records are ignored).
>
> We are generally speaking of roughly 20 tables involved in CDC,
> constructing two different kinds of aggregate objects. The largest
> 'leading' table features around 80M records. I'm not yet familiar with size
'leading' table features around 80M records. I'm not yet familiar with size
and performance requirements in Kafka as we are still somewhere at the
beginning of implementing our indexing solution. Initial Debezium snapshots
were quite fast from my point of view, resulting in overall broker disk
usage
Hi Karsten,
Before delving deeper into Kafka Streams, it's worth considering if direct
aggregation in the database might be a more straightforward solution,
unless there's a compelling reason to avoid it. Aggregating data at the
database level often leads to more efficient and maintainable
Hi,
we're currently in the process of evaluating Debezium and Kafka as a CDC
system for our Postgres database in order to build an indexing solution
(i.e. Solr or OpenSearch).
Debezium captures changes per table and propagates them into dedicated
Kafka topics each. The ingested tables originally
that might look like:
topic('chunks')
  .groupByKey((fileId, chunk) -> fileId)
  .sortBy((fileId, chunk) -> chunk.offset)
  .aggregate((fileId, chunk) -> store.append(fileId, chunk));
I want to understand whether kafka streams can solve this efficiently. Since the number of files is unbounded
Hi,
As Artem mentioned, I did some tests with setting replication factor 1 and
3 for two different topics
One of the kafka broker is down:
The command works if the replication factor is 3. (*testtopicreplica3 is
created with rf 3)*
*[root@node-223 kafka_2.12-2.8.2]# ./bin/kafka-consumer
l-request-queue situation is not desired so you should
figure out the cause and address that for stable cluster operation.
dong yu wrote on Mon, Jan 22, 2024 at 11:53:
> I have a question: why does the overall CPU of the cluster decrease when
> the KAFKA cluster traffic increases, the request queue is full
I have a question: why does the overall CPU of the cluster decrease when
the KAFKA cluster traffic increases, the request queue is full, and the
idle rate is low?
Haruki Okada wrote on Mon, Jan 15, 2024 at 21:56:
> You should investigate the cause of request-queue full situation first.
> Since I gue
Hi,
Just a long shot, but I might be wrong. You have
offsets.topic.replication.factor=1 in your config; when one broker is down,
some partitions of the __consumer_offsets topic will be down as well, so
kafka-consumer-groups can't get offsets from it. Maybe it's just a slightly
misleading error message
Hi, sorry for the confusion, here are the details:
I have 3 broker nodes: 192.168.20.223 / 224 / 225
When all kafka services are UP:
[image: image.png]
I stopped the kafka service on *node 225*:
[image: image.png]
Then I tried the command on node223 with --bootstrap-server
192.168.20.223
Hi.
Which server did you shut down in testing?
If it was 192.168.20.223, it is natural that the kafka-consumer-groups
script fails, because you passed only 192.168.20.223 to the bootstrap-server
arg. In an HA setup, you have to pass multiple brokers (as a comma-separated
string) to bootstrap-server so
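The advice above looks like this on the command line; the broker IPs are the ones from this thread, and port 9092 is an assumption:

```shell
# Pass every broker, comma separated, so the tool can still bootstrap
# when any single node is down.
bin/kafka-consumer-groups.sh \
  --bootstrap-server 192.168.20.223:9092,192.168.20.224:9092,192.168.20.225:9092 \
  --list
```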
Hi all,
I'm trying to do some tests about high availability on kafka v2.8.2.
I have 3 kafka brokers and 3 zookeeper instances.
When I shut down one of the kafka services on only one server, I got this
error:
[root@node-223 ~]# /root/kafka_2.12-2.8.2/bin/kafka-consumer-groups.sh
--bootstrap-server
nsform()`
because keying must be preserved -- if you want to change the keying
you need to use `KTable#groupBy()` (data needs to be repartitioned if
you change the key).
HTH.
-Matthias
On 1/12/24 11:47 AM, Igor Maznitsa wrote:
Hello
Is there any way in Kafka Streams API to define processors for K
You should investigate the cause of the request-queue-full situation first,
since I guess the low network idle ratio is a consequence of that.
(Network threads would block on queueing when the request queue is full.)
I recommend running async-profiler to take a profile of the broker
process if possible
This is my problem:
1. The request queue is always at 500.
2. There are 130 machines in the cluster, and the network idle rate of 30
machines is less than 20%.
This is my broker configuration:
num.io.threads=32
num.network.threads=64
How should I locate the problem? I tried to increase the parameters
Hi,
Most likely not.
Many network devices do have the capability to send logs to a remote syslog
server, and from there one can pick those logs up, for example with Filebeat
or Fluent-bit, and send them to Kafka.
-pasi
From: Francisco Serrano
Date: Thursday, 11. January 2024 at 15.41
ransform()`
because keying must be preserved -- if you want to change the keying
you need to use `KTable#groupBy()` (data needs to be repartitioned if
you change the key).
HTH.
-Matthias
On 1/12/24 11:47 AM, Igor Maznitsa wrote:
Hello
Is there any way in Kafka Streams API to define p
hange the keying you
need to use `KTable#groupBy()` (data needs to be repartitioned if you
change the key).
HTH.
-Matthias
On 1/12/24 11:47 AM, Igor Maznitsa wrote:
Hello
Is there any way in Kafka Streams API to define processors for KTable
and KGroupedStream like KStream#transform? How
Hello
Is there any way in Kafka Streams API to define processors for KTable
and KGroupedStream like KStream#transform? How to provide a custom
processor for KTable or KGroupedStream which could for instance provide
way to not downstream selected events?
--
Igor Maznitsa
email: rrg4
Hello, is it possible to integrate network devices to send syslog messages
to Kafka?
This is my problem:
1. The request queue is always at 500.
2. NetworkProcessorAvgIdlePercent is lower than 0.2.
This is my broker configuration:
num.io.threads=32
num.network.threads=64
How can I identify the cause, and how can I optimize my Kafka cluster?
Thanks.
Hello,
Currently, I have a capacity issue with my kafka cluster: the
existing EBS volume is proving insufficient for the new load of
incoming data.
I need to find a solution to persist more data in my cluster, and I see two
options:
- one is to vertically scale the volume size
messages in order
> for a given partition. There isn’t a guarantee across partitions.
>
>
> Andrew
>
> Sent from my iPhone
>
> > On Jan 7, 2024, at 9:36 AM, Winstein Martins
> wrote:
> >
> > Hello everyone, I have two questions about Kafka's operation.
.
Andrew
Sent from my iPhone
> On Jan 7, 2024, at 9:36 AM, Winstein Martins wrote:
>
> Hello everyone, I have two questions about Kafka's operation.
>
> 1. Can I modify events after they are written to Kafka, or are they
> immutable?
> 2. Do consumers always receive
Hello everyone, I have two questions about Kafka's operation.
1. Can I modify events after they are written to Kafka, or are they
immutable?
2. Do consumers always receive messages in the order they were sent by
Kafka?
Thank you in advance!
over time. There is a similar bug opened for tests
(https://issues.apache.org/jira/browse/KAFKA-8782) - I don't know why it
was assumed to be a test-only problem.
Hi, I have a problem with the SASL_SSL configuration of Kafka. In server.log
there is a strange error:
2023-12-21 00:22:17,254] DEBUG Setting SASL/SCRAM_SHA_256 server state to
FAILED (org.apache.kafka.common.security.scram.internals.ScramSaslServer)
[2023-12-21 00:22:17,256] DEBUG Set SASL server state
Hi,
Is there any workaround or fix for
https://issues.apache.org/jira/browse/KAFKA-4090 ?
Thank you.
-Sreekanth
t-lgn-clz-com-v0.0.43-INS_CLZ_COM-TEST_123-7e87eb30-69e9-4746-b351-beac3a085383-admin]
> Error connecting to node kafka-0-0.kafka.test.svc.cluster.local:9092 (id: 0
> rack: null)
> java.net.UnknownHostException: kafka-0-0.kafka.test.svc.cluster.local
> at java.net.InetAddress.getAllByName0(Ine
.. Going
to request metadata update now
On Thu, Dec 14, 2023 at 1:50 AM David Arthur
wrote:
> Only brokers can be specified as --bootstrap-servers for AdminClient (the
> bin/kafka-* scripts).
>
> In 3.7, we are adding the ability to bootstrap from KRaft controllers for
>
Only brokers can be specified as --bootstrap-servers for AdminClient (the
bin/kafka-* scripts).
In 3.7, we are adding the ability to bootstrap from KRaft controllers for
certain scripts. In this case, the scripts will use --bootstrap-controllers
(the details are in
https://cwiki.apache.org
Hello Luke,
Please look into below logs,
12:32:15.813 WARN org.apache.kafka.clients.NetworkClient - [AdminClient
clientId=test-lgn-clz-com-v0.0.43-INS_CLZ_COM-TEST_123-7e87eb30-69e9-4746-b351-beac3a085383-admin]
Error connecting to node kafka-0-0.kafka.test.svc.cluster.local:9092 (id: 0
rack
Hi Vikram,
It would be good if you could share client and broker logs for troubleshooting.
Thanks.
Luke
On Wed, Dec 13, 2023 at 1:15 PM Vikram Singh
wrote:
>
> Hello,
> I have 3 node kafka cluster, when one node goes down for some reason the
> request which are serving
Hello,
I have a 3-node kafka cluster. When one node goes down for some reason, the
requests being served by the down node are not routed to another running
node; I always have to restart the services.
Running kafka version 3.2.1 (kraft mode).
On Mon, Dec 11, 2023 at 12:40 PM Luke Chen wrote
Hello,
I have a 3-node kafka cluster. When one node goes down for some reason, the
requests being served by the down node are not routed to another running
node; I always have to restart the services.
Running kafka version 3.2.1 (kraft mode).
On Mon, Dec 11, 2023 at 12:33 PM Luke Chen wrote
Hi all,
I'd like to solicit input from users and maintainers on a problem we've
been dealing with for source task cleanup logic.
If you'd like to pore over some Jira history, here's the primary link:
https://issues.apache.org/jira/browse/KAFKA-15090
To summarize, we accidentally introduced
Hello,
Is there a "git" issue with 3.5.2? When I look at GitHub I see the 3.5.2
tag, but if I make the repo an upstream remote target I don't see 3.5.2.
Any ideas what could be up?
Thanks!
ttyl
Dima
On Mon, Dec 11, 2023 at 3:36 AM Luke Chen wrote:
> The Apache Kafka communi
Thanks for running the release, and thanks to all the contributors!
Mickael
On Mon, Dec 11, 2023 at 1:56 PM Josep Prat wrote:
>
> Thanks Luke for running the release!
>
> Best!
>
> On Mon, Dec 11, 2023 at 12:34 PM Luke Chen wrote:
>
> > The Apache Kafka commu
Thanks Luke for running the release!
Best!
On Mon, Dec 11, 2023 at 12:34 PM Luke Chen wrote:
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.5.2
>
> This is a bugfix release. It contains many bug fixes including
> upgrades the Sna
The Apache Kafka community is pleased to announce the release for
Apache Kafka 3.5.2
This is a bugfix release. It contains many bug fixes including
upgrades the Snappy and Rocksdb dependencies.
All of the changes in this release can be found in the release notes:
https://www.apache.org/dist
Hi Team,
We are using Apache Kafka 3.3.1 in our application.
Scenario 1 --> We have created a Kafka admin client from our java
application and have not configured the property "connections.max.idle.ms",
so its default value of 5 minutes is used.
In the above scenario
On Fri, Dec 8, 2023 at 11:06 PM Ankit Nigam
wrote:
> Hi Team,
>
>
>
> We are using Apache Kafka 3.3.1 in our application. We have created Kafka
> Admin Client , Kafka Producer and Kafka consumer in the application using
> the default properties.
>
>
>
> Once our ap
Hi Dima,
You can set "process.roles=controller,broker" to get what you want.
Otherwise, the controller role cannot be served as a broker.
Thanks.
Luke
On Sat, Dec 9, 2023 at 3:59 AM Dima Brodsky wrote:
> Hello,
>
> Would the following configuration be valid in a kafka k
Hello,
Would the following configuration be valid in a kafka kraft cluster?
So let's say we had the following configs for a controller and a broker:
=== controller -
https://github.com/apache/kafka/blob/6d1d68617ecd023b787f54aafc24a4232663428d/config/kraft/controller.properties
process.roles
As I told you on the Strimzi mailing list:
* You should check the paths where the vulnerabilities are found and in
which component they are => that should tell you where it needs to be
addressed
* You should also compare which ones are already addressed in the
upcoming Kafka 3.5.2 (or in Ka
Hi Team,
We are using Apache Kafka 3.3.1 in our application. We have created a Kafka
Admin Client, Kafka Producer and Kafka Consumer in the application using the
default properties.
Once our application starts, we observe the below disconnect logs every 5
minutes for the Admin client and once
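The 5-minute disconnects match the connections.max.idle.ms default (300000 ms). If they are only noise, one option is to raise the value; a sketch with a placeholder bootstrap address and an illustrative 30-minute timeout:

```java
import java.util.Properties;

public class IdleTimeoutSketch {
    // Client configuration raising the idle-connection timeout so that
    // healthy but quiet connections are not closed every 5 minutes.
    public static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // Default is 300000 (5 minutes); raise to 30 minutes here.
        props.put("connections.max.idle.ms", "1800000");
        return props;
    }
}
```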
Hi Mickael,
Thanks for running this release!
Luke
On Thu, Dec 7, 2023 at 7:13 PM Mickael Maison wrote:
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.6.1
>
> This is a bug fix release and it includes fixes and improvements from 30
>
Hi Team,
We are using Apache Kafka 3.3.1 in our application.
Scenario 1 --> We have created a Kafka admin client from our java
application and have not configured the property "connections.max.idle.ms",
so its default value of 5 minutes is used.
In the above scenario
The Apache Kafka community is pleased to announce the release for
Apache Kafka 3.6.1
This is a bug fix release and it includes fixes and improvements from 30 JIRAs.
All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/3.6.1/RELEASE_NOTES.html
Hi Lud,
This is a known issue(KAFKA-15353
<https://issues.apache.org/jira/browse/KAFKA-15353>) and I've fixed it in
v3.5.2 (will get released soon) and v3.6.0.
Thanks.
Luke
On Mon, Dec 4, 2023 at 6:01 PM Lud Antonie
wrote:
> Hi Megh,
>
> No, the number of partitions ha
Hello, question,
If I have my kafka cluster behind a VIP for bootstrapping, is it possible
to have the controllers participate in the bootstrap process or only
brokers can?
Thanks!
ttyl
Dima
--
ddbrod...@gmail.com
"The price of reliability is the pursuit of the utmost simpl
Hello everyone,
Is there any mechanism to force Kafka Connect to ingest at a given rate per
second as opposed to tasks?
I am operating in a shared environment where the ingestion rate needs to be as
low as possible (for example, 5 requests/second as an upper limit), and as far
as I can
ced issues with 2.8.0, in which we had increased the
> number of partitions for some topics, and for those topics we used to see
> under replicated partitions after every restart.
>
> The reason this happened was, there was a bug in Kafka which assigned a new
> topicId (different from
Hi,
I just reran the upgrade from 2.7.2 to 3.5.1 and got the same
under-replicated issue:
kafka@playground2:~/kafka/bin$ ./kafka-topics.sh --describe
--under-replicated-partitions --bootstrap-server pg1:30092
Topic: smo-test Partition: 3 Leader: 1 Replicas: 1,3,2 Isr: 3,2
So it looks like
ule required
> user_super="adminsecret";
};
Now after making this change, I can make the kafka znodes world-readable
and modifiable only by brokers (as mentioned in the kafka docs).
Thanks and regards
Arjun S V
On Thu, Nov 23, 2023 at 10:57 AM arjun s v wrote:
> Hi Alex Brekken,
>
>
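The truncated JAAS stanza quoted above appears to be the standard ZooKeeper digest login module; for reference, a complete stanza of that shape looks like the following (module name assumed from the "user_super" convention, password is the example value from the thread):

```
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_super="adminsecret";
};
```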
, and for those topics we used to see
under-replicated partitions after every restart.
The reason this happened was that there was a bug in Kafka which assigned a
new topicId (different from the original topicId) to newly added partitions
in the partition.metadata file, and upon restart of the kafka brokers
Hi.
I'm not sure if KafkaManager has such a bug, though; you should first check
whether there actually are any under-replicated partitions, using the
`kafka-topics.sh` command with the `--under-replicated-partitions` option.
Lud Antonie wrote on Thu, Nov 30, 2023 at 23:41:
> Hello,
>
> After upgrading from 2.7.2 to 3
Hello,
After upgrading from 2.7.2 to 3.5.1, some topics are missing a partition on
one or two brokers.
Kafka Manager shows "Under replicated %" for the topic.
Looking at the topic, for some brokers (of 3) partitions are missing (in my
case 1 partition).
A rollback will restore
ssage will be consumed from
> that partition. Is my understanding correct?
>
> I am using Kafka client 3.5.1 with Apache Kafka broker 2.8.1 with all
> default settings on the consumer configs.
>
--
Okada Haruki
ocadar...@gmail.com
understanding correct?
I am using Kafka client 3.5.1 with Apache Kafka broker 2.8.1 with all
default settings on the consumer configs.
to a
graceful shutdown.
On Thu, Nov 23, 2023 at 12:40 PM Denis Santangelo <
denis.santang...@scorechain.com> wrote:
> Hello Denis,
>
> I'm encountering a peculiar issue with my Kafka cluster.
>
> I've been running 8 brokers on version 3.4.0 for several months, a