A 5-node quorum doesn't make a lot of sense in a setting where those nodes
are also Kafka brokers. When they're ZooKeeper voters, a quorum of 5
makes a lot of sense, because you can take an unscheduled voter failure
during a rolling-reboot scheduled maintenance without significant service
impact. I wouldn't expect an improvement in latencies either.
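
To make the arithmetic concrete, here's a throwaway sketch (plain Java, no
Kafka or ZooKeeper APIs, just the majority math): with 5 voters a majority
is 3, so the ensemble tolerates 2 failures; take one voter down for a
rolling reboot and you can still absorb an unscheduled failure, which a
3-voter ensemble cannot.

public class QuorumMath {
    // Majority and fault tolerance for a simple majority quorum.
    static int majority(int voters) { return voters / 2 + 1; }
    static int tolerableFailures(int voters) { return voters - majority(voters); }

    public static void main(String[] args) {
        for (int voters : new int[] {3, 5}) {
            int down = 1; // one voter deliberately down for a rolling reboot
            int spare = Math.max(tolerableFailures(voters) - down, 0);
            System.out.printf(
                "%d voters: majority=%d, tolerates %d failure(s); with one down "
                    + "for maintenance, %d unscheduled failure(s) still leave a quorum%n",
                voters, majority(voters), tolerableFailures(voters), spare);
        }
    }
}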
>
> On Thu, 1 Feb 2024 at 22:53, Michael K. Edwards wrote:
>
The interesting numbers are the recovery times after 1) the Kafka broker
currently acting as the "active" controller (or the sole controller in a
ZooKeeper-based deployment) goes away; 2) the Kafka broker currently acting
as the consumer group coordinator for a consumer group with many partitions
goes away.
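
If anyone wants to put numbers on case 1) themselves, here's a rough sketch
of one way to do it with the Java Admin client: poll describeCluster() for
the controller while you stop the current one. The bootstrap address and
the 100 ms poll interval are placeholders, and the number you get is only
as good as your poll resolution; a back-of-the-envelope harness, not a
benchmark.

import java.time.Duration;
import java.time.Instant;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

public class ControllerFailoverTimer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; point this at your test cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        try (Admin admin = Admin.create(props)) {
            Node before = admin.describeCluster().controller().get();
            System.out.println("Active controller is broker " + before.id()
                + "; stop it now, timing starts immediately.");
            Instant start = Instant.now();
            while (true) {
                try {
                    Node now = admin.describeCluster().controller().get();
                    if (now != null && now.id() != before.id()) {
                        System.out.printf("Broker %d took over after ~%d ms%n",
                            now.id(), Duration.between(start, Instant.now()).toMillis());
                        return;
                    }
                } catch (ExecutionException e) {
                    // Metadata requests can fail transiently while the controller moves.
                }
                Thread.sleep(100); // poll interval bounds the measurement resolution
            }
        }
    }
}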
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum#KIP-500:ReplaceZooKeeperwithaSelf-ManagedMetadataQuorum-Motivation
>
> Ron
>
> > On May 10, 2020, at 1:35 PM, Michael K. Edwards wrote:
What is the actual goal of removing the ZooKeeper dependency? In my
experience, if ZooKeeper is properly provisioned and deployed, it's largely
trouble-free. (You do need to know how to use observers properly.) There
are some subtleties about timeouts and leadership changes, but they're
pretty well understood.
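
To illustrate the timeout subtlety for anyone who hasn't been bitten by it:
in the ZooKeeper Java client, Disconnected and Expired are very different
events, and treating them the same is the classic mistake. A minimal sketch
follows; the connect string and the 15-second session timeout are made-up
values, not recommendations.

import java.io.IOException;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class SessionWatcher implements Watcher {
    private final CountDownLatch connected = new CountDownLatch(1);

    @Override
    public void process(WatchedEvent event) {
        switch (event.getState()) {
            case SyncConnected:
                connected.countDown();
                break;
            case Disconnected:
                // Transient: the session may be re-established within the
                // timeout, so don't tear anything down yet.
                System.out.println("Disconnected; riding out the session timeout");
                break;
            case Expired:
                // Fatal for this handle: ephemeral nodes (e.g. leadership
                // markers) are gone, and a new ZooKeeper handle is required.
                System.out.println("Session expired; rebuild the client");
                break;
            default:
                break;
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        SessionWatcher watcher = new SessionWatcher();
        // Placeholder ensemble and a 15s session timeout: long enough to
        // ride out a leader election, short enough to detect failure.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, watcher);
        watcher.connected.await();
        System.out.println("Connected, session 0x" + Long.toHexString(zk.getSessionId()));
    }
}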
> Jamie
>
> -----Original Message-----
> From: Michael K. Edwards
> To: dev@kafka.apache.org
> Sent: Tue, 14 Apr 2020 20:50
> Subject: Re: Preferred Partition Leaders
>
I have clients with a similar need relating to disaster recovery. (Three
replicas per partition within a data center / AWS AZ/region, fourth replica
elsewhere, ineligible to become the partition leader without manual
intervention.)
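
For what it's worth, the closest I know of with the stock tooling (the
Kafka 2.4+ Admin API) is to order the replica list so the remote broker is
last, since the first replica in the list is the preferred leader. That
keeps leadership in-region under preferred-leader election, but it does not
make the remote replica strictly ineligible; if the three local replicas
are gone it can still be elected, which is exactly where the
manual-intervention requirement comes in. Rough sketch below, with a
placeholder topic name and broker ids.

import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class RemoteReplicaLast {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("dr-topic", 0); // placeholder
            // Brokers 1-3 are in-region; 101 is the remote DR broker, listed
            // last so it is never the preferred leader.
            List<Integer> replicas = List.of(1, 2, 3, 101);
            admin.alterPartitionReassignments(
                Map.of(tp, Optional.of(new NewPartitionReassignment(replicas)))
            ).all().get();
            System.out.println("Reassignment submitted: " + replicas);
        }
    }
}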
On Tue, Apr 14, 2020 at 12:31 PM Łukasz Antoniak wrote:
> Hi