This is an automated email from the ASF dual-hosted git repository.

chia7712 pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new 7413a5a9e4d KAFKA-17913 Fix KRaft controller count recommendations (#17657)
7413a5a9e4d is described below

commit 7413a5a9e4d23b092d77a488c56faf0f82fac113
Author: xijiu <[email protected]>
AuthorDate: Fri Nov 8 04:18:30 2024 +0800

    KAFKA-17913 Fix KRaft controller count recommendations (#17657)
    
    Reviewers: Chia-Ping Tsai <[email protected]>
---
 docs/ops.html | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/ops.html b/docs/ops.html
index 3270ad18356..d3990010cc5 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -3755,7 +3755,7 @@ customized state stores; for built-in state stores, currently we have:
 
   <p>In KRaft mode, specific Kafka servers are selected to be controllers (unlike the ZooKeeper-based mode, where any server can become the Controller). The servers selected to be controllers will participate in the metadata quorum. Each controller is either the active controller or a hot standby for it.</p>
 
-  <p>A Kafka admin will typically select 3 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand without availability impact. A majority of the controllers must be alive in order to maintain availability. With 3 controllers, the cluster can tolerate 1 controller failure; for configurations with more than 3 controllers, see <a href="#kraft_deployment">Deploying Considerations</a> for more details.</p>
+  <p>A Kafka admin will typically select 3 or 5 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand without availability impact. A majority of the controllers must be alive in order to maintain availability. With 3 controllers, the cluster can tolerate 1 controller failure; with 5 controllers, the cluster can tolerate 2 controller failures.</p>
 
   <p>All of the servers in a Kafka cluster discover the active controller using the <code>controller.quorum.bootstrap.servers</code> property. All the controllers should be enumerated in this property. Each controller is identified by its <code>host</code> and <code>port</code> information. For example:</p>
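(The example referenced above falls outside this hunk. A minimal sketch of such a configuration, with hypothetical hostnames and the conventional controller port, might look like:)

```properties
# Hypothetical 3-controller quorum; substitute your own hosts and ports.
controller.quorum.bootstrap.servers=controller1.example.com:9093,controller2.example.com:9093,controller3.example.com:9093
```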
 
@@ -3941,7 +3941,7 @@ foo
 
   <ul>
     <li>Kafka server's <code>process.roles</code> should be set to either <code>broker</code> or <code>controller</code> but not both. Combined mode can be used in development environments, but it should be avoided in critical deployment environments.</li>
-    <li>For redundancy, a Kafka cluster should use 3 controllers. More than 3 controllers is not recommended in critical environments. In the rare case of a partial network failure it is possible for the cluster metadata quorum to become unavailable. This limitation will be addressed in a future release of Kafka.</li>
+    <li>For redundancy, a Kafka cluster should use 3 or 5 controllers, depending on factors like cost and the number of concurrent failures your system should withstand without availability impact. In the rare case of a partial network failure it is possible for the cluster metadata quorum to become unavailable. This limitation will be addressed in a future release of Kafka.</li>
     <li>The Kafka controllers store all the metadata for the cluster in memory and on disk. We believe that for a typical Kafka cluster 5GB of main memory and 5GB of disk space on the metadata log directory is sufficient.</li>
   </ul>
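(As a sanity check on the failure-tolerance numbers in the patch above: a quorum needs a strict majority of controllers alive, so the arithmetic can be sketched as follows. This snippet is illustrative only and is not part of the commit.)

```python
def tolerated_failures(controllers: int) -> int:
    """A quorum needs a strict majority of controllers alive, so a
    cluster of N controllers tolerates floor((N - 1) / 2) failures."""
    return (controllers - 1) // 2

print(tolerated_failures(3))  # -> 1, matching "With 3 controllers ... 1 controller failure"
print(tolerated_failures(5))  # -> 2, matching "with 5 controllers ... 2 controller failures"
```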
 
