This is an automated email from the ASF dual-hosted git repository.

mimaison pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new ff50f28  MINOR: fix docs around leader rebalancing to reflect default of true (#7614)
ff50f28 is described below

commit ff50f28794ab65b5c5d30564c60a769e7f1a1af3
Author: Alice <[email protected]>
AuthorDate: Sun Nov 10 18:15:44 2019 +0000

    MINOR: fix docs around leader rebalancing to reflect default of true (#7614)
    
    Reviewers: Mickael Maison <[email protected]>
---
 docs/ops.html | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/ops.html b/docs/ops.html
index 0cd20bf..0b0e7ae 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -85,17 +85,17 @@
 
   <h4><a id="basic_ops_leader_balancing" 
href="#basic_ops_leader_balancing">Balancing leadership</a></h4>
 
-  Whenever a broker stops or crashes leadership for that broker's partitions transfers to other replicas. This means that by default when the broker is restarted it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.
+  Whenever a broker stops or crashes, leadership for that broker's partitions transfers to other replicas. When the broker is restarted it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.
   <p>
-  To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to either node 5 or 9 because it is earlier in the replica list. You can have the Kafka cluster try to restore leadership to the restored replicas by running the command:
-  <pre class="brush: bash;">
-  &gt; bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
-  </pre>
+  To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to either node 5 or 9 because it is earlier in the replica list. By default the Kafka cluster will try to restore leadership to the restored replicas.  This behaviour is configured with:
 
-  Since running this command can be tedious you can also configure Kafka to do this automatically by setting the following configuration:
   <pre class="brush: text;">
       auto.leader.rebalance.enable=true
   </pre>
+    You can also set this to false, but you will then need to manually restore leadership to the restored replicas by running the command:
+  <pre class="brush: bash;">
+  &gt; bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
+  </pre>
 
   <h4><a id="basic_ops_racks" href="#basic_ops_racks">Balancing Replicas 
Across Racks</a></h4>
   The rack awareness feature spreads replicas of the same partition across different racks. This extends the guarantees Kafka provides for broker-failure to cover rack-failure, limiting the risk of data loss should all the brokers on a rack fail at once. The feature can also be applied to other broker groupings such as availability zones in EC2.
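
For reference, a minimal sketch of the two options the updated leadership-balancing text describes. It assumes the broker is configured through a properties file such as config/server.properties and reuses the zk_host:port/chroot placeholder from the docs; adjust both for a real cluster.

    # Broker configuration (e.g. config/server.properties).
    # Default: the controller periodically moves leadership back to the
    # preferred (first-listed) replica for each partition.
    auto.leader.rebalance.enable=true

    # If the setting is changed to false, preferred leadership has to be
    # restored manually after a broker restart:
    > bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot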

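For the rack awareness section above, a similarly hedged sketch: the feature is driven by the broker.rack property set on each broker; the value is an arbitrary identifier, and the "us-east-1a" zone name below is only a hypothetical example.

    # Broker configuration (e.g. config/server.properties).
    # Replicas of a partition are spread across brokers whose broker.rack
    # values differ, e.g. physical racks or availability zones.
    broker.rack=us-east-1a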