This is an automated email from the ASF dual-hosted git repository.

showuon pushed a commit to branch 3.9
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/3.9 by this push:
     new 0129877edef MINOR: Improve the KIP-853 documentation (#17598)
0129877edef is described below

commit 0129877edefadbe4effc1e51003a97b3b2ac36cd
Author: Luke Chen <[email protected]>
AuthorDate: Thu Oct 31 01:41:19 2024 +0900

    MINOR: Improve the KIP-853 documentation (#17598)
    
    In docs/ops.html, add a section discussing the difference between static 
and dynamic quorums. This section also explains how to find out which quorum 
type you have, and discusses current limitations, such as the inability to 
transition from a static quorum to a dynamic one.
    
    Add a brief section to docs/upgrade.html discussing controller membership 
change.
    
    Co-authored-by: Federico Valeri <[email protected]>, Colin P. McCabe 
<[email protected]>
    Reviewers: Justine Olshan <[email protected]>
---
 docs/ops.html     | 60 ++++++++++++++++++++++++++++++++++++++++++--
 docs/upgrade.html | 75 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 133 insertions(+), 2 deletions(-)

diff --git a/docs/ops.html b/docs/ops.html
index 405137f7b7d..de3281eec8b 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -3824,8 +3824,64 @@ In the replica description 
0@controller-0:1234:3Db5QLSqSZieL3rJBUUegA, 0 is the
 
   <h4 class="anchor-heading"><a id="kraft_reconfig" class="anchor-link"></a><a 
href="#kraft_reconfig">Controller membership changes</a></h4>
 
+  <h5 class="anchor-heading"><a id="static_versus_dynamic_kraft_quorums" 
class="anchor-link"></a><a href="#static_versus_dynamic_kraft_quorums">Static 
versus Dynamic KRaft Quorums</a></h5>
+  There are two ways to run KRaft: the old way using static controller 
quorums, and the new way
+  using KIP-853 dynamic controller quorums.<p>
+
+  When using a static quorum, the configuration file for each broker and 
controller must specify
+  the IDs, hostnames, and ports of all controllers in 
<code>controller.quorum.voters</code>.<p>
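+  For example, a static quorum of three controllers would be configured on every
+  broker and controller with an entry of the form <code>id@host:port</code> per voter
+  (the IDs, hostnames, and ports below are illustrative, not taken from this commit):<p>

```properties
# Hypothetical static quorum: all three voters listed on every node.
controller.quorum.voters=1@controller-1:9093,2@controller-2:9093,3@controller-3:9093
```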
+
+  In contrast, when using a dynamic quorum, you should set
+  <code>controller.quorum.bootstrap.servers</code> instead. This configuration 
key need not
+  contain all the controllers, but it should contain as many as possible so 
that all the servers
+  can locate the quorum. In other words, its function is much like the
+  <code>bootstrap.servers</code> configuration used by Kafka clients.<p>
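+  As a sketch, a dynamic-quorum deployment might point every broker and controller
+  at a few known controllers (the hostnames and ports here are again illustrative):<p>

```properties
# Hypothetical dynamic quorum: enough controllers to locate the quorum,
# not necessarily all of them.
controller.quorum.bootstrap.servers=controller-1:9093,controller-2:9093,controller-3:9093
```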
+
+  If you are not sure whether you are using static or dynamic quorums, you can 
determine this by
+  running something like the following:<p>
+
+<pre><code class="language-bash">
+  $ bin/kafka-features.sh --bootstrap-controller localhost:9093 describe
+</code></pre><p>
+
+  If the <code>kraft.version</code> feature is at level 0 or absent, you are 
using a static quorum.
+  If it is at level 1 or above, you are using a dynamic quorum. For example, 
here is the output
+  for a static quorum:<p>
+<pre><code class="language-bash">
+Feature: kraft.version  SupportedMinVersion: 0  SupportedMaxVersion: 1  
FinalizedVersionLevel: 0 Epoch: 5
+Feature: metadata.version       SupportedMinVersion: 3.0-IV1    
SupportedMaxVersion: 3.9-IV0 FinalizedVersionLevel: 3.9-IV0  Epoch: 5
+</code></pre><p>
+
+  Here is another example of a static quorum, one where the 
<code>kraft.version</code> feature does not appear at all:<p>
+<pre><code class="language-bash">
+Feature: metadata.version       SupportedMinVersion: 3.0-IV1    
SupportedMaxVersion: 3.8-IV0 FinalizedVersionLevel: 3.8-IV0  Epoch: 5
+</code></pre><p>
+
+  Here is an example of a dynamic quorum:<p>
+<pre><code class="language-bash">
+Feature: kraft.version  SupportedMinVersion: 0  SupportedMaxVersion: 1  
FinalizedVersionLevel: 1 Epoch: 5
+Feature: metadata.version       SupportedMinVersion: 3.0-IV1    
SupportedMaxVersion: 3.9-IV0 FinalizedVersionLevel: 3.9-IV0  Epoch: 5
+</code></pre><p>
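+  Outputs like the ones above can also be checked mechanically, for example in
+  automation. The following shell helper is a hypothetical sketch (it is not part
+  of Kafka); it inspects captured <code>kafka-features.sh describe</code> output
+  and applies the rule just described:<p>

```shell
# Hypothetical helper (not part of Kafka): classify a quorum from captured
# "kafka-features.sh describe" output. A FinalizedVersionLevel of 1 or more
# for kraft.version means dynamic; level 0 or a missing kraft.version line
# means static.
classify_quorum() {
  level=$(printf '%s\n' "$1" | awk '/^Feature: kraft.version/ {
    for (i = 1; i <= NF; i++) if ($i == "FinalizedVersionLevel:") print $(i + 1)
  }')
  if [ "${level:-0}" -ge 1 ]; then echo dynamic; else echo static; fi
}

classify_quorum 'Feature: kraft.version  SupportedMinVersion: 0  SupportedMaxVersion: 1  FinalizedVersionLevel: 1 Epoch: 5'
# prints "dynamic"
```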
+
+  The static versus dynamic nature of the quorum is determined at the time of 
formatting.
+  Specifically, the quorum will be formatted as dynamic if 
<code>controller.quorum.voters</code> is
+  <b>not</b> present, and if the software version is Apache Kafka 3.9 or 
newer. If you have
+  followed the instructions earlier in this document, you will get a dynamic 
quorum.<p>
+
+  If you would like the formatting process to fail when a dynamic quorum 
cannot be achieved, format
+  your controllers with the <code>--feature kraft.version=1</code> flag. 
(Note that you should not
+  supply this flag when formatting brokers -- only when formatting 
controllers.)<p>
+
+<pre><code class="language-bash">
+  $ bin/kafka-storage.sh format -t KAFKA_CLUSTER_ID --feature kraft.version=1 
-c controller_static.properties
+  Cannot set kraft.version to 1 unless KIP-853 configuration is present. Try 
removing the --feature flag for kraft.version.
+</code></pre><p>
+
+  Note: Currently it is <b>not</b> possible to convert a cluster using a 
static controller quorum
+  to a dynamic controller quorum. Support for this conversion is planned for 
a future release.
+
   <h5 class="anchor-heading"><a id="kraft_reconfig_add" 
class="anchor-link"></a><a href="#kraft_reconfig_add">Add New 
Controller</a></h5>
-  If the KRaft Controller cluster already exists, the cluster can be expanded 
by first provisioning a new controller using the <a 
href="#kraft_storage_observers">kafka-storage tool</a> and starting the 
controller.
+  If a dynamic controller cluster already exists, it can be expanded by first 
provisioning a new controller using the <a 
href="#kraft_storage_observers">kafka-storage.sh tool</a> and starting the 
controller.
 
   After starting the controller, the replication to the new controller can be 
monitored using the <code>kafka-metadata-quorum describe --replication</code> 
command. Once the new controller has caught up to the active controller, it can 
be added to the cluster using the <code>kafka-metadata-quorum 
add-controller</code> command.
 
@@ -3836,7 +3892,7 @@ In the replica description 
0@controller-0:1234:3Db5QLSqSZieL3rJBUUegA, 0 is the
   <pre><code class="language-bash">$ bin/kafka-metadata-quorum 
--command-config controller.properties --bootstrap-controller localhost:9092 
add-controller</code></pre>
 
   <h5 class="anchor-heading"><a id="kraft_reconfig_remove" 
class="anchor-link"></a><a href="#kraft_reconfig_remove">Remove 
Controller</a></h5>
-  If the KRaft Controller cluster already exists, the cluster can be shrunk 
using the <code>kafka-metadata-quorum remove-controller</code> command. Until 
KIP-996: Pre-vote has been implemented and released, it is recommended to 
shutdown the controller that will be removed before running the 
remove-controller command.
+  If a dynamic controller cluster already exists, it can be shrunk using the 
<code>bin/kafka-metadata-quorum.sh remove-controller</code> command. Until 
KIP-996: Pre-vote has been implemented and released, it is recommended to 
shut down the controller that will be removed before running the 
remove-controller command.
 
   When using broker endpoints use the --bootstrap-server flag:
   <pre><code class="language-bash">$ bin/kafka-metadata-quorum 
--bootstrap-server localhost:9092 remove-controller --controller-id &lt;id&gt; 
--controller-directory-id &lt;directory-id&gt;</code></pre>
diff --git a/docs/upgrade.html b/docs/upgrade.html
index 478d9f0488c..d450f0f5146 100644
--- a/docs/upgrade.html
+++ b/docs/upgrade.html
@@ -41,6 +41,81 @@
                     See <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-956+Tiered+Storage+Quotas";>KIP-956</a>
 for more details.</li>
             </ul>
         </li>
+        <li>Controller membership changes in KRaft are now supported when 
formatting with the <code>--standalone</code> or 
<code>--initial-controllers</code> option.
+            See <a 
href="/{{version}}/documentation.html#kraft_reconfig">here</a> for more 
details.</li>
+    </ul>
+
+<h4><a id="upgrade_3_8_1" href="#upgrade_3_8_1">Upgrading to 3.8.1 from any 
version 0.8.x through 3.7.x</a></h4>
+
+    <h5><a id="upgrade_381_zk" href="#upgrade_381_zk">Upgrading 
ZooKeeper-based clusters</a></h5>
+    <p><b>If you are upgrading from a version prior to 2.1.x, please see the 
note in step 5 below about the change to the schema used to store consumer 
offsets.
+        Once you have changed the inter.broker.protocol.version to the latest 
version, it will not be possible to downgrade to a version prior to 2.1.</b></p>
+
+    <p><b>For a rolling upgrade:</b></p>
+
+    <ol>
+        <li>Update server.properties on all brokers and add the following 
properties. CURRENT_KAFKA_VERSION refers to the version you
+            are upgrading from. CURRENT_MESSAGE_FORMAT_VERSION refers to the 
message format version currently in use. If you have previously
+            overridden the message format version, you should keep its current 
value. Alternatively, if you are upgrading from a version prior
+            to 0.11.0.x, then CURRENT_MESSAGE_FORMAT_VERSION should be set to 
match CURRENT_KAFKA_VERSION.
+            <ul>
+                <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 
<code>3.7</code>, <code>3.6</code>, etc.)</li>
+                <li>log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION  
(See <a href="#upgrade_10_performance_impact">potential performance impact
+                    following the upgrade</a> for the details on what this 
configuration does.)</li>
+            </ul>
+            If you are upgrading from version 0.11.0.x or above, and you have 
not overridden the message format, then you only need to override
+            the inter-broker protocol version.
+            <ul>
+                <li>inter.broker.protocol.version=CURRENT_KAFKA_VERSION (e.g. 
<code>3.7</code>, <code>3.6</code>, etc.)</li>
+            </ul>
+        </li>
+        <li>Upgrade the brokers one at a time: shut down the broker, update 
the code, and restart it. Once you have done so, the
+            brokers will be running the latest version and you can verify that 
the cluster's behavior and performance meet expectations.
+            It is still possible to downgrade at this point if there are any 
problems.
+        </li>
+        <li>Once the cluster's behavior and performance have been verified, 
bump the protocol version by editing
+            <code>inter.broker.protocol.version</code> and setting it to 
<code>3.8</code>.
+        </li>
+        <li>Restart the brokers one by one for the new protocol version to 
take effect. Once the brokers begin using the latest
+            protocol version, it will no longer be possible to downgrade the 
cluster to an older version.
+        </li>
+        <li>If you have overridden the message format version as instructed 
above, then you need to do one more rolling restart to
+            upgrade it to its latest version. Once all (or most) consumers 
have been upgraded to 0.11.0 or later,
+            change log.message.format.version to 3.8 on each broker and 
restart them one by one. Note that the older Scala clients,
+            which are no longer maintained, do not support the message format 
introduced in 0.11, so to avoid conversion costs
+            (or to take advantage of <a 
href="#upgrade_11_exactly_once_semantics">exactly once semantics</a>),
+            the newer Java clients must be used.
+        </li>
+    </ol>
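+    As a concrete sketch of steps 1, 3, and 5 above, a broker upgrading from an
+    illustrative starting version of 3.7 with an overridden message format would
+    initially carry a server.properties fragment like the following:

```properties
# Step 1: pin both versions to the release being upgraded from (3.7 is illustrative).
inter.broker.protocol.version=3.7
log.message.format.version=3.7
# Steps 3 and 5: once the upgraded cluster has been verified, raise both values
# to 3.8 and perform the rolling restarts described above.
```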
+
+    <h5><a id="upgrade_380_kraft" href="#upgrade_380_kraft">Upgrading 
KRaft-based clusters</a></h5>
+    <p><b>If you are upgrading from a version prior to 3.3.0, please see the 
note in step 3 below. Once you have changed the metadata.version to the latest 
version, it will not be possible to downgrade to a version prior to 
3.3-IV0.</b></p>
+
+    <p><b>For a rolling upgrade:</b></p>
+
+    <ol>
+        <li>Upgrade the brokers one at a time: shut down the broker, update 
the code, and restart it. Once you have done so, the
+            brokers will be running the latest version and you can verify that 
the cluster's behavior and performance meet expectations.
+        </li>
+        <li>Once the cluster's behavior and performance have been verified, 
bump the metadata.version by running
+            <code>
+                bin/kafka-features.sh upgrade --metadata 3.8
+            </code>
+        </li>
+        <li>Note that cluster metadata downgrade is not supported in this 
version since it has metadata changes.
+            Every <a 
href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java";>MetadataVersion</a>
+            after 3.2.x has a boolean parameter that indicates if there are 
metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means 
this version has metadata changes).
+            Given your current and target versions, a downgrade is only 
possible if there are no metadata changes in the versions between.</li>
+    </ol>
+
+    <h5><a id="upgrade_381_notable" href="#upgrade_381_notable">Notable 
changes in 3.8.1</a></h5>
+    <ul>
+        <li>If you run your Kafka clusters without execution permission for 
the <code>/tmp</code> partition, Kafka will not work properly: it might 
refuse to start, or fail when producing and consuming messages. This is 
because the compression libraries <code>zstd-jni</code> and 
<code>snappy</code> extract and load native code from a temporary directory.
+            To remediate this problem, set the JVM system properties 
<code>ZstdTempFolder</code> and <code>org.xerial.snappy.tempdir</code> to a 
directory with execution permissions.
+            For example, this could be done via the <code>KAFKA_OPTS</code> 
environment variable as follows: <code>export 
KAFKA_OPTS="-DZstdTempFolder=/opt/kafka/tmp 
-Dorg.xerial.snappy.tempdir=/opt/kafka/tmp"</code>.
+            This is a known issue in version 3.8.0.
+        </li>
         <li>In 3.8.0 the <code>kafka.utils.Thottler</code> metric was 
accidentally renamed to 
<code>org.apache.kafka.storage.internals.utils.Throttler</code>.
             This change has been reverted and the metric is now named 
<code>kafka.utils.Thottler</code> again.
         </li>
