This is an automated email from the ASF dual-hosted git repository.

stanislavkozlovski pushed a commit to branch update-36
in repository https://gitbox.apache.org/repos/asf/kafka-site.git

commit 46fdab20350f51bdd3c94f180185ddda07874b59
Author: Stanislav <[email protected]>
AuthorDate: Thu Feb 22 14:08:28 2024 +0100

    MINOR: Copy over apache/kafka/3.6 docs into here
---
 36/documentation.html                   | 12 ++---
 36/generated/connect_metrics.html       |  4 +-
 36/generated/connect_rest.yaml          |  2 +-
 36/generated/streams_config.html        |  8 +--
 36/js/templateData.js                   |  2 +-
 36/ops.html                             | 90 ++++-----------------------------
 36/streams/developer-guide/dsl-api.html | 16 ++++++
 36/streams/upgrade-guide.html           | 13 ++++-
 36/toc.html                             |  2 +-
 36/upgrade.html                         | 75 +++++++++++++--------------
 10 files changed, 86 insertions(+), 138 deletions(-)

diff --git a/36/documentation.html b/36/documentation.html
index 6dad7fe3..3589c446 100644
--- a/36/documentation.html
+++ b/36/documentation.html
@@ -33,7 +33,7 @@
     <!--//#include virtual="../includes/_docs_banner.htm" -->
     
     <h1>Documentation</h1>
-    <h3>Kafka 3.6 Documentation</h3>
+    <h3>Kafka 3.4 Documentation</h3>
     Prior releases: <a href="/07/documentation.html">0.7.x</a>, 
                     <a href="/08/documentation.html">0.8.0</a>, 
                     <a href="/081/documentation.html">0.8.1.X</a>, 
@@ -54,12 +54,10 @@
                     <a href="/26/documentation.html">2.6.X</a>, 
                     <a href="/27/documentation.html">2.7.X</a>,
                     <a href="/28/documentation.html">2.8.X</a>,
-                    <a href="/30/documentation.html">3.0.X</a>,
-                    <a href="/31/documentation.html">3.1.X</a>,
-                    <a href="/32/documentation.html">3.2.X</a>,
-                    <a href="/33/documentation.html">3.3.X</a>,
-                    <a href="/34/documentation.html">3.4.X</a>,
-                    <a href="/35/documentation.html">3.5.X</a>.
+                    <a href="/30/documentation.html">3.0.X</a>.
+                    <a href="/31/documentation.html">3.1.X</a>.
+                    <a href="/32/documentation.html">3.2.X</a>.
+                    <a href="/33/documentation.html">3.3.X</a>.
 
   <h2 class="anchor-heading"><a id="gettingStarted" class="anchor-link"></a><a href="#gettingStarted">1. Getting Started</a></h2>
      <h3 class="anchor-heading"><a id="introduction" class="anchor-link"></a><a href="#introduction">1.1 Introduction</a></h3>
diff --git a/36/generated/connect_metrics.html b/36/generated/connect_metrics.html
index 17b78e89..addf60ce 100644
--- a/36/generated/connect_metrics.html
+++ b/36/generated/connect_metrics.html
@@ -1,5 +1,5 @@
-[2023-09-15 00:40:42,725] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:693)
-[2023-09-15 00:40:42,729] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:703)
+[2024-02-22 11:02:50,169] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:693)
+[2024-02-22 11:02:50,170] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics:703)
 <table class="data-table"><tbody>
 <tr>
 <td colspan=3 class="mbeanName" style="background-color:#ccc; font-weight: bold;">kafka.connect:type=connect-worker-metrics</td></tr>
diff --git a/36/generated/connect_rest.yaml b/36/generated/connect_rest.yaml
index 03d98874..03d51602 100644
--- a/36/generated/connect_rest.yaml
+++ b/36/generated/connect_rest.yaml
@@ -8,7 +8,7 @@ info:
     name: Apache 2.0
     url: https://www.apache.org/licenses/LICENSE-2.0.html
   title: Kafka Connect REST API
-  version: 3.6.1
+  version: 3.6.2-SNAPSHOT
 paths:
   /:
     get:
diff --git a/36/generated/streams_config.html b/36/generated/streams_config.html
index 695de446..bc329f99 100644
--- a/36/generated/streams_config.html
+++ b/36/generated/streams_config.html
@@ -34,7 +34,7 @@
 <p>Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
-<tr><th>Default:</th><td>/var/folders/1w/r49gc42j1ml6ddw0lhlvt9pw0000gn/T//kafka-streams</td></tr>
+<tr><th>Default:</th><td>/var/folders/z6/tv_ggjzd3v3b5vl2jy2bscph0000gp/T//kafka-streams</td></tr>
 <tr><th>Valid Values:</th><td></td></tr>
 <tr><th>Importance:</th><td>high</td></tr>
 </tbody></table>
@@ -285,7 +285,7 @@
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>none</td></tr>
-<tr><th>Valid Values:</th><td>org.apache.kafka.streams.StreamsConfig$$Lambda$17/0x0000000800094840@5f341870</td></tr>
+<tr><th>Valid Values:</th><td>org.apache.kafka.streams.StreamsConfig$$Lambda$21/0x0000000800084000@59ec2012</td></tr>
 <tr><th>Importance:</th><td>medium</td></tr>
 </tbody></table>
 </li>
@@ -541,11 +541,11 @@
 </li>
 <li>
 <h4><a id="upgrade.from"></a><a id="streamsconfigs_upgrade.from" href="#streamsconfigs_upgrade.from">upgrade.from</a></h4>
-<p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4" (for upgrading from the corresponding old version).</p>
+<p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4", "3.5" (for upgrading from the corresponding old version).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
 <tr><th>Default:</th><td>null</td></tr>
-<tr><th>Valid Values:</th><td>[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4]</td></tr>
+<tr><th>Valid Values:</th><td>[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5]</td></tr>
 <tr><th>Importance:</th><td>low</td></tr>
 </tbody></table>
 </li>
diff --git a/36/js/templateData.js b/36/js/templateData.js
index 36c20595..a391696d 100644
--- a/36/js/templateData.js
+++ b/36/js/templateData.js
@@ -19,6 +19,6 @@ limitations under the License.
 var context={
     "version": "36",
     "dotVersion": "3.6",
-    "fullDotVersion": "3.6.1",
+    "fullDotVersion": "3.6.2-SNAPSHOT",
     "scalaVersion": "2.13"
 };
diff --git a/36/ops.html b/36/ops.html
index 0ae686da..b6a30aa2 100644
--- a/36/ops.html
+++ b/36/ops.html
@@ -1453,8 +1453,8 @@ $ bin/kafka-acls.sh \
       </tr>
       <tr>
         <td>Byte in rate from other brokers</td>
-        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec</td>
-        <td>Byte in (from the other brokers) rate across all topics.</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec,topic=([-.\w]+)</td>
+        <td>Byte in (from the other brokers) rate per topic. Omitting 'topic=(...)' will yield the all-topic rate.</td>
       </tr>
       <tr>
         <td>Controller Request rate from Broker</td>
@@ -1537,8 +1537,8 @@ $ bin/kafka-acls.sh \
       </tr>
       <tr>
         <td>Byte out rate to other brokers</td>
-        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec</td>
-        <td>Byte out (to the other brokers) rate across all topics</td>
+        <td>kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec,topic=([-.\w]+)</td>
+        <td>Byte out (to the other brokers) rate per topic. Omitting 'topic=(...)' will yield the all-topic rate.</td>
       </tr>
       <tr>
         <td>Rejected byte rate</td>
@@ -3984,95 +3984,27 @@ listeners=CONTROLLER://:9093
  If unset, The value in <code>retention.ms</code> and <code>retention.bytes</code> will be used.
 </p>
 
-<h4 class="anchor-heading"><a id="tiered_storage_config_ex" class="anchor-link"></a><a href="#tiered_storage_config_ex">Quick Start Example</a></h4>
-
-<p>Apache Kafka doesn't provide an out-of-the-box RemoteStorageManager implementation. To have a preview of the tiered storage
-  feature, the <a href="https://github.com/apache/kafka/blob/trunk/storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorage.java">LocalTieredStorage</a>
-  implemented for integration test can be used, which will create a temporary directory in local storage to simulate the remote storage.
-</p>
-
-<p>To adopt the `LocalTieredStorage`, the test library needs to be built locally</p>
-<pre># please checkout to the specific version tag you're using before building it
-# ex: `git checkout 3.6.1`
-./gradlew clean :storage:testJar</pre>
-<p>After build successfully, there should be a `kafka-storage-x.x.x-test.jar` file under `storage/build/libs`.
-Next, setting configurations in the broker side to enable tiered storage feature.</p>
+<h4 class="anchor-heading"><a id="tiered_storage_config_ex" class="anchor-link"></a><a href="#tiered_storage_config_ex">Configurations Example</a></h4>
 
+<p>Here is a sample configuration to enable the tiered storage feature on the broker side:
 <pre>
 # Sample Zookeeper/Kraft broker server.properties listening on PLAINTEXT://:9092
 remote.log.storage.system.enable=true
-
-# Setting the listener for the clients in RemoteLogMetadataManager to talk to the brokers.
+# Please provide the implementation for remoteStorageManager. This is the mandatory configuration for tiered storage.
+# remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.NoOpRemoteStorageManager
+# Using the "PLAINTEXT" listener for the clients in RemoteLogMetadataManager to talk to the brokers.
 remote.log.metadata.manager.listener.name=PLAINTEXT
-
-# Please provide the implementation info for remoteStorageManager.
-# This is the mandatory configuration for tiered storage.
-# Here, we use the `LocalTieredStorage` built above.
-remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
-remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-x.x.x-test.jar
-
-# These 2 prefix are default values, but customizable
-remote.log.storage.manager.impl.prefix=rsm.config.
-remote.log.metadata.manager.impl.prefix=rlmm.config.
-
-# Configure the directory used for `LocalTieredStorage`
-# Note, please make sure the brokers need to have access to this directory
-rsm.config.dir=/tmp/kafka-remote-storage
-
-# This needs to be changed if number of brokers in the cluster is more than 1
-rlmm.config.remote.log.metadata.topic.replication.factor=1
-
-# Try to speed up the log retention check interval for testing
-log.retention.check.interval.ms=1000
 </pre>
 </p>
 
-<p>Following <a href="#quickstart_startserver">quick start guide</a> to start up the kafka environment.
-  Then, create a topic with tiered storage enabled with configs:
-
-<pre>
-# remote.storage.enable=true -> enables tiered storage on the topic
-# local.retention.ms=1000 -> The number of milliseconds to keep the local log segment before it gets deleted.
-  Note that a local log segment is eligible for deletion only after it gets uploaded to remote.
-# retention.ms=3600000 -> when segments exceed this time, the segments in remote storage will be deleted
-# segment.bytes=1048576 -> for test only, to speed up the log segment rolling interval
-# file.delete.delay.ms=10000 -> for test only, to speed up the local-log segment file delete delay
-
-bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server localhost:9092 \
---config remote.storage.enable=true --config local.retention.ms=1000 --config retention.ms=3600000 \
---config segment.bytes=1048576 --config file.delete.delay.ms=1000
+<p>After the broker is started, create a topic with tiered storage enabled and a small log retention time to try out this feature:
+<pre>bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server localhost:9092 --config remote.storage.enable=true --config local.retention.ms=1000
 </pre>
 </p>
 
-<p>Try to send messages to the `tieredTopic` topic to roll the log segment:</p>
-
-<pre>
-bin/kafka-producer-perf-test.sh --topic tieredTopic --num-records 1200 --record-size 1024 --throughput -1 --producer-props bootstrap.servers=localhost:9092
-</pre>
-
 <p>Then, after the active segment is rolled, the old segment should be moved to the remote storage and get deleted.
-  This can be verified by checking the remote log directory configured above. For example:
 </p>
 
-<pre> > ls /tmp/kafka-remote-storage/kafka-tiered-storage/tieredTopic-0-jF8s79t9SrG_PNqlwv7bAA
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.index
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.snapshot
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.leader_epoch_checkpoint
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.timeindex
-00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.log
-</pre>
-
-<p>Lastly, we can try to consume some data from the beginning and print offset number, to make sure it will successfully fetch offset 0 from the remote storage.</p>
-
-<pre>bin/kafka-console-consumer.sh --topic tieredTopic --from-beginning --max-messages 1 --bootstrap-server localhost:9092 --property print.offset=true</pre>
-
-<p>Please note, if you want to disable tiered storage at the cluster level, you should delete the tiered storage enabled topics explicitly.
-  Attempting to disable tiered storage at the cluster level without deleting the topics using tiered storage will result in an exception during startup.</p>
-
-<pre>bin/kafka-topics.sh --delete --topic tieredTopic --bootstrap-server localhost:9092</pre>
-
-<p>After topics are deleted, you're safe to set <code>remote.log.storage.system.enable=false</code> in the broker configuration.</p>
-
 <h4 class="anchor-heading"><a id="tiered_storage_limitation" class="anchor-link"></a><a href="#tiered_storage_limitation">Limitations</a></h4>

 <p>While the early access release of Tiered Storage offers the opportunity to try out this new feature, it is important to be aware of the following limitations:
diff --git a/36/streams/developer-guide/dsl-api.html b/36/streams/developer-guide/dsl-api.html
index 08bf2ef8..ed2afb58 100644
--- a/36/streams/developer-guide/dsl-api.html
+++ b/36/streams/developer-guide/dsl-api.html
@@ -2818,6 +2818,7 @@ KStream&lt;String, String&gt; joined = left.join(right,
    (leftValue, rightValue) -&gt; &quot;left=&quot; + leftValue + &quot;, right=&quot; + rightValue, /* ValueJoiner */
     Joined.keySerde(Serdes.String()) /* key */
       .withValueSerde(Serdes.Long()) /* left value */
+      .withGracePeriod(Duration.ZERO) /* grace period */
   );
 
 // Java 7 example
@@ -2830,6 +2831,7 @@ KStream&lt;String, String&gt; joined = left.join(right,
     },
     Joined.keySerde(Serdes.String()) /* key */
       .withValueSerde(Serdes.Long()) /* left value */
+      .withGracePeriod(Duration.ZERO) /* grace period */
   );</code></pre>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -2849,6 +2851,12 @@ KStream&lt;String, String&gt; joined = left.join(right,
                                        <li>When the table is <a class="reference internal" href="#versioned-state-stores"><span class="std std-ref">versioned</span></a>,
                                            the table record to join with is determined by performing a timestamped lookup, i.e., the table record which is joined will be the latest-by-timestamp record with timestamp
                                            less than or equal to the stream record timestamp. If the stream record timestamp is older than the table's history retention, then the record is dropped.</li>
+                                        <li>To use the grace period, the table needs to be <a class="reference internal" href="#versioned-state-stores"><span class="std std-ref">versioned</span></a>.
+                                            This will cause the stream to buffer for the specified grace period before trying to find a matching record with the right timestamp in the table.
+                                            The grace period is useful when a record in the table has a timestamp less than or equal to the stream record timestamp but arrives after the stream record.
+                                            If the table record arrives within the grace period, the join will still occur.
+                                            If the table record does not arrive within the grace period, the join will continue as normal.
+                                        </li>
                                     </ul>
                                     <p class="last">See the semantics overview 
at the bottom of this section for a detailed description.</p>
                                 </td>
@@ -2872,6 +2880,7 @@ KStream&lt;String, String&gt; joined = left.leftJoin(right,
    (leftValue, rightValue) -&gt; &quot;left=&quot; + leftValue + &quot;, right=&quot; + rightValue, /* ValueJoiner */
     Joined.keySerde(Serdes.String()) /* key */
       .withValueSerde(Serdes.Long()) /* left value */
+      .withGracePeriod(Duration.ZERO) /* grace period */
   );
 
 // Java 7 example
@@ -2884,6 +2893,7 @@ KStream&lt;String, String&gt; joined = left.leftJoin(right,
     },
     Joined.keySerde(Serdes.String()) /* key */
       .withValueSerde(Serdes.Long()) /* left value */
+      .withGracePeriod(Duration.ZERO) /* grace period */
   );</code></pre>
                                     <p>Detailed behavior:</p>
                                     <ul>
@@ -2906,6 +2916,12 @@ KStream&lt;String, String&gt; joined = left.leftJoin(right,
                                        <li>When the table is <a class="reference internal" href="#versioned-state-stores"><span class="std std-ref">versioned</span></a>,
                                            the table record to join with is determined by performing a timestamped lookup, i.e., the table record which is joined will be the latest-by-timestamp record with timestamp
                                            less than or equal to the stream record timestamp. If the stream record timestamp is older than the table's history retention, then the record that is joined will be <code class="docutils literal"><span class="pre">null</span></code>.</li>
+                                        <li>To use the grace period, the table needs to be <a class="reference internal" href="#versioned-state-stores"><span class="std std-ref">versioned</span></a>.
+                                            This will cause the stream to buffer for the specified grace period before trying to find a matching record with the right timestamp in the table.
+                                            The grace period is useful when a record in the table has a timestamp less than or equal to the stream record timestamp but arrives after the stream record.
+                                            If the table record arrives within the grace period, the join will still occur.
+                                            If the table record does not arrive within the grace period, the join will continue as normal.
+                                        </li>
                                     </ul>
                                     <p class="last">See the semantics overview 
at the bottom of this section for a detailed description.</p>
                                 </td>
diff --git a/36/streams/upgrade-guide.html b/36/streams/upgrade-guide.html
index 6d75d724..6f40747d 100644
--- a/36/streams/upgrade-guide.html
+++ b/36/streams/upgrade-guide.html
@@ -147,6 +147,15 @@
      as upper and lower bound (with semantics "no bound") to simplify the usage of the <code>RangeQuery</code> class.
     </p>
 
+    <p>
+        KStreams-to-KTable joins now have an option for adding a grace period.
+        The grace period is enabled on the <code>Joined</code> object using the <code>withGracePeriod()</code> method.
+        This change was introduced in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-923%3A+Add+A+Grace+Period+to+Stream+Table+Join">KIP-923</a>.
+        To use the grace period option in the Stream-Table join, the table must be
+        <a href="/{{version}}/documentation/streams/developer-guide/dsl-api.html#versioned-state-stores">versioned</a>.
+        For more information, including how it can be enabled and further configured, see the <a href="/{{version}}/documentation/streams/developer-guide/config-streams.html#rack-aware-assignment-strategy"><b>Kafka Streams Developer Guide</b></a>.
+    </p>
+
    <h3><a id="streams_api_changes_350" href="#streams_api_changes_350">Streams API changes in 3.5.0</a></h3>
     <p>      
       A new state store type, versioned key-value stores, was introduced in
@@ -1324,7 +1333,7 @@
             <td>Kafka Streams API (rows)</td>
             <td>0.10.0.x</td>
             <td>0.10.1.x and 0.10.2.x</td>
-            <td>0.11.0.x and<br>1.0.x and<br>1.1.x and<br>2.0.x and<br>2.1.x and<br>2.2.x and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x</td>
+            <td>0.11.0.x and<br>1.0.x and<br>1.1.x and<br>2.0.x and<br>2.1.x and<br>2.2.x and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x and<br>3.6.x</td>
           </tr>
           <tr>
             <td>0.10.0.x</td>
@@ -1351,7 +1360,7 @@
            <td>compatible; requires message format 0.10 or higher;<br>if message headers are used, message format 0.11<br>or higher required</td>
           </tr>
           <tr>
-            <td>2.2.1 and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x</td>
+            <td>2.2.1 and<br>2.3.x and<br>2.4.x and<br>2.5.x and<br>2.6.x and<br>2.7.x and<br>2.8.x and<br>3.0.x and<br>3.1.x and<br>3.2.x and<br>3.3.x and<br>3.4.x and<br>3.5.x and<br>3.6.x</td>
             <td></td>
             <td></td>
            <td>compatible; requires message format 0.11 or higher;<br>enabling exactly-once v2 requires 2.4.x or higher</td>
diff --git a/36/toc.html b/36/toc.html
index 73bd66ae..737ef887 100644
--- a/36/toc.html
+++ b/36/toc.html
@@ -173,7 +173,7 @@
                     <ul>
                         <li><a href="#tiered_storage_overview">Tiered Storage 
Overview</a></li>
                         <li><a 
href="#tiered_storage_config">Configuration</a></li>
-                        <li><a href="#tiered_storage_config_ex">Quick Start 
Example</a></li>
+                        <li><a href="#tiered_storage_config_ex">Configurations 
Example</a></li>
                         <li><a 
href="#tiered_storage_limitation">Limitations</a></li>
                     </ul>
                 </li>
diff --git a/36/upgrade.html b/36/upgrade.html
index cb0ef015..e9a98507 100644
--- a/36/upgrade.html
+++ b/36/upgrade.html
@@ -19,6 +19,7 @@
 
 <script id="upgrade-template" type="text/x-handlebars-template">
 
+
 <h4><a id="upgrade_3_6_1" href="#upgrade_3_6_1">Upgrading to 3.6.1 from any 
version 0.8.x through 3.5.x</a></h4>
 
     <h5><a id="upgrade_361_zk" href="#upgrade_361_zk">Upgrading 
ZooKeeper-based clusters</a></h5>
@@ -62,7 +63,7 @@
         </li>
     </ol>
 
-    <h5><a id="upgrade_360_kraft" href="#upgrade_360_kraft">Upgrading KRaft-based clusters</a></h5>
+    <h5><a id="upgrade_361_kraft" href="#upgrade_361_kraft">Upgrading KRaft-based clusters</a></h5>
    <p><b>If you are upgrading from a version prior to 3.3.0, please see the note in step 3 below. Once you have changed the metadata.version to the latest version, it will not be possible to downgrade to a version prior to 3.3-IV0.</b></p>
 
     <p><b>For a rolling upgrade:</b></p>
@@ -117,10 +118,9 @@
            <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">Tiered Storage Early Access Release Note</a>.
        </li>
        <li>Transaction partition verification (<a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense">KIP-890</a>)
-            has been added to data partitions to prevent hanging transactions. Workloads with compression can experience InvalidRecordExceptions and UnknownServerExceptions.
-            This feature can be disabled by setting <code>transaction.partition.verification.enable</code> to false. Note that the default for 3.6 is true.
-            The configuration can also be updated dynamically and is applied to the broker.
-            This will be fixed in 3.6.1. See <a href="https://issues.apache.org/jira/browse/KAFKA-15653">KAFKA-15653</a> for more details.
+            has been added to data partitions to prevent hanging transactions. This feature is enabled by default and can be disabled by setting <code>transaction.partition.verification.enable</code> to false.
+            The configuration can also be updated dynamically and is applied to the broker. Workloads running on version 3.6.0 with compression can experience
+            InvalidRecordExceptions and UnknownServerExceptions. Upgrading to 3.6.1 or newer or disabling the feature fixes the issue.
         </li>
     </ul>
 
@@ -128,34 +128,34 @@
    All upgrade steps remain same as <a href="#upgrade_3_5_0">upgrading to 3.5.0</a>
    <h5><a id="upgrade_352_notable" href="#upgrade_352_notable">Notable changes in 3.5.2</a></h5>
     <ul>
-        <li>
-            When migrating producer ID blocks from ZK to KRaft, there could be duplicate producer IDs being given to
-            transactional or idempotent producers. This can cause long term problems since the producer IDs are
-            persisted and reused for a long time.
-            See <a href="https://issues.apache.org/jira/browse/KAFKA-15552">KAFKA-15552</a> for more details.
-        </li>
-        <li>
-            In 3.5.0 and 3.5.1, there could be an issue that the empty ISR is returned from controller after AlterPartition request
-            during rolling upgrade. This issue will impact the availability of the topic partition.
-            See <a href="https://issues.apache.org/jira/browse/KAFKA-15353">KAFKA-15353</a> for more details.
-        </li>
-    </ul>
+    <li>
+        When migrating producer ID blocks from ZK to KRaft, there could be duplicate producer IDs being given to
+        transactional or idempotent producers. This can cause long term problems since the producer IDs are
+        persisted and reused for a long time.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15552">KAFKA-15552</a> for more details.
+    </li>
+    <li>
+        In 3.5.0 and 3.5.1, there could be an issue that the empty ISR is returned from controller after AlterPartition request
+        during rolling upgrade. This issue will impact the availability of the topic partition.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15353">KAFKA-15353</a> for more details.
+    </li>
+</ul>
 
 <h4><a id="upgrade_3_5_1" href="#upgrade_3_5_1">Upgrading to 3.5.1 from any 
version 0.8.x through 3.4.x</a></h4>
     All upgrade steps remain same as <a href="#upgrade_3_5_0">upgrading to 
3.5.0</a>
     <h5><a id="upgrade_351_notable" href="#upgrade_351_notable">Notable 
changes in 3.5.1</a></h5>
     <ul>
-        <li>
-            Upgraded the dependency, snappy-java, to a version which is not vulnerable to
-            <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455.</a>
-            You can find more information about the CVE at <a href="https://kafka.apache.org/cve-list#CVE-2023-34455">Kafka CVE list.</a>
-        </li>
-        <li>
-            Fixed a regression introduced in 3.3.0, which caused <code>security.protocol</code> configuration values to be restricted to
-            upper case only. After the fix, <code>security.protocol</code> values are case insensitive.
-            See <a href="https://issues.apache.org/jira/browse/KAFKA-15053">KAFKA-15053</a> for details.
-        </li>
-    </ul>
+    <li>
+        Upgraded the dependency, snappy-java, to a version which is not vulnerable to
+        <a href="https://nvd.nist.gov/vuln/detail/CVE-2023-34455">CVE-2023-34455.</a>
+        You can find more information about the CVE at <a href="https://kafka.apache.org/cve-list#CVE-2023-34455">Kafka CVE list.</a>
+    </li>
+    <li>
+        Fixed a regression introduced in 3.3.0, which caused <code>security.protocol</code> configuration values to be restricted to
+        upper case only. After the fix, <code>security.protocol</code> values are case insensitive.
+        See <a href="https://issues.apache.org/jira/browse/KAFKA-15053">KAFKA-15053</a> for details.
+    </li>
+</ul>
 
 <h4><a id="upgrade_3_5_0" href="#upgrade_3_5_0">Upgrading to 3.5.0 from any 
version 0.8.x through 3.4.x</a></h4>
 
@@ -214,10 +214,8 @@
                 ./bin/kafka-features.sh upgrade --metadata 3.5
             </code>
         </li>
-        <li>Note that cluster metadata downgrade is not supported in this version since it has metadata changes.
-            Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
-            after 3.2.x has a boolean parameter that indicates if there are metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means this version has metadata changes).
-            Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.</li>
+        <li>Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.
+            However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.</li>
     </ol>
 
     <h5><a id="upgrade_350_notable" href="#upgrade_350_notable">Notable 
changes in 3.5.0</a></h5>
@@ -307,10 +305,8 @@
                 ./bin/kafka-features.sh upgrade --metadata 3.4
             </code>
         </li>
-        <li>Note that cluster metadata downgrade is not supported in this version since it has metadata changes.
-            Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
-            after 3.2.x has a boolean parameter that indicates if there are metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means this version has metadata changes).
-            Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.</li>
+        <li>Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.
+            However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.</li>
     </ol>
 
 <h5><a id="upgrade_340_notable" href="#upgrade_340_notable">Notable changes in 
3.4.0</a></h5>
@@ -377,10 +373,7 @@
         ./bin/kafka-features.sh upgrade --metadata 3.3
         </code>
     </li>
-    <li>Note that cluster metadata downgrade is not supported in this version since it has metadata changes.
-        Every <a href="https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java">MetadataVersion</a>
-        after 3.2.x has a boolean parameter that indicates if there are metadata changes (i.e. <code>IBP_3_3_IV3(7, "3.3", "IV3", true)</code> means this version has metadata changes).
-        Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.</li>
+    <li>Note that the cluster metadata version cannot be downgraded to a pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded. However, it is possible to downgrade to production versions such as 3.3-IV0, 3.3-IV1, etc.</li>
 </ol>
 
 <h5><a id="upgrade_331_notable" href="#upgrade_331_notable">Notable changes in 
3.3.1</a></h5>
@@ -467,7 +460,7 @@
             <a href="https://www.slf4j.org/codes.html#no_tlm";>possible 
compatibility issues originating from the logging framework</a>.</li>
         <li>The example connectors, <code>FileStreamSourceConnector</code> and 
<code>FileStreamSinkConnector</code>, have been
             removed from the default classpath. To use them in Kafka Connect 
standalone or distributed mode they need to be
-            explicitly added, for example 
<code>CLASSPATH=./libs/connect-file-3.2.0.jar 
./bin/connect-distributed.sh</code>.</li>
+            explicitly added, for example 
<code>CLASSPATH=./lib/connect-file-3.2.0.jar 
./bin/connect-distributed.sh</code>.</li>
     </ul>
 
 <h4><a id="upgrade_3_1_0" href="#upgrade_3_1_0">Upgrading to 3.1.0 from any 
version 0.8.x through 3.0.x</a></h4>
