mimaison commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499499955


##########
36/documentation.html:
##########
@@ -54,12 +54,10 @@ <h3>Kafka 3.6 Documentation</h3>
                     <a href="/26/documentation.html">2.6.X</a>, 
                     <a href="/27/documentation.html">2.7.X</a>,
                     <a href="/28/documentation.html">2.8.X</a>,
-                    <a href="/30/documentation.html">3.0.X</a>,
-                    <a href="/31/documentation.html">3.1.X</a>,
-                    <a href="/32/documentation.html">3.2.X</a>,
-                    <a href="/33/documentation.html">3.3.X</a>,
-                    <a href="/34/documentation.html">3.4.X</a>,
-                    <a href="/35/documentation.html">3.5.X</a>.
+                    <a href="/30/documentation.html">3.0.X</a>.
+                    <a href="/31/documentation.html">3.1.X</a>.
+                    <a href="/32/documentation.html">3.2.X</a>.
+                    <a href="/33/documentation.html">3.3.X</a>.

Review Comment:
   This is the 3.6 documentation, so it should point to all previous releases, 
including 3.4 and 3.5. Why are we removing them?
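
For reference, the list being removed (which, per the comment above, should stay 
and should keep linking 3.4 and 3.5) reads:

```html
<a href="/30/documentation.html">3.0.X</a>,
<a href="/31/documentation.html">3.1.X</a>,
<a href="/32/documentation.html">3.2.X</a>,
<a href="/33/documentation.html">3.3.X</a>,
<a href="/34/documentation.html">3.4.X</a>,
<a href="/35/documentation.html">3.5.X</a>.
```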



##########
36/generated/connect_rest.yaml:
##########
@@ -8,7 +8,7 @@ info:
     name: Apache 2.0
     url: https://www.apache.org/licenses/LICENSE-2.0.html
   title: Kafka Connect REST API
-  version: 3.6.1
+  version: 3.6.2-SNAPSHOT

Review Comment:
   Again, we don't want this change. The docs should cover the last released 
version for 3.6, hence 3.6.1.



##########
36/ops.html:
##########
@@ -3984,95 +3984,27 @@ <h5 class="anchor-heading"><a 
id="tiered_storage_config_topic" class="anchor-lin
   If unset, The value in <code>retention.ms</code> and 
<code>retention.bytes</code> will be used.
 </p>
 
-<h4 class="anchor-heading"><a id="tiered_storage_config_ex" 
class="anchor-link"></a><a href="#tiered_storage_config_ex">Quick Start 
Example</a></h4>
-
-<p>Apache Kafka doesn't provide an out-of-the-box RemoteStorageManager 
implementation. To have a preview of the tiered storage
-  feature, the <a 
href="https://github.com/apache/kafka/blob/trunk/storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorage.java">LocalTieredStorage</a>
-  implemented for integration test can be used, which will create a temporary 
directory in local storage to simulate the remote storage.
-</p>
-
-<p>To adopt the `LocalTieredStorage`, the test library needs to be built 
locally</p>
-<pre># please checkout to the specific version tag you're using before 
building it
-# ex: `git checkout 3.6.1`
-./gradlew clean :storage:testJar</pre>
-<p>After build successfully, there should be a `kafka-storage-x.x.x-test.jar` 
file under `storage/build/libs`.
-Next, setting configurations in the broker side to enable tiered storage 
feature.</p>
+<h4 class="anchor-heading"><a id="tiered_storage_config_ex" 
class="anchor-link"></a><a href="#tiered_storage_config_ex">Configurations 
Example</a></h4>
 
+<p>Here is a sample configuration to enable tiered storage feature in broker 
side:
 <pre>
 # Sample Zookeeper/Kraft broker server.properties listening on 
PLAINTEXT://:9092
 remote.log.storage.system.enable=true
-
-# Setting the listener for the clients in RemoteLogMetadataManager to talk to 
the brokers.
+# Please provide the implementation for remoteStorageManager. This is the 
mandatory configuration for tiered storage.
+# 
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.NoOpRemoteStorageManager
+# Using the "PLAINTEXT" listener for the clients in RemoteLogMetadataManager 
to talk to the brokers.
 remote.log.metadata.manager.listener.name=PLAINTEXT
-
-# Please provide the implementation info for remoteStorageManager.
-# This is the mandatory configuration for tiered storage.
-# Here, we use the `LocalTieredStorage` built above.
-remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
-remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-x.x.x-test.jar
-
-# These 2 prefix are default values, but customizable
-remote.log.storage.manager.impl.prefix=rsm.config.
-remote.log.metadata.manager.impl.prefix=rlmm.config.
-
-# Configure the directory used for `LocalTieredStorage`
-# Note, please make sure the brokers need to have access to this directory
-rsm.config.dir=/tmp/kafka-remote-storage
-
-# This needs to be changed if number of brokers in the cluster is more than 1
-rlmm.config.remote.log.metadata.topic.replication.factor=1
-
-# Try to speed up the log retention check interval for testing
-log.retention.check.interval.ms=1000
 </pre>
 </p>
 
-<p>Following <a href="#quickstart_startserver">quick start guide</a> to start 
up the kafka environment.
-  Then, create a topic with tiered storage enabled with configs:
-
-<pre>
-# remote.storage.enable=true -> enables tiered storage on the topic
-# local.retention.ms=1000 -> The number of milliseconds to keep the local log 
segment before it gets deleted.
-  Note that a local log segment is eligible for deletion only after it gets 
uploaded to remote.
-# retention.ms=3600000 -> when segments exceed this time, the segments in 
remote storage will be deleted
-# segment.bytes=1048576 -> for test only, to speed up the log segment rolling 
interval
-# file.delete.delay.ms=10000 -> for test only, to speed up the local-log 
segment file delete delay
-
-bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 \
---config remote.storage.enable=true --config local.retention.ms=1000 --config 
retention.ms=3600000 \
---config segment.bytes=1048576 --config file.delete.delay.ms=1000
+<p>After broker is started, creating a topic with tiered storage enabled, and 
a small log time retention value to try this feature:
+<pre>bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 --config remote.storage.enable=true --config 
local.retention.ms=1000
 </pre>
 </p>
 
-<p>Try to send messages to the `tieredTopic` topic to roll the log segment:</p>

Review Comment:
   Yes, I'd keep the larger section we currently have in the docs.
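
To recap the section being removed: the quick start wires the test-only 
`LocalTieredStorage` into the broker with roughly these settings (collected from 
the deleted lines; the jar path and `x.x.x` version are placeholders for 
whatever `./gradlew clean :storage:testJar` produces locally):

```properties
# Enable tiered storage and point it at the LocalTieredStorage test jar
remote.log.storage.system.enable=true
remote.log.metadata.manager.listener.name=PLAINTEXT
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-x.x.x-test.jar

# Default, customizable property prefixes
remote.log.storage.manager.impl.prefix=rsm.config.
remote.log.metadata.manager.impl.prefix=rlmm.config.

# Local directory simulating remote storage; brokers need access to it
rsm.config.dir=/tmp/kafka-remote-storage

# Raise this if the cluster has more than one broker
rlmm.config.remote.log.metadata.topic.replication.factor=1

# Test-only: speed up the retention check
log.retention.check.interval.ms=1000
```

That end-to-end walkthrough (build the jar, configure the broker, create a 
topic, roll a segment) is what the larger section provides, which is why it is 
worth keeping.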



##########
36/upgrade.html:
##########
@@ -467,7 +460,7 @@ <h5><a id="upgrade_320_notable" 
href="#upgrade_320_notable">Notable changes in 3
            <a href="https://www.slf4j.org/codes.html#no_tlm">possible 
compatibility issues originating from the logging framework</a>.</li>
         <li>The example connectors, <code>FileStreamSourceConnector</code> and 
<code>FileStreamSinkConnector</code>, have been
             removed from the default classpath. To use them in Kafka Connect 
standalone or distributed mode they need to be
-            explicitly added, for example 
<code>CLASSPATH=./libs/connect-file-3.2.0.jar 
./bin/connect-distributed.sh</code>.</li>
+            explicitly added, for example 
<code>CLASSPATH=./lib/connect-file-3.2.0.jar 
./bin/connect-distributed.sh</code>.</li>

Review Comment:
   The correct path is `./libs`, so we should not change this line.
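
A quick sketch of why the original line is right (assumptions: `KAFKA_DIST` is 
an unpacked Kafka 3.2.0 distribution root; defaults to `.` here): the example 
connector jar ships under `libs/`, so the classpath entry must say `libs`, not 
`lib`.

```shell
# The FileStream example connectors live under libs/ in the distribution,
# so the classpath entry uses "libs" (the PR's "lib" would not resolve).
KAFKA_DIST=${KAFKA_DIST:-.}   # assumption: points at the distribution root
CONNECT_FILE_JAR="$KAFKA_DIST/libs/connect-file-3.2.0.jar"
echo "CLASSPATH=$CONNECT_FILE_JAR"
# prints CLASSPATH=./libs/connect-file-3.2.0.jar when KAFKA_DIST is unset

# With a real distribution you would then launch (not executed here):
# CLASSPATH="$CONNECT_FILE_JAR" ./bin/connect-distributed.sh config/connect-distributed.properties
```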



##########
36/documentation.html:
##########
@@ -33,7 +33,7 @@
     <!--//#include virtual="../includes/_docs_banner.htm" -->
     
     <h1>Documentation</h1>
-    <h3>Kafka 3.6 Documentation</h3>
+    <h3>Kafka 3.4 Documentation</h3>

Review Comment:
   This does not seem right. This is the 3.6 documentation, so it should be 
`Kafka 3.6`. Why are we making this change?



##########
36/js/templateData.js:
##########
@@ -19,6 +19,6 @@ limitations under the License.
 var context={
     "version": "36",
     "dotVersion": "3.6",
-    "fullDotVersion": "3.6.1",
+    "fullDotVersion": "3.6.2-SNAPSHOT",

Review Comment:
   Yes, we want this to stay `3.6.1`.
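
That is, a sketch of how this block should continue to read after reverting 
(any further fields of the real file are elided):

```js
var context = {
    "version": "36",
    "dotVersion": "3.6",
    "fullDotVersion": "3.6.1",  // last released 3.6 version, not a SNAPSHOT
    // ...rest of the file unchanged
};
```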



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
