gharris1727 commented on code in PR #547:
URL: https://github.com/apache/kafka-site/pull/547#discussion_r1350763659


##########
blog.html:
##########
@@ -22,6 +22,146 @@
         <!--#include virtual="includes/_nav.htm" -->
         <div class="right">
             <h1 class="content-title">Blog</h1>
+            <article>
+                <h2 class="bullet">
+                    <a id="apache_kafka_360_release_announcement"></a>
+                    <a href="#apache_kafka_360_release_announcement">Apache Kafka 3.6.0 Release Announcement</a>
+                </h2>
+                08 Oct 2023 - Satish Duggana (<a href="https://twitter.com/0xeed">@SatishDuggana</a>)
+                <p>We are proud to announce the release of Apache Kafka 3.6.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features. For a full list of changes, be sure to check the <a href="https://downloads.apache.org/kafka/3.6.0/RELEASE_NOTES.html">release notes</a>.</p>
+                <p>See the <a href="https://kafka.apache.org/36/documentation.html#upgrade_3_6_0">Upgrading to 3.6.0 from any version 0.8.x through 3.5.x</a> section in the documentation for the list of notable changes and detailed upgrade steps.</p>
+                <p>
+                    The ability to migrate Kafka clusters from a ZooKeeper metadata system to a KRaft metadata system is
+                    now ready for use in production environments. See the ZooKeeper to KRaft migration
+                    <a href="https://kafka.apache.org/documentation/#kraft_zk_migration">operations documentation</a> for
+                    details. Note that JBOD support is not yet available for KRaft clusters, so clusters
+                    utilizing JBOD cannot be migrated. See <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft">KIP-858</a>
+                    for details regarding KRaft and JBOD.
+                </p>
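As a rough illustration of what enabling the migration looks like (the operations documentation linked above is the authoritative reference), the KRaft controller quorum is started with the migration flag enabled and pointed at the existing ZooKeeper ensemble. Node IDs, hostnames, and ports below are placeholders:

```properties
# On each new KRaft controller (placeholder IDs/hosts; see the migration docs)
process.roles=controller
node.id=3000
controller.quorum.voters=3000@controller1:9093
# Enable the ZooKeeper-to-KRaft migration path
zookeeper.metadata.migration.enable=true
# The existing ZooKeeper ensemble being migrated from
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```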
+                <p>Support for Delegation Tokens in KRaft (<a href="https://issues.apache.org/jira/browse/KAFKA-15219">KAFKA-15219</a>) was completed in 3.6, further closing the feature gap between ZooKeeper-based Kafka clusters and KRaft. Migration of delegation tokens from ZooKeeper to KRaft is also included in 3.6.</p>
+                <p><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage">Tiered Storage</a> is an early access feature. It is currently only suitable for testing in non-production environments. See the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">Early Access Release Notes</a> for more details.</p>
+
+                <p><i>Note: ZooKeeper has been marked as deprecated since the 3.5.0 release and is planned to be removed in Apache Kafka 4.0. For more information, please see the documentation for <a href="/documentation#zk_depr">ZooKeeper Deprecation</a>.</i></p>
+                <h3>Kafka Broker, Controller, Producer, Consumer and Admin Client</h3>
+                <ul>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-405%3A+Kafka+Tiered+Storage">KIP-405</a>:
+                        Kafka Tiered Storage (Early Access): </b><br>Introduces Tiered Storage to Kafka. Note that this
+                        is an early access feature only advised for use in non-production environments (see the
+                        <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes">early
+                        access notes</a> for more information). This feature separates computation
+                        and storage in the broker and enables pluggable storage tiering natively in Kafka. Tiered Storage brings
+                        a seamless extension of storage to remote objects with minimal operational changes.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense">KIP-890</a>:
+                        Transactions Server Side Defense (Part 1): </b><br>Hanging transactions can negatively impact
+                        your read committed consumers and prevent compacted logs from being compacted. KIP-890 helps
+                        address hanging transactions by verifying partition additions. Part 2 of KIP-890 will optimize
+                        verification, which currently adds an extra hop.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=195726330">KIP-797</a>:
+                        Accept duplicate listener on port for IPv4/IPv6: </b><br>Until now, Kafka has not supported
+                        duplicate listeners on the same port. This works when using only a single IP stack, but presents
+                        an issue if you are working with both IPv4 and IPv6. With KIP-797, brokers can be configured
+                        with listeners that have the same port on different IP stacks. This update does not affect
+                        advertised listeners, which already have this feature.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=225152035">KIP-863</a>:
+                        Reduce CompletedFetch#parseRecord() memory copy: </b><br>Reduces memory allocation and improves
+                        performance during record deserialization by using a ByteBuffer instead of byte[] for
+                        deserialization. Updated public interfaces include the Deserializer,
+                        ByteBufferDeserializer, and StringDeserializer classes.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-868+Metadata+Transactions">KIP-868</a>:
+                        Metadata Transactions: </b><br>Improves the overall durability of the KRaft layer by adding
+                        metadata transactions that consist of:
+                        <ul>
+                            <li>BeginTransaction</li>
+                            <li>Number of records</li>
+                            <li>EndTransaction or AbortTransaction</li>
+                        </ul>
+                        KRaft uses record batches as a mechanism for atomicity. However, there is a limit to the
+                        fetch size on the Raft consensus layer, and the controller could generate a set of atomic
+                        records that exceeded this limit. This update introduces marker records that allow larger sets
+                        of atomic records to be sent to the Raft consensus layer in multiple batches, bypassing the
+                        fetch limit.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-902%3A+Upgrade+Zookeeper+to+3.8.2">KIP-902</a>:
+                        Upgrade Zookeeper to 3.8.2: </b><br>Upgrades the ZooKeeper version that is bundled with Kafka to
+                        version 3.8.2. The new version includes several updates and security improvements. This is the
+                        last planned ZooKeeper upgrade, as ZooKeeper will be removed in Apache Kafka
+                        4.0 and replaced with KRaft.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-917%3A+Additional+custom+metadata+for+remote+log+segment">KIP-917</a>:
+                        Additional custom metadata for remote log segment: </b><br>Introduces optional
+                        custom metadata as part of remote log segment metadata. The RemoteStorageManager returns the
+                        optional custom metadata when copyLogSegmentData() is invoked, and it is passed along with the
+                        remote log segment metadata.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-937%3A+Improve+Message+Timestamp+Validation">KIP-937</a>:
+                        Improve Message Timestamp Validation: </b><br>Improves data integrity and prevents potential
+                        pitfalls caused by inaccurate timestamp handling by adding more validation logic for message
+                        timestamps. While past timestamps are a normal occurrence in Kafka, future timestamps might
+                        represent an incorrectly formatted integer. KIP-937 rejects messages with future timestamps and
+                        provides a descriptive exception.
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-938%3A+Add+more+metrics+for+measuring+KRaft+performance">KIP-938</a>:
+                        Add more metrics for measuring KRaft performance: </b><br>Adds new controller, loader, and
+                        snapshot emitter KRaft performance metrics.
+                    </li>
+                </ul>
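To make the KIP-863 change concrete, the toy sketch below (plain Java, not the Kafka Deserializer API itself; the class name `ByteBufferDecode` is illustrative) contrasts the byte[]-based path, which copies the payload out of the buffer before decoding, with decoding directly from a ByteBuffer:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferDecode {
    // byte[]-style path: allocates an intermediate array and copies into it.
    static String decodeViaArray(ByteBuffer buffer) {
        byte[] copy = new byte[buffer.remaining()];
        buffer.duplicate().get(copy); // extra allocation + copy
        return new String(copy, StandardCharsets.UTF_8);
    }

    // ByteBuffer-style path: decodes the buffer contents without an
    // intermediate byte[] copy, the kind of saving KIP-863 is after.
    static String decodeInPlace(ByteBuffer buffer) {
        return StandardCharsets.UTF_8.decode(buffer.duplicate()).toString();
    }

    public static void main(String[] args) {
        ByteBuffer payload = ByteBuffer.wrap("hello kafka".getBytes(StandardCharsets.UTF_8));
        System.out.println(decodeViaArray(payload));
        System.out.println(decodeInPlace(payload));
    }
}
```

Both paths yield the same string; the difference is purely in allocation behavior, which is why the KIP reports it as a memory-performance win.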
+                <h3>Kafka Streams</h3>
+                <ul>
+                <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-923%3A+Add+A+Grace+Period+to+Stream+Table+Join">KIP-923</a>: Add A Grace Period to Stream Table Join: </b>Adds a grace period to stream-table joins to improve table-side out-of-order data handling. The Joined object has a new method called withGracePeriod that causes the table-side lookup to only happen after the grace period has passed.</li>
+                <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-941%3A+Range+queries+to+accept+null+lower+and+upper+bounds">KIP-941</a>: Range queries to accept null lower and upper bounds: </b>Previously, RangeQuery did not support null to specify “no upper/lower bound”.
+                    KIP-941 allows users to pass null into withRange(...) for lower/upper bounds to specify a full or half-open range:
+                    <ul>
+                    <li><code>withRange(null, null)</code> == <code>withNoBounds()</code></li>
+                    <li><code>withRange(lower, null)</code> == <code>withLowerBound(lower)</code></li>
+                    <li><code>withRange(null, upper)</code> == <code>withUpperBound(upper)</code></li>
+                    </ul>
+                </li>
+                </ul>
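The null-bound convention from KIP-941 can be sketched outside Kafka Streams entirely. The snippet below is a self-contained toy (the class `NullBoundRange` is illustrative, not the RangeQuery API): a null bound on either side simply means "unbounded" on that side.

```java
import java.util.List;
import java.util.stream.Collectors;

public class NullBoundRange {
    // Mirrors the KIP-941 convention: null lower/upper bound = unbounded side.
    static List<String> range(List<String> sortedKeys, String lower, String upper) {
        return sortedKeys.stream()
                .filter(k -> lower == null || k.compareTo(lower) >= 0)
                .filter(k -> upper == null || k.compareTo(upper) <= 0)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> keys = List.of("a", "b", "c", "d");
        System.out.println(range(keys, null, null)); // full range, like withNoBounds()
        System.out.println(range(keys, "b", null));  // half-open, like withLowerBound("b")
        System.out.println(range(keys, null, "c"));  // half-open, like withUpperBound("c")
    }
}
```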
+                <h3>Kafka Connect</h3>
+                <ul>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-793%3A+Allow+sink+connectors+to+be+used+with+topic-mutating+SMTs">KIP-793</a>: Allow sink connectors to be used with topic-mutating SMTs: </b><br>Adds support for topic-mutating SMTs for async sink connectors. This addresses an incompatibility between sink connectors that override the SinkTask::preCommit method and SMTs that mutate the topic field of a SinkRecord.</li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-875%3A+First-class+offsets+support+in+Kafka+Connect">KIP-875</a>: First-class offsets support in Kafka Connect: </b><br>Provides first-class admin support for offsets in Kafka Connect. KIP-875 Part 1 added endpoints to get offsets and a new STOPPED state for connectors. The alter offsets and reset offsets endpoints have now been added.
+                        <table>
+                            <tr>
+                                <th>Action</th>
+                                <th>Description</th>
+                            </tr>
+                            <tr>
+                                <td>GET /connectors/{connector}/offsets</td>
+                                <td>Retrieve the offsets for a connector; the connector must exist</td>
+                            </tr>
+                            <tr>
+                                <td>PATCH /connectors/{connector}/offsets</td>
+                                <td>Alter the offsets for a connector; the connector must exist and must be in the STOPPED state</td>
+                            </tr>
+                            <tr>
+                                <td>DELETE /connectors/{connector}/offsets</td>
+                                <td>Reset the offsets for a connector; the connector must exist and must be in the STOPPED state</td>
+                            </tr>
+                            <tr>
+                                <td>PUT /connectors/{connector}/pause</td>
+                                <td>Pause the connector; the connector must exist</td>
+                            </tr>
+                        </table>
+                    </li>
+                    <li><b><a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-898%3A+Modernize+Connect+plugin+discovery">KIP-898</a>: Modernize Connect plugin discovery: </b><br>With KIP-898, Connect workers now read from ServiceLoader manifests and module info directly during startup for more efficient plugin class discovery. Note that this update requires connector developers to add service declarations to their plugins.</li>

Review Comment:
   Alternatively, just soften "requires" so that someone reading the release blog and not the KIP doesn't get the impression that this is a breaking change.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
