This is an automated email from the ASF dual-hosted git repository.

viktor pushed a commit to branch 4.0
in repository https://gitbox.apache.org/repos/asf/kafka.git

commit b53bba7f3647da86699b4cddd9bb2c590ecedba3
Author: Viktor Somogyi-Vass <[email protected]>
AuthorDate: Wed Feb 11 14:28:29 2026 +0100

    MINOR: Update version in docs
---
 docs/apis/_index.md                               | 10 +++++-----
 docs/getting-started/docker.md                    |  8 ++++----
 docs/getting-started/quickstart.md                | 18 +++++++++---------
 docs/getting-started/upgrade.md                   | 10 +++++-----
 docs/operations/tiered-storage.md                 |  4 ++--
 docs/streams/developer-guide/datatypes.md         |  2 +-
 docs/streams/developer-guide/dsl-api.md           |  6 +++---
 docs/streams/developer-guide/testing.md           |  2 +-
 docs/streams/developer-guide/write-streams-app.md | 12 ++++++------
 docs/streams/quickstart.md                        |  6 +++---
 docs/streams/tutorial.md                          |  2 +-
 docs/streams/upgrade-guide.md                     | 12 ++++++------
 12 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/docs/apis/_index.md b/docs/apis/_index.md
index dceba612b33..411e5afce77 100644
--- a/docs/apis/_index.md
+++ b/docs/apis/_index.md
@@ -47,7 +47,7 @@ To use the producer, you can use the following maven 
dependency:
     <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
-       <version>4.0.1</version>
+       <version>4.0.2</version>
     </dependency>
 
 # Consumer API
@@ -62,7 +62,7 @@ To use the consumer, you can use the following maven 
dependency:
     <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
-       <version>4.0.1</version>
+       <version>4.0.2</version>
     </dependency>
 
 # Streams API
@@ -79,7 +79,7 @@ To use Kafka Streams you can use the following maven 
dependency:
     <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams</artifactId>
-       <version>4.0.1</version>
+       <version>4.0.2</version>
     </dependency>
 
 When using Scala you may optionally include the `kafka-streams-scala` library. 
Additional documentation on using the Kafka Streams DSL for Scala is available 
[in the developer 
guide](/40/documentation/streams/developer-guide/dsl-api.html#scala-dsl). 
@@ -90,7 +90,7 @@ To use Kafka Streams DSL for Scala 2.13 you can use the 
following maven dependen
     <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams-scala_2.13</artifactId>
-       <version>4.0.1</version>
+       <version>4.0.2</version>
     </dependency>
 
 # Connect API
@@ -111,7 +111,7 @@ To use the Admin API, add the following Maven dependency:
     <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
-       <version>4.0.1</version>
+       <version>4.0.2</version>
     </dependency>
 
 For more information about the Admin APIs, see the 
[javadoc](/40/javadoc/index.html?org/apache/kafka/clients/admin/Admin.html 
"Kafka 4.0 Javadoc"). 
diff --git a/docs/getting-started/docker.md b/docs/getting-started/docker.md
index f78f1d91501..c645259bc27 100644
--- a/docs/getting-started/docker.md
+++ b/docs/getting-started/docker.md
@@ -33,7 +33,7 @@ type: docs
 Docker image can be pulled from Docker Hub using the following command: 
     
     
-    $ docker pull apache/kafka:4.0.1
+    $ docker pull apache/kafka:4.0.2
 
 If you want to fetch the latest version of the Docker image use following 
command: 
     
@@ -43,7 +43,7 @@ If you want to fetch the latest version of the Docker image 
use following comman
 To start the Kafka container using this Docker image with default configs and 
on default port 9092: 
     
     
-    $ docker run -p 9092:9092 apache/kafka:4.0.1
+    $ docker run -p 9092:9092 apache/kafka:4.0.2
 
 ## GraalVM Based Native Apache Kafka Docker Image
 
@@ -53,7 +53,7 @@ NOTE: This image is experimental and intended for local 
development and testing
 Docker image can be pulled from Docker Hub using the following command: 
     
     
-    $ docker pull apache/kafka-native:4.0.1
+    $ docker pull apache/kafka-native:4.0.2
 
 If you want to fetch the latest version of the Docker image use following 
command: 
     
@@ -63,7 +63,7 @@ If you want to fetch the latest version of the Docker image 
use following comman
 To start the Kafka container using this Docker image with default configs and 
on default port 9092: 
     
     
-    $ docker run -p 9092:9092 apache/kafka-native:4.0.1
+    $ docker run -p 9092:9092 apache/kafka-native:4.0.2
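
To sanity-check a broker started from either image above, a minimal producer sketch can be used; the `localhost:9092` address and the `test-topic` name are placeholders, not part of this change:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class DockerSmokeTest {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // One record is enough to confirm the container is reachable on port 9092.
                producer.send(new ProducerRecord<>("test-topic", "key", "hello"));
                producer.flush();
            }
        }
    }
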
 
 ## Usage guide
 
diff --git a/docs/getting-started/quickstart.md 
b/docs/getting-started/quickstart.md
index e0d18c697df..15f9ead96fc 100644
--- a/docs/getting-started/quickstart.md
+++ b/docs/getting-started/quickstart.md
@@ -28,11 +28,11 @@ type: docs
 
 ## Step 1: Get Kafka
 
-[Download](https://www.apache.org/dyn/closer.cgi?path=/kafka/4.0.1/kafka_2.13-4.0.1.tgz) the latest Kafka release and extract it: 
+[Download](https://www.apache.org/dyn/closer.cgi?path=/kafka/4.0.2/kafka_2.13-4.0.2.tgz) the latest Kafka release and extract it:
     
     
-    $ tar -xzf kafka_2.13-4.0.1.tgz
-    $ cd kafka_2.13-4.0.1
+    $ tar -xzf kafka_2.13-4.0.2.tgz
+    $ cd kafka_2.13-4.0.2
 
 ## Step 2: Start the Kafka environment
 
@@ -64,24 +64,24 @@ Once the Kafka server has successfully launched, you will 
have a basic Kafka env
 Get the Docker image:
     
     
-    $ docker pull apache/kafka:4.0.1
+    $ docker pull apache/kafka:4.0.2
 
 Start the Kafka Docker container: 
     
     
-    $ docker run -p 9092:9092 apache/kafka:4.0.1
+    $ docker run -p 9092:9092 apache/kafka:4.0.2
 
 ### Using GraalVM Based Native Apache Kafka Docker Image
 
 Get the Docker image:
     
     
-    $ docker pull apache/kafka-native:4.0.1
+    $ docker pull apache/kafka-native:4.0.2
 
 Start the Kafka Docker container:
     
     
-    $ docker run -p 9092:9092 apache/kafka-native:4.0.1
+    $ docker run -p 9092:9092 apache/kafka-native:4.0.2
 
 ## Step 3: Create a topic to store your events
 
@@ -135,12 +135,12 @@ You probably have lots of data in existing systems like 
relational databases or
 
 In this quickstart we'll see how to run Kafka Connect with simple connectors 
that import data from a file to a Kafka topic and export data from a Kafka 
topic to a file. 
 
-First, make sure to add `connect-file-4.0.1.jar` to the `plugin.path` property in the Connect worker's configuration. For the purpose of this quickstart we'll use a relative path and consider the connectors' package as an uber jar, which works when the quickstart commands are run from the installation directory. However, it's worth noting that for production deployments using absolute paths is always preferable. See [plugin.path](../../configuration/kafka-connect-configs/#connectconfigs_ [...]
+First, make sure to add `connect-file-4.0.2.jar` to the `plugin.path` property in the Connect worker's configuration. For the purpose of this quickstart we'll use a relative path and consider the connectors' package as an uber jar, which works when the quickstart commands are run from the installation directory. However, it's worth noting that for production deployments using absolute paths is always preferable. See [plugin.path](../../configuration/kafka-connect-configs/#connectconfigs_ [...]
 
 Edit the `config/connect-standalone.properties` file, add or change the 
`plugin.path` configuration property match the following, and save the file: 
     
     
-    $ echo "plugin.path=libs/connect-file-4.0.1.jar" >> 
config/connect-standalone.properties
+    $ echo "plugin.path=libs/connect-file-4.0.2.jar" >> 
config/connect-standalone.properties
 
 Then, start by creating some seed data to test with: 
     
diff --git a/docs/getting-started/upgrade.md b/docs/getting-started/upgrade.md
index 796e7b91cb7..b265a3e3b1a 100644
--- a/docs/getting-started/upgrade.md
+++ b/docs/getting-started/upgrade.md
@@ -26,9 +26,9 @@ type: docs
 -->
 
 
-## Upgrading to 4.0.1
+## Upgrading to 4.0.2
 
-### Upgrading Clients to 4.0.1
+### Upgrading Clients to 4.0.2
 
 **For a rolling upgrade:**
 
@@ -37,7 +37,7 @@ type: docs
 
 
 
-### Upgrading Servers to 4.0.1 from any version 3.3.x through 3.9.x
+### Upgrading Servers to 4.0.2 from any version 3.3.x through 3.9.x
 
 Note: Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been 
removed. As such, **broker upgrades to 4.0.0 (and higher) require KRaft mode 
and the software and metadata versions must be at least 3.3.x** (the first 
version when KRaft mode was deemed production ready). For clusters in KRaft 
mode with versions older than 3.3.x, we recommend upgrading to 3.9.x before 
upgrading to 4.0.x. Clusters in ZooKeeper mode have to be [migrated to KRaft 
mode](/40/operations/kraft/#zookeeper [...]
 
@@ -67,7 +67,7 @@ Note: Apache Kafka 4.0 only supports KRaft mode - ZooKeeper 
mode has been remove
 ### Notable changes in 4.0.0
 
   * Old protocol API versions have been removed. Users should ensure brokers 
are version 2.1 or higher before upgrading Java clients (including Connect and 
Kafka Streams which use the clients internally) to 4.0. Similarly, users should 
ensure their Java clients (including Connect and Kafka Streams) version is 2.1 
or higher before upgrading brokers to 4.0. Finally, care also needs to be taken 
when it comes to kafka clients that are not part of Apache Kafka, please see 
[KIP-896](https://cw [...]
-  * Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been removed. About version upgrade, check [Upgrading to 4.0.1 from any version 3.3.x through 3.9.x](/40/getting-started/upgrade/#upgrading-servers-to-401-from-any-version-33x-through-39x) for more info. 
+  * Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been removed. About version upgrade, check [Upgrading to 4.0.2 from any version 3.3.x through 3.9.x](/40/getting-started/upgrade/#upgrading-servers-to-402-from-any-version-33x-through-39x) for more info.
   * Apache Kafka 4.0 ships with a brand-new group coordinator implementation 
(See 
[here](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=217387038#KIP848:TheNextGenerationoftheConsumerRebalanceProtocol-GroupCoordinator)).
 Functionally speaking, it implements all the same APIs. There are reasonable 
defaults, but the behavior of the new group coordinator can be tuned by setting 
the configurations with prefix `group.coordinator`. 
   * The Next Generation of the Consumer Rebalance Protocol 
([KIP-848](https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol))
 is now Generally Available (GA) in Apache Kafka 4.0. The protocol is 
automatically enabled on the server when the upgrade to 4.0 is finalized. Note 
that once the new protocol is used by consumer groups, the cluster can only 
downgrade to version 3.4.1 or newer. Check 
[here](/40/documentation.html#consume [...]
   * Transactions Server Side Defense 
([KIP-890](https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense))
 brings a strengthened transactional protocol to Apache Kafka 4.0. The new and 
improved transactional protocol is enabled when the upgrade to 4.0 is 
finalized. When using 4.0 producer clients, the producer epoch is bumped on 
every transaction to ensure every transaction includes the intended messages 
and duplicates are not written as part of the n [...]
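
For the KIP-848 rebalance protocol mentioned above, a client-side sketch of opting a consumer into the new protocol via `group.protocol` (a config defined by KIP-848, not by this diff); the address, group id, and topic are placeholders:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class NewProtocolConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
            // "consumer" selects the KIP-848 protocol; "classic" keeps the old one.
            props.put(ConsumerConfig.GROUP_PROTOCOL_CONFIG, "consumer");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("test-topic"));
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }
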
@@ -167,7 +167,7 @@ Note: Apache Kafka 4.0 only supports KRaft mode - ZooKeeper 
mode has been remove
       * **TransactionAbortableException:** Specifically introduced to signal 
errors that should lead to transaction abortion, ensuring this exception is 
properly handled is critical for maintaining the integrity of transactional 
processing.
       * To ensure seamless operation and compatibility with future Kafka 
versions, developers are encouraged to update their error-handling logic to 
treat both exceptions as triggers for aborting transactions. This approach is 
pivotal for preserving exactly-once semantics.
       * See 
[KIP-890](https://cwiki.apache.org/confluence/display/KAFKA/KIP-890%3A+Transactions+Server-Side+Defense)
 and 
[KIP-1050](https://cwiki.apache.org/confluence/display/KAFKA/KIP-1050%3A+Consistent+error+handling+for+Transactions)
 for more details 
-    * The filename for rotated `state-change.log` files incorrectly rotates to `stage-change.log.[date]` (changing state to stage). This issue is corrected in 4.0.1. See [KAFKA-19576](https://issues.apache.org/jira/browse/KAFKA-19576) for details. 
+    * The filename for rotated `state-change.log` files incorrectly rotates to `stage-change.log.[date]` (changing state to stage). This issue is corrected in 4.0.2. See [KAFKA-19576](https://issues.apache.org/jira/browse/KAFKA-19576) for details.
 
 
 
diff --git a/docs/operations/tiered-storage.md 
b/docs/operations/tiered-storage.md
index 517bdcb8d05..43301f9e04f 100644
--- a/docs/operations/tiered-storage.md
+++ b/docs/operations/tiered-storage.md
@@ -63,7 +63,7 @@ To adopt the `LocalTieredStorage`, the test library needs to 
be built locally
     
     
     # please checkout to the specific version tag you're using before building 
it
-    # ex: `git checkout 4.0.1`
+    # ex: `git checkout 4.0.2`
     $ ./gradlew clean :storage:testJar
 
 After build successfully, there should be a `kafka-storage-x.x.x-test.jar` 
file under `storage/build/libs`. Next, setting configurations in the broker 
side to enable tiered storage feature.
@@ -79,7 +79,7 @@ After build successfully, there should be a 
`kafka-storage-x.x.x-test.jar` file
     # This is the mandatory configuration for tiered storage.
     # Here, we use the `LocalTieredStorage` built above.
     
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
-    remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-4.0.1-test.jar
+    remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-4.0.2-test.jar
     
     # These 2 prefix are default values, but customizable
     remote.log.storage.manager.impl.prefix=rsm.config.
diff --git a/docs/streams/developer-guide/datatypes.md 
b/docs/streams/developer-guide/datatypes.md
index f37953d4c1d..eea7cc6e8cf 100644
--- a/docs/streams/developer-guide/datatypes.md
+++ b/docs/streams/developer-guide/datatypes.md
@@ -92,7 +92,7 @@ Apache Kafka includes several built-in serde implementations 
for Java primitives
     <dependency>
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka-clients</artifactId>
-        <version>4.0.1</version>
+        <version>4.0.2</version>
     </dependency>
 
 This artifact provides the following serde implementations under the package 
[org.apache.kafka.common.serialization](https://github.com/apache/kafka/blob/4.0/clients/src/main/java/org/apache/kafka/common/serialization),
 which you can leverage when e.g., defining default serializers in your Streams 
configuration.  
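
As an illustrative sketch (not part of this change), the built-in serdes can be wired in as Streams defaults like this; the application id and bootstrap address are placeholders:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsConfig;

    public class DefaultSerdesExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "serde-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Built-in serdes used as application-wide defaults.
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
            // Serdes can also be supplied per operation, e.g. Consumed.with(Serdes.String(), Serdes.Long()).
        }
    }
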
diff --git a/docs/streams/developer-guide/dsl-api.md 
b/docs/streams/developer-guide/dsl-api.md
index 0fe0438c846..ff4a15ce58f 100644
--- a/docs/streams/developer-guide/dsl-api.md
+++ b/docs/streams/developer-guide/dsl-api.md
@@ -5010,7 +5010,7 @@ The following deprecated methods are no longer available 
in Kafka Streams:
 
 The Processor API now serves as a unified replacement for all these methods. 
It simplifies the API surface while maintaining support for both stateless and 
stateful operations.
 
-**CAUTION:** If you are using `KStream.transformValues()` or `KStream.flatTransformValues()` and you have the "merge repartition topics" optimization enabled, rewriting your program to `KStream.processValues()` might not be safe due to [KAFKA-19668](https://issues.apache.org/jira/browse/KAFKA-19668). For this case, you should not upgrade to Kafka Streams 4.0.0 or 4.1.0, but use Kafka Streams 4.0.1 or 4.1.1 instead, which contain a fix. Note, that the fix is not enabled by default for bac [...]
+**CAUTION:** If you are using `KStream.transformValues()` or `KStream.flatTransformValues()` and you have the "merge repartition topics" optimization enabled, rewriting your program to `KStream.processValues()` might not be safe due to [KAFKA-19668](https://issues.apache.org/jira/browse/KAFKA-19668). For this case, you should not upgrade to Kafka Streams 4.0.0 or 4.1.0, but use Kafka Streams 4.0.2 or 4.1.1 instead, which contain a fix. Note, that the fix is not enabled by default for bac [...]
     
     
     final Properties properties = new Properties();
@@ -5785,7 +5785,7 @@ The library is cross-built with Scala 2.12 and 2.13. To 
reference the library co
     <dependency>
       <groupId>org.apache.kafka</groupId>
       <artifactId>kafka-streams-scala_2.13</artifactId>
-      <version>4.0.1</version>
+      <version>4.0.2</version>
     </dependency>
 
 To use the library compiled against Scala 2.12 replace the `artifactId` with 
`kafka-streams-scala_2.12`.
@@ -5793,7 +5793,7 @@ To use the library compiled against Scala 2.12 replace 
the `artifactId` with `ka
 When using SBT then you can reference the correct library using the following:
     
     
-    libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "4.0.1"
+    libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "4.0.2"
 
 ## Sample Usage
 
diff --git a/docs/streams/developer-guide/testing.md 
b/docs/streams/developer-guide/testing.md
index ef668229263..c86b6c235cc 100644
--- a/docs/streams/developer-guide/testing.md
+++ b/docs/streams/developer-guide/testing.md
@@ -39,7 +39,7 @@ To test a Kafka Streams application, Kafka provides a 
test-utils artifact that c
     <dependency>
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka-streams-test-utils</artifactId>
-        <version>4.0.1</version>
+        <version>4.0.2</version>
         <scope>test</scope>
     </dependency>
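
A sketch of how the test-utils artifact above is typically used with `TopologyTestDriver`; the uppercase-mapping topology and topic names are placeholders, not part of this change:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.TestInputTopic;
    import org.apache.kafka.streams.TestOutputTopic;
    import org.apache.kafka.streams.TopologyTestDriver;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;

    public class UppercaseTopologyTest {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
                   .mapValues(v -> v.toUpperCase())
                   .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

            // The driver feeds records through the topology without a running broker.
            try (TopologyTestDriver driver = new TopologyTestDriver(builder.build())) {
                TestInputTopic<String, String> in =
                    driver.createInputTopic("input-topic", new StringSerializer(), new StringSerializer());
                TestOutputTopic<String, String> out =
                    driver.createOutputTopic("output-topic", new StringDeserializer(), new StringDeserializer());
                in.pipeInput("k", "hello");
                System.out.println(out.readKeyValue()); // KeyValue(k, HELLO)
            }
        }
    }
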
 
diff --git a/docs/streams/developer-guide/write-streams-app.md 
b/docs/streams/developer-guide/write-streams-app.md
index 13f14218e84..c5f2d35aab9 100644
--- a/docs/streams/developer-guide/write-streams-app.md
+++ b/docs/streams/developer-guide/write-streams-app.md
@@ -73,7 +73,7 @@ Description
 </td>  
 <td>
 
-`4.0.1`
+`4.0.2`
 </td>  
 <td>
 
@@ -90,7 +90,7 @@ Description
 </td>  
 <td>
 
-`4.0.1`
+`4.0.2`
 </td>  
 <td>
 
@@ -107,7 +107,7 @@ Description
 </td>  
 <td>
 
-`4.0.1`
+`4.0.2`
 </td>  
 <td>
 
@@ -124,17 +124,17 @@ Example `pom.xml` snippet when using Maven:
     <dependency>
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka-streams</artifactId>
-        <version>4.0.1</version>
+        <version>4.0.2</version>
     </dependency>
     <dependency>
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka-clients</artifactId>
-        <version>4.0.1</version>
+        <version>4.0.2</version>
     </dependency>
         <dependency>
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka-streams-scala_2.13</artifactId>
-        <version>4.0.1</version>
+        <version>4.0.2</version>
     </dependency>
 
 # Using Kafka Streams within your application code
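
As an illustrative skeleton for the section above (not part of this change), a minimal Streams application wired against the dependencies in the pom snippet; the application id, bootstrap address, and topic names are placeholders:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class MyStreamsApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic").to("output-topic"); // trivially copy records

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
            streams.start();
        }
    }
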
diff --git a/docs/streams/quickstart.md b/docs/streams/quickstart.md
index 6f1569f8ecd..0657fa8dcc1 100644
--- a/docs/streams/quickstart.md
+++ b/docs/streams/quickstart.md
@@ -66,11 +66,11 @@ As the first step, we will start Kafka (unless you already 
have it started) and
 
 ### Step 1: Download the code
 
-[Download](https://www.apache.org/dyn/closer.cgi?path=/kafka/4.0.1/kafka_2.13-4.0.1.tgz "Kafka downloads") the 4.0.1 release and un-tar it. Note that there are multiple downloadable Scala versions and we choose to use the recommended version (2.13) here: 
+[Download](https://www.apache.org/dyn/closer.cgi?path=/kafka/4.0.2/kafka_2.13-4.0.2.tgz "Kafka downloads") the 4.0.2 release and un-tar it. Note that there are multiple downloadable Scala versions and we choose to use the recommended version (2.13) here:
     
     
-    $ tar -xzf kafka_2.13-4.0.1.tgz
-    $ cd kafka_2.13-4.0.1
+    $ tar -xzf kafka_2.13-4.0.2.tgz
+    $ cd kafka_2.13-4.0.2
 
 ### Step 2: Start the Kafka server
 
diff --git a/docs/streams/tutorial.md b/docs/streams/tutorial.md
index b3bf211bf8d..5b9f0777805 100644
--- a/docs/streams/tutorial.md
+++ b/docs/streams/tutorial.md
@@ -38,7 +38,7 @@ We are going to use a Kafka Streams Maven Archetype for 
creating a Streams proje
     $ mvn archetype:generate \
     -DarchetypeGroupId=org.apache.kafka \
     -DarchetypeArtifactId=streams-quickstart-java \
-    -DarchetypeVersion=4.0.1 \
+    -DarchetypeVersion=4.0.2 \
     -DgroupId=streams.examples \
     -DartifactId=streams-quickstart \
     -Dversion=0.1 \
diff --git a/docs/streams/upgrade-guide.md b/docs/streams/upgrade-guide.md
index 21f90088ec4..baeacd09e3b 100644
--- a/docs/streams/upgrade-guide.md
+++ b/docs/streams/upgrade-guide.md
@@ -28,20 +28,20 @@ type: docs
 
 # Upgrade Guide and API Changes
 
-Upgrading from any older version to 4.0.1 is possible: if upgrading from 3.4 or below, you will need to do two rolling bounces, where during the first rolling bounce phase you set the config `upgrade.from="older version"` (possible values are `"0.10.0" - "3.4"`) and during the second you remove it. This is required to safely handle 3 changes. The first is introduction of the new cooperative rebalancing protocol of the embedded consumer. The second is a change in foreign-key join serializ [...]
+Upgrading from any older version to 4.0.2 is possible: if upgrading from 3.4 or below, you will need to do two rolling bounces, where during the first rolling bounce phase you set the config `upgrade.from="older version"` (possible values are `"0.10.0" - "3.4"`) and during the second you remove it. This is required to safely handle 3 changes. The first is introduction of the new cooperative rebalancing protocol of the embedded consumer. The second is a change in foreign-key join serializ [...]
 
   * prepare your application instances for a rolling bounce and make sure that 
config `upgrade.from` is set to the version from which it is being upgrade.
   * bounce each instance of your application once 
-  * prepare your newly deployed 4.0.1 application instances for a second round of rolling bounces; make sure to remove the value for config `upgrade.from`
+  * prepare your newly deployed 4.0.2 application instances for a second round of rolling bounces; make sure to remove the value for config `upgrade.from`
   * bounce each instance of your application once more to complete the upgrade 
 
 
 
-As an alternative, an offline upgrade is also possible. Upgrading from any versions as old as 0.10.0.x to 4.0.1 in offline mode require the following steps: 
+As an alternative, an offline upgrade is also possible. Upgrading from any versions as old as 0.10.0.x to 4.0.2 in offline mode require the following steps:
 
   * stop all old (e.g., 0.10.0.x) application instances 
   * update your code and swap old code and jar file with new code and new jar 
file 
-  * restart all new (4.0.1) application instances 
+  * restart all new (4.0.2) application instances
 
 
 
@@ -67,7 +67,7 @@ Since 2.6.0 release, Kafka Streams depends on a RocksDB 
version that requires Ma
 
 To run a Kafka Streams application version 2.2.1, 2.3.0, or higher a broker 
version 0.11.0 or higher is required and the on-disk message format must be 
0.11 or higher. Brokers must be on version 0.10.1 or higher to run a Kafka 
Streams application version 0.10.1 to 2.2.0. Additionally, on-disk message 
format must be 0.10 or higher to run a Kafka Streams application version 1.0 to 
2.2.0. For Kafka Streams 0.10.0, broker version 0.10.0 or higher is required. 
 
-In deprecated `KStreamBuilder` class, when a `KTable` is created from a source topic via `KStreamBuilder.table()`, its materialized state store will reuse the source topic as its changelog topic for restoring, and will disable logging to avoid appending new updates to the source topic; in the `StreamsBuilder` class introduced in 1.0, this behavior was changed accidentally: we still reuse the source topic as the changelog topic for restoring, but will also create a separate changelog topi [...]
+In deprecated `KStreamBuilder` class, when a `KTable` is created from a source topic via `KStreamBuilder.table()`, its materialized state store will reuse the source topic as its changelog topic for restoring, and will disable logging to avoid appending new updates to the source topic; in the `StreamsBuilder` class introduced in 1.0, this behavior was changed accidentally: we still reuse the source topic as the changelog topic for restoring, but will also create a separate changelog topi [...]
 
 ## Streams API changes in 4.0.0
 
@@ -225,7 +225,7 @@ Kafka Streams does not send a "leave group" request when an 
instance is closed.
   * `KStream<KOut,VOut> KStream.process(ProcessorSupplier, ...)`
   * `KStream<K,VOut> KStream.processValues(FixedKeyProcessorSupplier, ...)`
 
-Both new methods have multiple overloads and return a `KStream` instead of `void` as the deprecated `process()` methods did. In addition, `FixedKeyProcessor`, `FixedKeyRecord`, `FixedKeyProcessorContext`, and `ContextualFixedKeyProcessor` are introduced to guard against disallowed key modification inside `processValues()`. Furthermore, `ProcessingContext` is added for a better interface hierarchy. **CAUTION:** The newly added `KStream.processValues()` method introduced a regression bug ( [...]
+Both new methods have multiple overloads and return a `KStream` instead of `void` as the deprecated `process()` methods did. In addition, `FixedKeyProcessor`, `FixedKeyRecord`, `FixedKeyProcessorContext`, and `ContextualFixedKeyProcessor` are introduced to guard against disallowed key modification inside `processValues()`. Furthermore, `ProcessingContext` is added for a better interface hierarchy. **CAUTION:** The newly added `KStream.processValues()` method introduced a regression bug ( [...]
 
 Emitting a windowed aggregation result only after a window is closed is 
currently supported via the `suppress()` operator. However, `suppress()` uses 
an in-memory implementation and does not support RocksDB. To close this gap, 
[KIP-825](https://cwiki.apache.org/confluence/display/KAFKA/KIP-825%3A+introduce+a+new+API+to+control+when+aggregated+results+are+produced)
 introduces "emit strategies", which are built into the aggregation operator 
directly to use the already existing RocksDB stor [...]
 
