This is an automated email from the ASF dual-hosted git repository.

mjsax pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
     new baea94d  KAFKA-10722: Improve JavaDoc for KGroupedStream.aggregate and other similar methods (#9606)
baea94d is described below

commit baea94d9264b87d777f67afcc1e161fb51ac57ec
Author: fml2 <[email protected]>
AuthorDate: Thu Dec 24 00:50:52 2020 +0100

    KAFKA-10722: Improve JavaDoc for KGroupedStream.aggregate and other similar methods (#9606)
    
    Reviewer: Matthias J. Sax <[email protected]>
---
 .../kafka/streams/kstream/CogroupedKStream.java    | 12 +++++---
 .../kafka/streams/kstream/KGroupedStream.java      | 30 ++++++++++++------
 .../kstream/TimeWindowedCogroupedKStream.java      | 12 +++++---
 .../kafka/streams/kstream/TimeWindowedKStream.java | 36 ++++++++++++++--------
 4 files changed, 60 insertions(+), 30 deletions(-)

diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/CogroupedKStream.java b/streams/src/main/java/org/apache/kafka/streams/kstream/CogroupedKStream.java
index 0f881a6..3a07cad 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/CogroupedKStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/CogroupedKStream.java
@@ -92,7 +92,8 @@ public interface CogroupedKStream<K, VOut> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to query
      * the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -141,7 +142,8 @@ public interface CogroupedKStream<K, VOut> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to query
      * the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -191,7 +193,8 @@ public interface CogroupedKStream<K, VOut> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to query
      * the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -243,7 +246,8 @@ public interface CogroupedKStream<K, VOut> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to query
      * the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
index a416b8f..e883581 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
@@ -55,7 +55,8 @@ public interface KGroupedStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -82,7 +83,8 @@ public interface KGroupedStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -125,7 +127,8 @@ public interface KGroupedStream<K, V> {
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot contain characters other than ASCII
      * alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -170,7 +173,8 @@ public interface KGroupedStream<K, V> {
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot contain characters other than ASCII
      * alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -210,7 +214,8 @@ public interface KGroupedStream<K, V> {
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -272,7 +277,8 @@ public interface KGroupedStream<K, V> {
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -335,7 +341,8 @@ public interface KGroupedStream<K, V> {
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -381,7 +388,8 @@ public interface KGroupedStream<K, V> {
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -438,7 +446,8 @@ public interface KGroupedStream<K, V> {
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot contain characters other than ASCII
      * alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -496,7 +505,8 @@ public interface KGroupedStream<K, V> {
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot contain characters other than ASCII
      * alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
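The JavaDoc text above fixes two concrete rules: the changelog topic is named "${applicationId}-${storeName}-changelog", and a store name may contain only ASCII alphanumerics, '.', '_' and '-' because it becomes part of a Kafka topic name. A minimal pure-Java sketch of those two rules (the class and helper names are hypothetical illustrations, not part of the Kafka Streams API):

```java
import java.util.regex.Pattern;

public class ChangelogNaming {

    // A store name must be a valid Kafka topic name fragment:
    // only ASCII alphanumerics, '.', '_' and '-' are allowed.
    private static final Pattern VALID_STORE_NAME = Pattern.compile("[a-zA-Z0-9._-]+");

    static boolean isValidStoreName(String storeName) {
        return VALID_STORE_NAME.matcher(storeName).matches();
    }

    // The changelog topic is named "${applicationId}-${storeName}-changelog".
    static String buildChangelogTopicName(String applicationId, String storeName) {
        if (!isValidStoreName(storeName)) {
            throw new IllegalArgumentException("invalid store name: " + storeName);
        }
        return applicationId + "-" + storeName + "-changelog";
    }

    public static void main(String[] args) {
        // prints my-app-counts-store-changelog
        System.out.println(buildChangelogTopicName("my-app", "counts-store"));
        System.out.println(isValidStoreName("bad name!")); // prints false
    }
}
```

This only mirrors the naming contract stated in the JavaDoc; the real topic creation is handled internally by Kafka Streams.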
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedCogroupedKStream.java b/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedCogroupedKStream.java
index fe5af0b..c7b789e 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedCogroupedKStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedCogroupedKStream.java
@@ -77,7 +77,8 @@ public interface TimeWindowedCogroupedKStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -116,7 +117,8 @@ public interface TimeWindowedCogroupedKStream<K, V> {
      * parameters for {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -171,7 +173,8 @@ public interface TimeWindowedCogroupedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the {@link Materialized} instance must be a valid Kafka topic name and
      * cannot contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -227,7 +230,8 @@ public interface TimeWindowedCogroupedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the {@link Materialized} instance must be a valid Kafka topic name and
      * cannot contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedKStream.java b/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedKStream.java
index 81a6b2c..e600edb 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedKStream.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/TimeWindowedKStream.java
@@ -67,7 +67,8 @@ public interface TimeWindowedKStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -96,7 +97,8 @@ public interface TimeWindowedKStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -141,7 +143,8 @@ public interface TimeWindowedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -189,7 +192,8 @@ public interface TimeWindowedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -234,7 +238,8 @@ public interface TimeWindowedKStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -278,7 +283,8 @@ public interface TimeWindowedKStream<K, V> {
      * parameters for {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -337,7 +343,8 @@ public interface TimeWindowedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the {@link Materialized} instance must be a valid Kafka topic name and
      * cannot contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -397,7 +404,8 @@ public interface TimeWindowedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the {@link Materialized} instance must be a valid Kafka topic name and
      * cannot contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -451,7 +459,8 @@ public interface TimeWindowedKStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -495,7 +504,8 @@ public interface TimeWindowedKStream<K, V> {
      * {@link StreamsConfig#CACHE_MAX_BYTES_BUFFERING_CONFIG cache size}, and
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit interval}.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.
      * The changelog topic will be named "${applicationId}-${internalStoreName}-changelog", where "applicationId" is
      * user-specified in {@link StreamsConfig} via parameter
      * {@link StreamsConfig#APPLICATION_ID_CONFIG APPLICATION_ID_CONFIG}, "internalStoreName" is an internal name
@@ -554,7 +564,8 @@ public interface TimeWindowedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
@@ -616,7 +627,8 @@ public interface TimeWindowedKStream<K, V> {
      * For non-local keys, a custom RPC mechanism must be implemented using {@link KafkaStreams#allMetadata()} to
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedWindowStore} -- regardless of what
+     * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
      * Therefore, the store name defined by the Materialized instance must be a valid Kafka topic name and cannot
      * contain characters other than ASCII alphanumerics, '.', '_' and '-'.
      * The changelog topic will be named "${applicationId}-${storeName}-changelog", where "applicationId" is
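The clause added throughout the patch -- the store always is of type {@link TimestampedKeyValueStore} or {@link TimestampedWindowStore}, regardless of what is specified in the parameter {@code materialized} -- means the library pairs every stored value with a record timestamp even when the caller configures a plain store. A toy JDK-only sketch of that pairing semantics (TimestampedStoreSketch and ValueAndTs are hypothetical illustration classes, not the Kafka Streams API):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a "timestamped" store: every write keeps a (value, timestamp)
// pair, independent of what plain store type a caller might have asked for.
public class TimestampedStoreSketch<K, V> {

    // Mirrors the idea of a value-plus-timestamp holder.
    public static final class ValueAndTs<V> {
        private final V value;
        private final long timestamp;

        ValueAndTs(V value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }

        public V value() { return value; }
        public long timestamp() { return timestamp; }
    }

    private final Map<K, ValueAndTs<V>> inner = new HashMap<>();

    // Every write records the timestamp alongside the value.
    public void put(K key, V value, long timestamp) {
        inner.put(key, new ValueAndTs<>(value, timestamp));
    }

    public ValueAndTs<V> get(K key) {
        return inner.get(key);
    }
}
```

In Kafka Streams itself this wrapping is done internally; the sketch only illustrates why the JavaDoc can promise a timestamped store type unconditionally.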
