mjsax commented on a change in pull request #9606:
URL: https://github.com/apache/kafka/pull/9606#discussion_r525580343



##########
File path: streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
##########
@@ -381,7 +381,8 @@
      * {@link StreamsConfig#COMMIT_INTERVAL_MS_CONFIG commit intervall}.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore}) will be backed by
+     * an internal changelog topic that will be created in Kafka.

Review comment:
       Should we apply the same improvement to the `reduce()` and `count()` overloads? Also to `CogroupedKStream#aggregate()`?
   
   What about `TimeWindowedKStream` and `TimeWindowedCogroupedKStream`?
   
   Also, `StreamsBuilder#table()` (and `#globalTable()`) might need an update?
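
To make the documented guarantee concrete, here is a hedged sketch (the store name `counts-store` and topic `input` are hypothetical, not from this PR; requires `kafka-streams` on the classpath) showing that a store materialized via `aggregate()` is queried as a timestamped store, returning `ValueAndTimestamp`-wrapped values:

```java
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.ValueAndTimestamp;

public class TimestampedStoreSketch {

    // Build a counting aggregation. Although the Materialized parameter is
    // typed against KeyValueStore<Bytes, byte[]>, the store that aggregate()
    // actually creates is a TimestampedKeyValueStore.
    static void build(final StreamsBuilder builder) {
        builder.<String, String>stream("input")
            .groupByKey()
            .aggregate(
                () -> 0L,
                (key, value, agg) -> agg + 1L,
                Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));
    }

    // Interactive queries therefore use the timestamped store type and
    // receive values wrapped in ValueAndTimestamp.
    static void query(final KafkaStreams streams) {
        final ReadOnlyKeyValueStore<String, ValueAndTimestamp<Long>> store =
            streams.store(StoreQueryParameters.fromNameAndType(
                "counts-store", QueryableStoreTypes.timestampedKeyValueStore()));
        final ValueAndTimestamp<Long> vat = store.get("some-key");
        if (vat != null) {
            System.out.println(vat.value() + " @ " + vat.timestamp());
        }
    }
}
```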

##########
File path: streams/src/main/java/org/apache/kafka/streams/kstream/KGroupedStream.java
##########
@@ -438,7 +439,8 @@
      * query the value of the key on a parallel running instance of your Kafka Streams application.
      *
      * <p>
-     * For failure and recovery the store will be backed by an internal changelog topic that will be created in Kafka.
+     * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
+     * is specified in the parameter {@materialized}) will be backed by an internal changelog topic that will be created in Kafka.

Review comment:
       `{@materialized}` is not valid Javadoc markup as far as I know. Should we use `{@code materialized}` instead? (Same below.)
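
For illustration, with the reviewer's suggested markup applied, the added Javadoc might read (a sketch, not the final wording):

```java
/**
 * <p>
 * For failure and recovery the store (which always will be of type {@link TimestampedKeyValueStore} -- regardless of what
 * is specified in the parameter {@code materialized}) will be backed by an internal changelog topic that will be created in Kafka.
 */
```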




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
