This is an automated email from the ASF dual-hosted git repository.
guozhang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/trunk by this push:
new 498cf73 MINOR: Fix typo in KTable javadoc
498cf73 is described below
commit 498cf73f54c202417ce0e12d8ef7f552da850f7d
Author: Dmitry Minkovsky <[email protected]>
AuthorDate: Thu Feb 1 10:14:38 2018 -0800
MINOR: Fix typo in KTable javadoc
Author: Dmitry Minkovsky <[email protected]>
Reviewers: Joel Hamill <[email protected]>, Matthias J.
Sax <[email protected]>, Guozhang Wang <[email protected]>
Closes #4483 from dminkovsky/fix-javadoc-typo
---
.../org/apache/kafka/streams/kstream/KTable.java | 150 ++++++++++-----------
1 file changed, 75 insertions(+), 75 deletions(-)
diff --git a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
index 3290150..c1288f1 100644
--- a/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
+++ b/streams/src/main/java/org/apache/kafka/streams/kstream/KTable.java
@@ -74,16 +74,16 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists of all records of this {@code
KTable} which satisfy the given
* predicate.
* All records that do not satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
- * result {@code KTable}.
+ * For each {@code KTable} update, the filter is evaluated based on the
current update
+ * record and then an update record is produced for the result {@code
KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filter} for a <i>changelog stream</i> works different
to {@link KStream#filter(Predicate)
+ * Note that {@code filter} for a <i>changelog stream</i> works
differently than {@link KStream#filter(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
* directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfy
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy
the given predicate) a tombstone record
* is forwarded.
*
* @param predicate a filter {@link Predicate} that is applied to each
record
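The hunk above documents the changelog-specific `filter` rule: tombstones (records with `null` values) bypass the predicate entirely, and records that fail the predicate are emitted as tombstones. A minimal, dependency-free sketch of that rule (the class and method names are illustrative, not part of the Streams API):

```java
import java.util.function.BiPredicate;

public class ChangelogFilter {
    // Models the KTable#filter rule documented above:
    // - a tombstone (null value) bypasses the predicate and is forwarded as-is
    // - a record that fails the predicate is replaced by a tombstone
    // - a record that passes the predicate is forwarded unchanged
    public static <K, V> V apply(BiPredicate<K, V> predicate, K key, V value) {
        if (value == null) {
            return null; // tombstone: predicate not evaluated, forwarded directly
        }
        return predicate.test(key, value) ? value : null; // dropped -> tombstone
    }

    public static void main(String[] args) {
        BiPredicate<String, Integer> positive = (k, v) -> v > 0;
        if (apply(positive, "a", 5) != 5) throw new AssertionError();              // kept
        if (apply(positive, "b", -1) != null) throw new AssertionError();          // dropped
        if (apply(positive, "c", (Integer) null) != null) throw new AssertionError(); // tombstone
        System.out.println("ok");
    }
}
```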
@@ -96,16 +96,16 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists of all records of this {@code
KTable} which satisfy the given
* predicate.
* All records that do not satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
- * result {@code KTable}.
+ * For each {@code KTable} update, the filter is evaluated based on the
current update
+ * record and then an update record is produced for the result {@code
KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filter} for a <i>changelog stream</i> works different
to {@link KStream#filter(Predicate)
+ * Note that {@code filter} for a <i>changelog stream</i> works
differently than {@link KStream#filter(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
* directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfy
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy
the given predicate) a tombstone record
* is forwarded.
* <p>
* To query the local {@link KeyValueStore} it must be obtained via
@@ -134,16 +134,16 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists of all records of this {@code
KTable} which satisfy the given
* predicate.
* All records that do not satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
+ * For each {@code KTable} update the filter is evaluated on the updated
record to produce an updated record for the
* result {@code KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filter} for a <i>changelog stream</i> works different
to {@link KStream#filter(Predicate)
+ * Note that {@code filter} for a <i>changelog stream</i> works
differently than {@link KStream#filter(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
* directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfy
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy
the given predicate) a tombstone record
* is forwarded.
* <p>
* To query the local {@link KeyValueStore} it must be obtained via
@@ -174,16 +174,16 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists of all records of this {@code
KTable} which satisfy the given
* predicate.
* All records that do not satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
+ * For each {@code KTable} update the filter is evaluated on the updated
record to produce an updated record for the
* result {@code KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filter} for a <i>changelog stream</i> works different
to {@link KStream#filter(Predicate)
+ * Note that {@code filter} for a <i>changelog stream</i> works
differently than {@link KStream#filter(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
* directly if required (i.e., if there is anything to be deleted).
- * Furthermore, for each record that gets dropped (i.e., dot not satisfy
the given predicate) a tombstone record
+ * Furthermore, for each record that gets dropped (i.e., does not satisfy
the given predicate) a tombstone record
* is forwarded.
* <p>
* To query the local {@link KeyValueStore} it must be obtained via
@@ -212,11 +212,11 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists all records of this {@code
KTable} which do <em>not</em> satisfy the
* given predicate.
* All records that <em>do</em> satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
- * result {@code KTable}.
+ * For each {@code KTable} update, the filter is evaluated based on the
current update
+ * record and then an update record is produced for the result {@code
KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filterNot} for a <i>changelog stream</i> works
different to {@link KStream#filterNot(Predicate)
+ * Note that {@code filterNot} for a <i>changelog stream</i> works
differently than {@link KStream#filterNot(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
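The `filterNot` hunks carry the same tombstone rule with the predicate inverted: records that *satisfy* the predicate are the ones dropped. A sketch under the same illustrative naming as before (not the Streams API itself):

```java
import java.util.function.BiPredicate;

public class ChangelogFilterNot {
    // Inverse of the filter rule above: records satisfying the predicate are
    // dropped (emitted as tombstones); tombstones still bypass the predicate.
    public static <K, V> V apply(BiPredicate<K, V> predicate, K key, V value) {
        if (value == null) {
            return null; // tombstone forwarded, predicate skipped
        }
        return predicate.test(key, value) ? null : value;
    }

    public static void main(String[] args) {
        BiPredicate<String, Integer> positive = (k, v) -> v > 0;
        if (apply(positive, "a", 5) != null) throw new AssertionError(); // satisfies -> dropped
        if (apply(positive, "b", -1) != -1) throw new AssertionError();  // kept
        System.out.println("ok");
    }
}
```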
@@ -234,11 +234,11 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists all records of this {@code
KTable} which do <em>not</em> satisfy the
* given predicate.
* All records that <em>do</em> satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
- * result {@code KTable}.
+ * For each {@code KTable} update, the filter is evaluated based on the
current update
+ * record and then an update record is produced for the result {@code
KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filterNot} for a <i>changelog stream</i> works
different to {@link KStream#filterNot(Predicate)
+ * Note that {@code filterNot} for a <i>changelog stream</i> works
differently than {@link KStream#filterNot(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
@@ -270,11 +270,11 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists all records of this {@code
KTable} which do <em>not</em> satisfy the
* given predicate.
* All records that <em>do</em> satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
+ * For each {@code KTable} update the filter is evaluated on the updated
record to produce an updated record for the
* result {@code KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filterNot} for a <i>changelog stream</i> works
different to {@link KStream#filterNot(Predicate)
+ * Note that {@code filterNot} for a <i>changelog stream</i> works
differently than {@link KStream#filterNot(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
@@ -307,11 +307,11 @@ public interface KTable<K, V> {
* Create a new {@code KTable} that consists all records of this {@code
KTable} which do <em>not</em> satisfy the
* given predicate.
* All records that <em>do</em> satisfy the predicate are dropped.
- * For each {@code KTable} update the filter is evaluated on the update
record to produce an update record for the
+ * For each {@code KTable} update the filter is evaluated on the updated
record to produce an updated record for the
* result {@code KTable}.
* This is a stateless record-by-record operation.
* <p>
- * Note that {@code filterNot} for a <i>changelog stream</i> works
different to {@link KStream#filterNot(Predicate)
+ * Note that {@code filterNot} for a <i>changelog stream</i> works
differently than {@link KStream#filterNot(Predicate)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided filter predicate is not evaluated but
the tombstone record is forwarded
@@ -346,8 +346,8 @@ public interface KTable<K, V> {
/**
* Create a new {@code KTable} by transforming the value of each record in
this {@code KTable} into a new value
* (with possible new type) in the new {@code KTable}.
- * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the update record and
- * computes a new value for it, resulting in an update record for the
result {@code KTable}.
+ * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the updated record and
+ * computes a new value for it, resulting in an updated record for the
result {@code KTable}.
* Thus, an input record {@code <K,V>} can be transformed into an output
record {@code <K:V'>}.
* This is a stateless record-by-record operation.
* <p>
@@ -365,7 +365,7 @@ public interface KTable<K, V> {
* Thus, <em>no</em> internal data redistribution is required if a key
based operator (like a join) is applied to
* the result {@code KTable}.
* <p>
- * Note that {@code mapValues} for a <i>changelog stream</i> works
different to {@link KStream#mapValues(ValueMapper)
+ * Note that {@code mapValues} for a <i>changelog stream</i> works
differently than {@link KStream#mapValues(ValueMapper)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided value-mapper is not evaluated but the
tombstone record is forwarded directly to
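For `mapValues`, the hunk above states that the value mapper is applied per update but is never invoked for tombstones, which are forwarded directly. A dependency-free sketch of that rule (names are illustrative):

```java
import java.util.function.Function;

public class ChangelogMapValues {
    // Models the KTable#mapValues rule above: the mapper transforms each
    // update's value, but a tombstone (null value) bypasses the mapper.
    public static <V, VR> VR apply(Function<V, VR> mapper, V value) {
        return value == null ? null : mapper.apply(value);
    }

    public static void main(String[] args) {
        Function<Integer, String> toLabel = v -> "value-" + v;
        if (!"value-5".equals(apply(toLabel, 5))) throw new AssertionError();
        if (apply(toLabel, (Integer) null) != null) throw new AssertionError(); // mapper not called
        System.out.println("ok");
    }
}
```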
@@ -381,7 +381,7 @@ public interface KTable<K, V> {
* Create a new {@code KTable} by transforming the value of each record in
this {@code KTable} into a new value
* (with possible new type) in the new {@code KTable}.
* For each {@code KTable} update the provided {@link ValueMapperWithKey}
is applied to the value of the update
- * record and computes a new value for it, resulting in an update record
for the result {@code KTable}.
+ * record and computes a new value for it, resulting in an updated record
for the result {@code KTable}.
* Thus, an input record {@code <K,V>} can be transformed into an output
record {@code <K:V'>}.
* This is a stateless record-by-record operation.
* <p>
@@ -400,7 +400,7 @@ public interface KTable<K, V> {
* Thus, <em>no</em> internal data redistribution is required if a key
based operator (like a join) is applied to
* the result {@code KTable}.
* <p>
- * Note that {@code mapValues} for a <i>changelog stream</i> works
different to {@link KStream#mapValues(ValueMapperWithKey)
+ * Note that {@code mapValues} for a <i>changelog stream</i> works
differently than {@link KStream#mapValues(ValueMapperWithKey)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided value-mapper is not evaluated but the
tombstone record is forwarded directly to
@@ -415,8 +415,8 @@ public interface KTable<K, V> {
/**
* Create a new {@code KTable} by transforming the value of each record in
this {@code KTable} into a new value
* (with possible new type) in the new {@code KTable}.
- * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the update record and
- * computes a new value for it, resulting in an update record for the
result {@code KTable}.
+ * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the updated record and
+ * computes a new value for it, resulting in an updated record for the
result {@code KTable}.
* Thus, an input record {@code <K,V>} can be transformed into an output
record {@code <K:V'>}.
* This is a stateless record-by-record operation.
* <p>
@@ -440,7 +440,7 @@ public interface KTable<K, V> {
* Thus, <em>no</em> internal data redistribution is required if a key
based operator (like a join) is applied to
* the result {@code KTable}.
* <p>
- * Note that {@code mapValues} for a <i>changelog stream</i> works
different to {@link KStream#mapValues(ValueMapper)
+ * Note that {@code mapValues} for a <i>changelog stream</i> works
differently than {@link KStream#mapValues(ValueMapper)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided value-mapper is not evaluated but the
tombstone record is forwarded directly to
@@ -460,7 +460,7 @@ public interface KTable<K, V> {
* Create a new {@code KTable} by transforming the value of each record in
this {@code KTable} into a new value
* (with possible new type) in the new {@code KTable}.
* For each {@code KTable} update the provided {@link ValueMapperWithKey}
is applied to the value of the update
- * record and computes a new value for it, resulting in an update record
for the result {@code KTable}.
+ * record and computes a new value for it, resulting in an updated record
for the result {@code KTable}.
* Thus, an input record {@code <K,V>} can be transformed into an output
record {@code <K:V'>}.
* This is a stateless record-by-record operation.
* <p>
@@ -485,7 +485,7 @@ public interface KTable<K, V> {
* Thus, <em>no</em> internal data redistribution is required if a key
based operator (like a join) is applied to
* the result {@code KTable}.
* <p>
- * Note that {@code mapValues} for a <i>changelog stream</i> works
different to {@link KStream#mapValues(ValueMapper)
+ * Note that {@code mapValues} for a <i>changelog stream</i> works
differently than {@link KStream#mapValues(ValueMapper)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided value-mapper is not evaluated but the
tombstone record is forwarded directly to
@@ -504,8 +504,8 @@ public interface KTable<K, V> {
/**
* Create a new {@code KTable} by transforming the value of each record in
this {@code KTable} into a new value
* (with possible new type) in the new {@code KTable}.
- * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the update record and
- * computes a new value for it, resulting in an update record for the
result {@code KTable}.
+ * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the updated record and
+ * computes a new value for it, resulting in an updated record for the
result {@code KTable}.
* Thus, an input record {@code <K,V>} can be transformed into an output
record {@code <K:V'>}.
* This is a stateless record-by-record operation.
* <p>
@@ -529,7 +529,7 @@ public interface KTable<K, V> {
* Thus, <em>no</em> internal data redistribution is required if a key
based operator (like a join) is applied to
* the result {@code KTable}.
* <p>
- * Note that {@code mapValues} for a <i>changelog stream</i> works
different to {@link KStream#mapValues(ValueMapper)
+ * Note that {@code mapValues} for a <i>changelog stream</i> works
differently than {@link KStream#mapValues(ValueMapper)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided value-mapper is not evaluated but the
tombstone record is forwarded directly to
@@ -552,8 +552,8 @@ public interface KTable<K, V> {
/**
* Create a new {@code KTable} by transforming the value of each record in
this {@code KTable} into a new value
* (with possible new type) in the new {@code KTable}.
- * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the update record and
- * computes a new value for it, resulting in an update record for the
result {@code KTable}.
+ * For each {@code KTable} update the provided {@link ValueMapper} is
applied to the value of the updated record and
+ * computes a new value for it, resulting in an updated record for the
result {@code KTable}.
* Thus, an input record {@code <K,V>} can be transformed into an output
record {@code <K:V'>}.
* This is a stateless record-by-record operation.
* <p>
@@ -577,7 +577,7 @@ public interface KTable<K, V> {
* Thus, <em>no</em> internal data redistribution is required if a key
based operator (like a join) is applied to
* the result {@code KTable}.
* <p>
- * Note that {@code mapValues} for a <i>changelog stream</i> works
different to {@link KStream#mapValues(ValueMapper)
+ * Note that {@code mapValues} for a <i>changelog stream</i> works
differently than {@link KStream#mapValues(ValueMapper)
* record stream filters}, because {@link KeyValue records} with {@code
null} values (so-called tombstone records)
* have delete semantics.
* Thus, for tombstones the provided value-mapper is not evaluated but the
tombstone record is forwarded directly to
@@ -597,7 +597,7 @@ public interface KTable<K, V> {
/**
- * Print the update records of this {@code KTable} to {@code System.out}.
+ * Print the updated records of this {@code KTable} to {@code System.out}.
* This function will use the generated name of the parent processor node
to label the key/value pairs printed to
* the console.
* <p>
@@ -608,7 +608,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code print()} is not applied to the internal state store
and only called for each new {@code KTable}
- * update record.
+ * updated record.
* @deprecated Use the Interactive Queries APIs (e.g., {@link
KafkaStreams#store(String, QueryableStoreType) }
* followed by {@link ReadOnlyKeyValueStore#all()}) to iterate over the
keys of a KTable. Alternatively
* convert to a {@link KStream} using {@link #toStream()} and then use
@@ -618,7 +618,7 @@ public interface KTable<K, V> {
void print();
/**
- * Print the update records of this {@code KTable} to {@code System.out}.
+ * Print the updated records of this {@code KTable} to {@code System.out}.
* This function will use the given name to label the key/value pairs
printed to the console.
* <p>
* The provided serde will be used to deserialize the key or value in case
the type is {@code byte[]} before calling
@@ -628,7 +628,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code print()} is not applied to the internal state store
and only called for each new {@code KTable}
- * update record.
+ * updated record.
*
* @param label the name used to label the key/value pairs printed to the
console
* @deprecated Use the Interactive Queries APIs (e.g., {@link
KafkaStreams#store(String, QueryableStoreType) }
@@ -640,7 +640,7 @@ public interface KTable<K, V> {
void print(final String label);
/**
- * Print the update records of this {@code KTable} to {@code System.out}.
+ * Print the updated records of this {@code KTable} to {@code System.out}.
* This function will use the generated name of the parent processor node
to label the key/value pairs printed to
* the console.
* <p>
@@ -651,7 +651,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code print()} is not applied to the internal state store
and only called for each new {@code KTable}
- * update record.
+ * updated record.
*
* @param keySerde key serde used to deserialize key if type is {@code
byte[]},
* @param valSerde value serde used to deserialize value if type is {@code
byte[]}
@@ -665,7 +665,7 @@ public interface KTable<K, V> {
final Serde<V> valSerde);
/**
- * Print the update records of this {@code KTable} to {@code System.out}.
+ * Print the updated records of this {@code KTable} to {@code System.out}.
* This function will use the given name to label the key/value pairs
printed to the console.
* <p>
* The provided serde will be used to deserialize the key or value in case
the type is {@code byte[]} before calling
@@ -675,7 +675,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code print()} is not applied to the internal state store
and only called for each new {@code KTable}
- * update record.
+ * updated record.
*
* @param keySerde key serde used to deserialize key if type is {@code
byte[]},
* @param valSerde value serde used to deserialize value if type is
{@code byte[]},
@@ -691,7 +691,7 @@ public interface KTable<K, V> {
final String label);
/**
- * Write the update records of this {@code KTable} to a file at the given
path.
+ * Write the updated records of this {@code KTable} to a file at the given
path.
* This function will use the generated name of the parent processor node
to label the key/value pairs printed to
* the file.
* <p>
@@ -702,7 +702,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code writeAsText()} is not applied to the internal state
store and only called for each new
- * {@code KTable} update record.
+ * {@code KTable} updated record.
*
* @param filePath name of file to write to
* @deprecated Use the Interactive Queries APIs (e.g., {@link
KafkaStreams#store(String, QueryableStoreType) }
@@ -714,7 +714,7 @@ public interface KTable<K, V> {
void writeAsText(final String filePath);
/**
- * Write the update records of this {@code KTable} to a file at the given
path.
+ * Write the updated records of this {@code KTable} to a file at the given
path.
* This function will use the given name to label the key/value printed to
the file.
* <p>
* The default serde will be used to deserialize the key or value in case
the type is {@code byte[]} before calling
@@ -724,7 +724,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code writeAsText()} is not applied to the internal state
store and only called for each new
- * {@code KTable} update record.
+ * {@code KTable} updated record.
*
* @param filePath name of file to write to
* @param label the name used to label the key/value pairs printed out to
the console
@@ -738,7 +738,7 @@ public interface KTable<K, V> {
final String label);
/**
- * Write the update records of this {@code KTable} to a file at the given
path.
+ * Write the updated records of this {@code KTable} to a file at the given
path.
* This function will use the generated name of the parent processor node
to label the key/value pairs printed to
* the file.
* <p>
@@ -749,7 +749,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code writeAsText()} is not applied to the internal state
store and only called for each new
- * {@code KTable} update record.
+ * {@code KTable} updated record.
*
* @param filePath name of file to write to
* @param keySerde key serde used to deserialize key if type is {@code
byte[]},
@@ -765,7 +765,7 @@ public interface KTable<K, V> {
final Serde<V> valSerde);
/**
- * Write the update records of this {@code KTable} to a file at the given
path.
+ * Write the updated records of this {@code KTable} to a file at the given
path.
* This function will use the given name to label the key/value printed to
the file.
* <p>
* The default serde will be used to deserialize the key or value in case
the type is {@code byte[]} before calling
@@ -775,7 +775,7 @@ public interface KTable<K, V> {
* {@link Integer} etc. to get meaningful information.
* <p>
* Note that {@code writeAsText()} is not applied to the internal state
store and only called for each new
- * {@code KTable} update record.
+ * {@code KTable} updated record.
*
* @param filePath name of file to write to
* @param label the name used to label the key/value pairs printed to the
console
@@ -793,11 +793,11 @@ public interface KTable<K, V> {
final Serde<V> valSerde);
/**
- * Perform an action on each update record of this {@code KTable}.
+ * Perform an action on each updated record of this {@code KTable}.
* Note that this is a terminal operation that returns void.
* <p>
* Note that {@code foreach()} is not applied to the internal state store
and only called for each new
- * {@code KTable} update record.
+ * {@code KTable} updated record.
*
* @param action an action to perform on each record
* @deprecated Use the Interactive Queries APIs (e.g., {@link
KafkaStreams#store(String, QueryableStoreType) }
@@ -812,7 +812,7 @@ public interface KTable<K, V> {
* Convert this changelog stream to a {@link KStream}.
* <p>
* Note that this is a logical operation and only changes the
"interpretation" of the stream, i.e., each record of
- * this changelog stream is no longer treated as an update record (cf.
{@link KStream} vs {@code KTable}).
+ * this changelog stream is no longer treated as an updated record (cf.
{@link KStream} vs {@code KTable}).
*
* @return a {@link KStream} that contains the same records as this {@code
KTable}
*/
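The `toStream` hunk above emphasizes that the conversion is purely a change of "interpretation": the same changelog records, read as a table, collapse to the latest value per key (with `null` deleting), while read as a stream every record is an independent event. A small illustrative simulation of that distinction (not the Streams API):

```java
import java.util.*;

public class ToStreamSketch {
    // Table interpretation of a changelog: latest value per key, null deletes.
    public static Map<String, String> asTable(List<String[]> changelog) {
        Map<String, String> state = new HashMap<>();
        for (String[] record : changelog) {
            if (record[1] == null) {
                state.remove(record[0]);   // tombstone: delete semantics
            } else {
                state.put(record[0], record[1]);
            }
        }
        return state;
    }

    public static void main(String[] args) {
        List<String[]> log = Arrays.asList(
            new String[]{"k", "v1"},
            new String[]{"k", null},   // tombstone
            new String[]{"k", "v2"});
        if (!Collections.singletonMap("k", "v2").equals(asTable(log))) throw new AssertionError();
        if (log.size() != 3) throw new AssertionError(); // stream view keeps all three events
        System.out.println("ok");
    }
}
```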
@@ -837,7 +837,7 @@ public interface KTable<K, V> {
* {@code table.}{@link #toStream() toStream}{@code ().}{@link
KStream#selectKey(KeyValueMapper) selectKey(KeyValueMapper)}.
* <p>
* Note that {@link #toStream()} is a logical operation and only changes
the "interpretation" of the stream, i.e.,
- * each record of this changelog stream is no longer treated as an update
record (cf. {@link KStream} vs {@code KTable}).
+ * each record of this changelog stream is no longer treated as an updated
record (cf. {@link KStream} vs {@code KTable}).
*
* @param mapper a {@link KeyValueMapper} that computes a new key for each
record
* @param <KR> the new key type of the result stream
@@ -1286,7 +1286,7 @@ public interface KTable<K, V> {
* You can retrieve all generated internal topic names via {@link
KafkaStreams#toString()}.
* <p>
* All data of this {@code KTable} will be redistributed through the
repartitioning topic by writing all update
- * records to and rereading all update records from it, such that the
resulting {@link KGroupedTable} is partitioned
+ * records to and rereading all updated records from it, such that the
resulting {@link KGroupedTable} is partitioned
* on the new key.
* <p>
* If the key or value type is changed, it is recommended to use {@link
#groupBy(KeyValueMapper, Serialized)}
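The `groupBy` hunk above describes routing every update record through a repartition topic keyed on the *new* grouping key. A minimal sketch of why that yields a correctly partitioned `KGroupedTable`; note the real Streams runtime hashes the serialized key with murmur2, and `hashCode()` here is only an illustrative stand-in:

```java
public class RepartitionSketch {
    // Models the repartitioning described above: routing depends only on the
    // new grouping key, so equal new keys always land in the same partition.
    public static int partitionFor(String newKey, int numPartitions) {
        return (newKey.hashCode() & 0x7fffffff) % numPartitions; // non-negative index
    }

    public static void main(String[] args) {
        int p = partitionFor("user-42", 8);
        if (p != partitionFor("user-42", 8)) throw new AssertionError(); // deterministic
        if (p < 0 || p >= 8) throw new AssertionError();
        System.out.println("ok");
    }
}
```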
@@ -1316,7 +1316,7 @@ public interface KTable<K, V> {
* You can retrieve all generated internal topic names via {@link
KafkaStreams#toString()}.
* <p>
* All data of this {@code KTable} will be redistributed through the
repartitioning topic by writing all update
- * records to and rereading all update records from it, such that the
resulting {@link KGroupedTable} is partitioned
+ * records to and rereading all updated records from it, such that the
resulting {@link KGroupedTable} is partitioned
* on the new key.
*
* @param selector a {@link KeyValueMapper} that computes a new
grouping key and value to be aggregated
@@ -1344,7 +1344,7 @@ public interface KTable<K, V> {
* You can retrieve all generated internal topic names via {@link
KafkaStreams#toString()}.
* <p>
* All data of this {@code KTable} will be redistributed through the
repartitioning topic by writing all update
- * records to and rereading all update records from it, such that the
resulting {@link KGroupedTable} is partitioned
+ * records to and rereading all updated records from it, such that the
resulting {@link KGroupedTable} is partitioned
* on the new key.
*
* @param selector a {@link KeyValueMapper} that computes a new grouping
key and value to be aggregated
@@ -1390,7 +1390,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
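The join table above (thisState / otherState / result update record) enumerates inner-join semantics per update. A dependency-free sketch of the rule that column encodes: an update on one side is joined against the other side's current state, and a missing match or incoming tombstone yields a result tombstone (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class TableJoinSketch {
    final Map<String, String> left = new HashMap<>();
    final Map<String, String> right = new HashMap<>();

    // Models one row of the inner-join result column: apply the left-side
    // update to left state, then join against right state; null = tombstone.
    String onLeftUpdate(String key, String value) {
        if (value == null) {
            left.remove(key);   // tombstone: delete semantics
        } else {
            left.put(key, value);
        }
        String other = right.get(key);
        return (value != null && other != null) ? value + other : null;
    }

    public static void main(String[] args) {
        TableJoinSketch join = new TableJoinSketch();
        if (join.onLeftUpdate("K1", "A") != null) throw new AssertionError(); // no right state yet
        join.right.put("K1", "b");
        if (!"Ab".equals(join.onLeftUpdate("K1", "A"))) throw new AssertionError();
        if (join.onLeftUpdate("K1", null) != null) throw new AssertionError(); // tombstone result
        System.out.println("ok");
    }
}
```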
@@ -1464,7 +1464,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -1540,7 +1540,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -1623,7 +1623,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -1709,7 +1709,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -1791,7 +1791,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -1875,7 +1875,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -1966,7 +1966,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -2052,7 +2052,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -2133,7 +2133,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -2217,7 +2217,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
@@ -2307,7 +2307,7 @@ public interface KTable<K, V> {
* <th>thisState</th>
* <th>otherKTable</th>
* <th>otherState</th>
- * <th>result update record</th>
+ * <th>result updated record</th>
* </tr>
* <tr>
* <td><K1:A></td>
--
To stop receiving notification emails like this one, please contact
[email protected].