Repository: kafka
Updated Branches:
  refs/heads/trunk b6adb2dc8 -> 5648dcc3e


MINOR: Fix doc typos and grammar

This was contributed by mihbor as part of various doc fixes, including:

https://github.com/apache/kafka/pull/3224
https://github.com/apache/kafka/pull/3226
https://github.com/apache/kafka/pull/3229

Author: Guozhang Wang <[email protected]>

Reviewers: Guozhang Wang <[email protected]>

Closes #3746 from guozhangwang/KMinor-doc-typos


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/5648dcc3
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/5648dcc3
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/5648dcc3

Branch: refs/heads/trunk
Commit: 5648dcc3e59f4518d7febb31e0d687b94f1baadf
Parents: b6adb2d
Author: Michal Borowiecki <[email protected]>
Authored: Sat Aug 26 16:33:33 2017 -0700
Committer: Guozhang Wang <[email protected]>
Committed: Sat Aug 26 16:33:33 2017 -0700

----------------------------------------------------------------------
 docs/streams/core-concepts.html   | 4 ++--
 docs/streams/developer-guide.html | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/5648dcc3/docs/streams/core-concepts.html
----------------------------------------------------------------------
diff --git a/docs/streams/core-concepts.html b/docs/streams/core-concepts.html
index 7349c3a..110ee8e 100644
--- a/docs/streams/core-concepts.html
+++ b/docs/streams/core-concepts.html
@@ -22,7 +22,7 @@
 
     <p>
         Kafka Streams is a client library for processing and analyzing data stored in Kafka.
-        It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management of application state.
+        It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management and real-time querying of application state.
     </p>
     <p>
         Kafka Streams has a <b>low barrier to entry</b>: You can quickly write and run a small-scale proof-of-concept on a single machine; and you only need to run additional instances of your application on multiple machines to scale up to high-volume production workloads.
@@ -57,7 +57,7 @@
     There are two special processors in the topology:
 
     <ul>
-        <li><b>Source Processor</b>: A source processor is a special type of stream processor that does not have any upstream processors. It produces an input stream to its topology from one or multiple Kafka topics by consuming records from these topics and forward them to its down-stream processors.</li>
+        <li><b>Source Processor</b>: A source processor is a special type of stream processor that does not have any upstream processors. It produces an input stream to its topology from one or multiple Kafka topics by consuming records from these topics and forwarding them to its down-stream processors.</li>
         <li><b>Sink Processor</b>: A sink processor is a special type of stream processor that does not have down-stream processors. It sends any received records from its up-stream processors to a specified Kafka topic.</li>
     </ul>
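As a rough illustration of the source → stream processor → sink data flow described in the list above, the following is a conceptual sketch only. It does not use the real Kafka Streams Processor API; plain Java lists stand in for Kafka topics, and the processors are modeled as loop stages.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: lists stand in for Kafka topics to show how records
// flow from a source processor through a stream processor to a sink.
public class TopologySketch {
    public static void main(String[] args) {
        List<String> inputTopic = List.of("hello", "world"); // stand-in for a source topic
        List<String> outputTopic = new ArrayList<>();        // stand-in for a sink topic

        // Source processor: consumes records from the input topic
        // and forwards them to its down-stream processor.
        for (String record : inputTopic) {
            // Intermediate stream processor: transforms each record.
            String transformed = record.toUpperCase();
            // Sink processor: sends received records to the output topic.
            outputTopic.add(transformed);
        }
        System.out.println(outputTopic); // [HELLO, WORLD]
    }
}
```

In a real topology these stages would be wired up through the Streams API and each stage could run independently; the loop here only mimics the record-by-record forwarding.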
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/5648dcc3/docs/streams/developer-guide.html
----------------------------------------------------------------------
diff --git a/docs/streams/developer-guide.html b/docs/streams/developer-guide.html
index e26f6da..cb4c1b0 100644
--- a/docs/streams/developer-guide.html
+++ b/docs/streams/developer-guide.html
@@ -448,7 +448,7 @@ Note that in the <code>WordCountProcessor</code> implementation, users need to r
     ("alice", 1) --> ("alice", 3)
     </pre>
 
-    If these records a KStream and the stream processing application were to sum the values it would return <code>4</code>. If these records were a KTable or GlobalKTable, the return would be <code>3</code>, since the last record would be considered as an update.
+    If the stream is defined as a KStream and the stream processing application were to sum the values it would return <code>4</code>. If the stream is defined as a KTable or GlobalKTable, the return would be <code>3</code>, since the last record would be considered as an update.
 
     <h4><a id="streams_dsl_source" href="#streams_dsl_source">Creating Source Streams from Kafka</a></h4>
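The KStream-vs-KTable distinction being documented in the hunk above can be checked with a small sketch. This is not the Streams DSL; plain collections model the two interpretations of the records ("alice", 1) and ("alice", 3).

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of record-stream vs. changelog-stream semantics,
// using plain collections rather than the Kafka Streams DSL.
public class SumSemantics {
    record KV(String key, int value) {}

    public static void main(String[] args) {
        List<KV> records = List.of(new KV("alice", 1), new KV("alice", 3));

        // KStream view: every record is an independent event,
        // so summing adds all values together.
        int streamSum = records.stream().mapToInt(KV::value).sum();

        // KTable view: a later record with the same key is an update
        // that replaces the earlier value, so only the latest counts.
        Map<String, Integer> table = new HashMap<>();
        for (KV r : records) table.put(r.key(), r.value());
        int tableSum = table.values().stream().mapToInt(Integer::intValue).sum();

        System.out.println(streamSum); // 4
        System.out.println(tableSum);  // 3
    }
}
```

The HashMap's put-overwrites-previous behavior is what models the KTable's update semantics here.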
 
