Repository: spark
Updated Branches:
  refs/heads/branch-1.5 165be9ad1 -> 30f0f7e4e


[DOCS] [STREAMING] [KAFKA] Fix typo in exactly once semantics

Fix a typo in the exactly-once semantics
[Semantics of output operations] link

Author: Moussa Taifi <mouta...@gmail.com>

Closes #8468 from moutai/patch-3.

(cherry picked from commit 9625d13d575c97bbff264f6a94838aae72c9202d)
Signed-off-by: Sean Owen <so...@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/30f0f7e4
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/30f0f7e4
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/30f0f7e4

Branch: refs/heads/branch-1.5
Commit: 30f0f7e4e39b58091e0a10199b6da81d14fa7fdb
Parents: 165be9a
Author: Moussa Taifi <mouta...@gmail.com>
Authored: Thu Aug 27 10:34:47 2015 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Aug 27 10:35:32 2015 +0100

----------------------------------------------------------------------
 docs/streaming-kafka-integration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/30f0f7e4/docs/streaming-kafka-integration.md
----------------------------------------------------------------------
diff --git a/docs/streaming-kafka-integration.md b/docs/streaming-kafka-integration.md
index 7571e22..5db39ae 100644
--- a/docs/streaming-kafka-integration.md
+++ b/docs/streaming-kafka-integration.md
@@ -82,7 +82,7 @@ This approach has the following advantages over the receiver-based approach (i.e
 
 - *Efficiency:* Achieving zero-data loss in the first approach required the data to be stored in a Write Ahead Log, which further replicated the data. This is actually inefficient as the data effectively gets replicated twice - once by Kafka, and a second time by the Write Ahead Log. This second approach eliminates the problem as there is no receiver, and hence no need for Write Ahead Logs. As long as you have sufficient Kafka retention, messages can be recovered from Kafka.
 
-- *Exactly-once semantics:* The first approach uses Kafka's high level API to store consumed offsets in Zookeeper. This is traditionally the way to consume data from Kafka. While this approach (in combination with write ahead logs) can ensure zero data loss (i.e. at-least once semantics), there is a small chance some records may get consumed twice under some failures. This occurs because of inconsistencies between data reliably received by Spark Streaming and offsets tracked by Zookeeper. Hence, in this second approach, we use simple Kafka API that does not use Zookeeper. Offsets are tracked by Spark Streaming within its checkpoints. This eliminates inconsistencies between Spark Streaming and Zookeeper/Kafka, and so each record is received by Spark Streaming effectively exactly once despite failures. In order to achieve exactly-once semantics for output of your results, your output operation that saves the data to an external data store must be either idempotent, or an atomic transaction that saves results and offsets (see [Semanitcs of output operations](streaming-programming-guide.html#semantics-of-output-operations) in the main programming guide for further information).
+- *Exactly-once semantics:* The first approach uses Kafka's high level API to store consumed offsets in Zookeeper. This is traditionally the way to consume data from Kafka. While this approach (in combination with write ahead logs) can ensure zero data loss (i.e. at-least once semantics), there is a small chance some records may get consumed twice under some failures. This occurs because of inconsistencies between data reliably received by Spark Streaming and offsets tracked by Zookeeper. Hence, in this second approach, we use simple Kafka API that does not use Zookeeper. Offsets are tracked by Spark Streaming within its checkpoints. This eliminates inconsistencies between Spark Streaming and Zookeeper/Kafka, and so each record is received by Spark Streaming effectively exactly once despite failures. In order to achieve exactly-once semantics for output of your results, your output operation that saves the data to an external data store must be either idempotent, or an atomic transaction that saves results and offsets (see [Semantics of output operations](streaming-programming-guide.html#semantics-of-output-operations) in the main programming guide for further information).
 
 Note that one disadvantage of this approach is that it does not update offsets in Zookeeper, hence Zookeeper-based Kafka monitoring tools will not show progress. However, you can access the offsets processed by this approach in each batch and update Zookeeper yourself (see below).
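
For reference, the "(see below)" in that note points at the offset-access pattern documented later on the same page. A minimal sketch of that pattern in Scala follows; the broker address, topic name, application name, and batch interval are illustrative assumptions, not part of this commit. Each RDD produced by the direct stream can be cast to HasOffsetRanges to read the exact offsets the batch covers:

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}

    object DirectKafkaOffsets {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(
          new SparkConf().setAppName("DirectKafkaOffsets"), Seconds(10))

        // Assumed broker address and topic name, for illustration only.
        val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
        val topics = Set("mytopic")

        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, topics)

        stream.foreachRDD { rdd =>
          // RDDs from the direct stream implement HasOffsetRanges, exposing
          // the exact Kafka offsets this batch covers, per topic and partition.
          val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
          offsetRanges.foreach { o =>
            println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
            // A Zookeeper-based monitoring tool could be updated here, e.g. by
            // writing o.untilOffset for the pair (o.topic, o.partition).
          }
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }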
 

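The "+" line in the diff also states the exactly-once output requirement: the operation that saves results must be idempotent, or an atomic transaction that saves results and offsets together. A minimal sketch of the transactional variant, reusing `stream` from the sketch above; the JDBC URL and the word_counts/kafka_offsets tables are hypothetical, and any store with transactions works the same way:

    import java.sql.DriverManager
    import org.apache.spark.streaming.kafka.HasOffsetRanges

    stream.foreachRDD { rdd =>
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // A small driver-side result, purely for illustration.
      val counts = rdd.map(_._2).countByValue()

      // Hypothetical JDBC URL for the external data store.
      val conn = DriverManager.getConnection("jdbc:postgresql://localhost/streaming")
      conn.setAutoCommit(false)
      try {
        // Save the batch results...
        val saveResult = conn.prepareStatement(
          "INSERT INTO word_counts (word, count) VALUES (?, ?)")
        counts.foreach { case (word, count) =>
          saveResult.setString(1, word)
          saveResult.setLong(2, count)
          saveResult.executeUpdate()
        }
        // ...and the offsets, in the SAME transaction.
        val saveOffset = conn.prepareStatement(
          "UPDATE kafka_offsets SET until_offset = ? WHERE topic = ? AND partition = ?")
        offsetRanges.foreach { o =>
          saveOffset.setLong(1, o.untilOffset)
          saveOffset.setString(2, o.topic)
          saveOffset.setInt(3, o.partition)
          saveOffset.executeUpdate()
        }
        // Results and offsets become visible together or not at all: on failure
        // and replay, either both exist (skip) or neither does (reprocess).
        conn.commit()
      } catch {
        case e: Exception => conn.rollback(); throw e
      } finally {
        conn.close()
      }
    }

On recovery, the stored until_offset values can seed the fromOffsets variant of createDirectStream, so processing resumes exactly where the last committed batch ended.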

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
