Repository: spark
Updated Branches:
  refs/heads/branch-2.0 f3d82b53c -> f12b74c02

[SPARK-17853][STREAMING][KAFKA][DOC] make it clear that reusing is bad

## What changes were proposed in this pull request?

Documentation fix to make it clear that reusing a group id for different streams 
is a really bad idea, just as it is with the underlying Kafka consumer.
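
The rule being documented can be sketched in plain Scala (a minimal illustration, not code from this commit; the `paramsFor` helper and the group id naming scheme are hypothetical): derive a distinct `` per stream from shared defaults, so no two streams ever share one.

```scala
// Hypothetical helper illustrating the documented rule: every stream gets
// its own Sharing one group id across streams misbehaves with
// the underlying Kafka consumer, and Spark's cached consumers are keyed by
// ( topicpartition), so streams would collide in the cache too.
def paramsFor(stream: String, base: Map[String, Object]): Map[String, Object] =
  base + ("" -> s"spark-$stream")

val base = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092,anotherhost:9092",
  "auto.offset.reset" -> "latest",
  "" -> (false: java.lang.Boolean)
)

val ordersParams = paramsFor("orders", base)
val clicksParams = paramsFor("clicks", base)
// ordersParams("") differs from clicksParams(""),
// so the two streams can safely coexist in one application.
```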

## How was this patch tested?

I built the Jekyll docs and made sure they rendered correctly.

Author: cody koeninger <>

Closes #15442 from koeninger/SPARK-17853.

(cherry picked from commit c264ef9b1918256a5018c7a42a1a2b42308ea3f7)
Signed-off-by: Reynold Xin <>


Branch: refs/heads/branch-2.0
Commit: f12b74c02eec9e201fec8a16dac1f8e549c1b4f0
Parents: f3d82b5
Author: cody koeninger <>
Authored: Wed Oct 12 00:40:47 2016 -0700
Committer: Reynold Xin <>
Committed: Wed Oct 12 00:40:52 2016 -0700

 docs/ | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/docs/ 
index 44c39e3..456b845 100644
--- a/docs/
+++ b/docs/
@@ -27,7 +27,7 @@ For Scala/Java applications using SBT/Maven project definitions, link your strea
          "bootstrap.servers" -> "localhost:9092,anotherhost:9092",
          "key.deserializer" -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
-         "" -> "example",
+         "" -> "use_a_separate_group_id_for_each_stream",
          "auto.offset.reset" -> "latest",
          "" -> (false: java.lang.Boolean)
@@ -48,7 +48,7 @@ Each item in the stream is a 
 For possible kafkaParams, see [Kafka consumer config 
-Note that is disabled, for discussion see [Storing Offsets](streaming-kafka-0-10-integration.html#storing-offsets) below.
+Note that the example sets to false, for discussion see [Storing Offsets](streaming-kafka-0-10-integration.html#storing-offsets) below.
 ### LocationStrategies
 The new Kafka consumer API will pre-fetch messages into buffers.  Therefore it is important for performance reasons that the Spark integration keep cached consumers on executors (rather than recreating them for each batch), and prefer to schedule partitions on the host locations that have the appropriate consumers.
@@ -57,6 +57,9 @@ In most cases, you should use `LocationStrategies.PreferConsistent` as shown above.
 The cache for consumers has a default maximum size of 64.  If you expect to be handling more than (64 * number of executors) Kafka partitions, you can change this setting via `spark.streaming.kafka.consumer.cache.maxCapacity`
+The cache is keyed by topicpartition and, so use a **separate** `` for each call to `createDirectStream`.
 ### ConsumerStrategies
 The new Kafka consumer API has a number of different ways to specify topics, some of which require considerable post-object-instantiation setup.  `ConsumerStrategies` provides an abstraction that allows Spark to obtain properly configured consumers even after restart from checkpoint.
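
Putting the pieces of this doc change together, here is a hedged sketch of what the corrected guidance looks like in application code, assuming the spark-streaming-kafka-0-10 API these docs describe (topic names, hosts, group ids, batch interval, and the cache capacity value are all illustrative; this needs a running Kafka broker and Spark cluster to actually execute):

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val conf = new SparkConf()
  .setAppName("kafka-direct-example")
  // Raise the consumer cache if more than (64 * number of executors)
  // Kafka partitions are expected; 64 is the default maximum.
  .set("spark.streaming.kafka.consumer.cache.maxCapacity", "128")
val ssc = new StreamingContext(conf, Seconds(5))

def kafkaParams(groupId: String) = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092,anotherhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "" -> groupId,
  "auto.offset.reset" -> "latest",
  "" -> (false: java.lang.Boolean)
)

// A separate per stream: the consumer cache is keyed by
// ( topicpartition), so sharing one group id across
// createDirectStream calls would make the streams collide.
val streamA = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent,
  Subscribe[String, String](Seq("topicA"), kafkaParams("group-for-stream-a")))
val streamB = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent,
  Subscribe[String, String](Seq("topicB"), kafkaParams("group-for-stream-b")))
```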
