It's not required. The direct API handles the parallelism for you; from the Spark Streaming + Kafka integration guide:

*Simplified Parallelism:* No need to create multiple input Kafka streams
and union them. With directStream, Spark Streaming will create as many RDD
partitions as there are Kafka partitions to consume, which will all read
data from Kafka in parallel. So there is a one-to-one mapping between Kafka
and RDD partitions, which is easier to understand and tune.
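A minimal sketch of what that looks like with the Spark 1.x / Kafka 0.8 direct API (broker addresses, topic name, and app name below are placeholders, not from the original thread). A single createDirectStream call consumes every partition of the topic in parallel, so you don't create multiple receivers in the same consumer group:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectConsumerSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("direct-example")
    val ssc  = new StreamingContext(conf, Seconds(5))

    // Placeholder broker list and topic -- adjust for your cluster.
    val kafkaParams = Map[String, String](
      "metadata.broker.list" -> "broker1:9092,broker2:9092")
    val topics = Set("my-topic")

    // One call consumes all partitions of the topic; Spark creates
    // one RDD partition per Kafka partition and reads them in parallel.
    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

    // (key, value) tuples; count messages per batch as a smoke test.
    stream.map(_._2).count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

If you need more read parallelism than this gives you, the knob is the number of Kafka partitions on the topic, not the number of consumers.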
On Jul 7, 2016 3:04 PM, "SamyaMaiti" <samya.maiti2...@gmail.com> wrote:

> Hi Team,
>
> Is there a way we can consume from Kafka using the Spark Streaming direct
> API with multiple consumers (belonging to the same consumer group)?
>
> Regards,
> Sam
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-streaming-Kafka-Direct-API-Multiple-consumers-tp27305.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
