anandchangediya commented on issue #21038: [SPARK-22968][DStream] Throw an 
exception on partition revoking issue
URL: https://github.com/apache/spark/pull/21038#issuecomment-541604823
 
 
   @koeninger According to the Kafka documentation:
   
   `If all the consumer instances have the same consumer group, then the records will effectively be load-balanced over the consumer instances`

   This means I can have multiple consumers with the same groupId, which helps me load-balance my application and scale accordingly.
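   For example, with the plain Kafka client this is the behaviour you get by running several copies of something like the following (a rough sketch using the Java consumer API from Scala; broker, topic, and group names are placeholders, and it assumes kafka-clients 2.x for `poll(Duration)`):

```scala
import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

object SameGroupConsumer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")                   // placeholder broker
    props.put("group.id", "my-shared-group")                           // same group.id on every instance
    props.put("key.deserializer", classOf[StringDeserializer].getName)
    props.put("value.deserializer", classOf[StringDeserializer].getName)

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("my-topic"))          // placeholder topic

    // Start several copies of this process with the same group.id and the
    // broker spreads the topic's partitions across the running instances.
    while (true) {
      val records = consumer.poll(Duration.ofMillis(500))
      val it = records.iterator()
      while (it.hasNext) {
        val r = it.next()
        println(s"partition=${r.partition()} offset=${r.offset()} value=${r.value()}")
      }
    }
  }
}
```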
   I don't understand why it is considered "fundamentally wrong" to have multiple consumers with the same groupId in Spark.
   So how can I achieve scalability when listening to a single partition, i.e. increase the consumption rate by using multiple Spark consumers?
   Is this a limitation of Spark's design, or is there another way to achieve this that I am unaware of?
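   To make that concrete, this is roughly the setup I am talking about with the spark-streaming-kafka-0-10 direct stream (a rough sketch; broker, topic, group name, and batch interval are placeholders), and the scenario would be running more than one copy of it with the same `group.id`:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object SameGroupDStream {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("same-group-dstream")
    val ssc  = new StreamingContext(conf, Seconds(5))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "localhost:9092",           // placeholder broker
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "my-shared-group",           // the shared group.id in question
      "auto.offset.reset"  -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](Seq("my-topic"), kafkaParams)  // placeholder topic
    )

    // Running a second copy of this application with the same group.id is the
    // "multiple consumers with the same groupId" scenario I am asking about.
    stream.map(_.value).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```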
   
   @SehanRathnayake Any thoughts?
