Hi Prasad,
Want to correct a bit: it's not one consumer per partition.
It's one consumer thread per partition.
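In case it helps, here is a minimal sketch of a group consumer with the Java client, assuming a placeholder broker at localhost:9092, a placeholder topic "demo-topic", and String keys/values. Each partition is assigned to exactly one member of the group at a time, but one member can own several partitions, and the records for each owned partition can then be handed to their own worker thread:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class GroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address
        props.put("group.id", "demo-group");                // every instance sharing this id joins the same group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));  // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                // Each partition is owned by exactly one group member at a time,
                // but one member may own several partitions. Processing can be
                // parallelised per partition, e.g. one worker thread per partition.
                for (TopicPartition tp : records.partitions()) {
                    for (ConsumerRecord<String, String> record : records.records(tp)) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                tp.partition(), record.offset(), record.value());
                    }
                }
            }
        }
    }
}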


On Thu, 26 Mar 2020 at 4:49 PM, Prasad Suhas Shembekar <
[email protected]> wrote:

> Hi,
>
> I am using Apache Kafka as a Message Broker in our application. The
> producers and consumers are running as Docker containers in Kubernetes.
> Right now, the producer publishes messages to a topic with a single
> partition, and the consumer consumes them from that topic.
> As per my understanding, in Apache Kafka a single consumer from a consumer
> group can consume messages from one partition only. Meaning, if there is
> only a single partition and multiple consumers in a consumer group, only
> one consumer will consume the messages and the rest will remain idle until
> Apache Kafka rebalances the partitions.
> As mentioned earlier, we have a single topic with a single partition and
> multiple consumers in a single group, so we won't be able to achieve
> horizontal scaling for message consumption.
>
> Please let me know if the above understanding is correct.
>
> I am looking into how to create partitions in the topic dynamically, as
> and when a new consumer is added to the consumer group (K8s auto-scaling
> of pods).
> Also, how can I make the producer write to these dynamically created
> partitions without overloading a few of them?
>
> Request you to provide some inputs / suggestions on how to achieve this.
>
> Thanks & Regards,
> Prasad Shembekar
> Blue Marble
> WST-020, D Non-ODC, Mihan SEZ,
> Nagpur
> Extension: 6272148
> Direct: 0712-6672148
>
>
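On the question of adding partitions as consumers scale up: a hedged sketch of raising a topic's partition count with the Java AdminClient, reusing the same placeholder broker and topic names as above. Note that partition counts can only be increased, and re-hashing of keyed records changes once the count grows:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Collections;
import java.util.Properties;

public class AddPartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Increase the partition count of "demo-topic" to 4.
            // Kafka never lets you decrease a partition count, and records
            // keyed the same way may land in a different partition afterwards.
            admin.createPartitions(
                    Collections.singletonMap("demo-topic", NewPartitions.increaseTo(4)))
                 .all()
                 .get();
        }
    }
}

On the producer side, records with a null key are already spread across whatever partitions exist by the default partitioner, so the producer typically picks up new partitions on its next metadata refresh without code changes; keyed records continue to be hashed onto a partition by key.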
