Hi,
We have a Kafka Streams application which runs multiple instances and
consumes from a source topic.
Producers produce keyed messages to this source topic.
The keyed messages are events from different sources, and each source has a
unique key.
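
For concreteness, our producers do something along these lines (an
illustrative sketch only, not our actual code; the topic name, key, and
broker address are made up):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Every event from one source carries that source's unique key,
            // so all of its events hash to the same partition.
            producer.send(new ProducerRecord<>("source-topic", "source-42", "event-payload"));
        }
    }
}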

So what essentially happens is that messages from a particular source
always get added to a particular partition.
Hence we can run multiple instances of the Streams application, with each
instance processing messages for certain partitions.
We will never get into a case where messages for one source are processed
by different instances of the application simultaneously.
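
If I understand correctly, that sticky placement comes from the default
partitioner, which for a non-null key does roughly the following (my
reading of the behaviour, not verbatim Kafka code). Note that the
key-to-partition mapping changes as soon as the partition count does:

import org.apache.kafka.common.utils.Utils;

public class DefaultPartitioningSketch {
    // Roughly what the default partitioner does for a non-null key:
    // murmur2-hash the serialized key and take it modulo the partition
    // count. Increasing the partition count therefore remaps most keys.
    static int partitionFor(byte[] keyBytes, int numPartitions) {
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }
}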

So far so good.

Now, over time, new sources are added. It may so happen that we reach a
saturation point and have no option but to increase the number of
partitions.
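
(By increasing partitions I mean growing the existing topic in place, e.g.
via the admin API; the topic name, broker address, and target count of 12
below are just examples:)

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class GrowTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        try (Admin admin = Admin.create(props)) {
            // Example only: grow "source-topic" to 12 partitions in place.
            admin.createPartitions(Map.of("source-topic", NewPartitions.increaseTo(12)))
                 .all().get();
        }
    }
}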

So what is the best practice for increasing the number of partitions?
Is there a way to ensure that an existing key's messages continue to get
published to the same partition as before, and that only the new sources'
keys get their messages published to the partitions we add? Something like
the custom partitioner sketched below is what I have in mind.
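
(Just a sketch: OLD_PARTITION_COUNT and the legacyKeys set are
hypothetical; we would need some real registry of the keys that existed
before the expansion.)

import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class PinnedKeyPartitioner implements Partitioner {

    // Hypothetical: the partition count the topic had before we expanded it.
    private static final int OLD_PARTITION_COUNT = 8;

    // Hypothetical: keys of the sources that existed before the expansion.
    // In reality this would have to come from configuration or a small store.
    private final Set<String> legacyKeys = Set.of("source-1", "source-2");

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (legacyKeys.contains(key)) {
            // Pre-existing sources keep hashing against the old count,
            // so their messages land on the same partitions as before.
            return Utils.toPositive(Utils.murmur2(keyBytes)) % OLD_PARTITION_COUNT;
        }
        // New sources are free to use the full, expanded partition range.
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}

We would presumably plug this in via the producers' partitioner.class
config, but I am not sure this is the right approach at all.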

If this is not possible, then does Kafka's re-partitioning mechanism
ensure that, during a rebalance, all the previous messages of a particular
key get moved to the same partition?
I guess under this approach we would have to stop our Streams application
until the rebalance is over; otherwise messages for the same key may get
processed by different instances of the application.

Anyway, I just wanted to know how such a problem is tackled on live
systems in real time, or how some of you have approached it.

Thanks
Sachin
