Hi Sachin,
I’m a bit hazy on the details of the broker partition expansion feature; it’s
been a while since I looked at it. But you actually control the
key-to-partition mapping on the producer side. The producer’s default
partitioner just hashes keys over the partitions, but you could plug in your
own partitioner implementation.
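To make that concrete, here is a minimal sketch of the idea in Python (for illustration only; a real producer would implement the Java `org.apache.kafka.clients.producer.Partitioner` interface and configure it via `partitioner.class`). The `known_keys` set, the `stable_hash` helper, and the partition counts are all assumptions for this sketch, and note that Kafka's default partitioner uses murmur2 rather than MD5:

```python
import hashlib


def stable_hash(key: str) -> int:
    # Deterministic hash for illustration; Kafka's default partitioner
    # actually uses murmur2 on the serialized key bytes.
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")


def partition_for(key: str, old_count: int, new_count: int,
                  known_keys: set) -> int:
    """Hypothetical scheme: keys seen before the expansion keep hashing
    over the old partition count, so they stay on their original
    partitions; only new keys are spread over the expanded range."""
    if key in known_keys:
        return stable_hash(key) % old_count
    return stable_hash(key) % new_count


known = {"sensor-a"}  # keys that existed before the expansion (assumed)
print(partition_for("sensor-a", 4, 6, known))  # unchanged by expansion
print(partition_for("sensor-b", 4, 6, known))  # may land on a new partition
```

The hard part in practice is the `known_keys` state: the producer would need a durable record of which keys existed before the expansion, which is why nothing like this is supported out of the box.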
Hi,
I will look into the suggestions you folks mentioned.
I was just wondering about something from a pure Kafka point of view.
Let's say we add new partitions to Kafka topics. Is there any way to
configure it so that only new keys get their messages added to those
partitions, while existing keys continue to go to their original partitions?
Hi Sachin,
Just to build on Boyang’s answer a little: when designing Kafka’s partition
expansion operation, we did consider making it also support dynamic
repartitioning in a way that would work for Streams, but it added too much
complexity, and the contributor had other use cases in mind.
Hey Sachin,
Your observation is correct; unfortunately Kafka Streams doesn't support
adding partitions online. The rebalance cannot guarantee that the same key
routes to the same partition when the input topic's partition count changes,
as it is the upstream producer's responsibility to consistently partition the
data.
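To illustrate why the routing breaks, here is a small Python sketch (simplified: Kafka's default partitioner hashes the serialized key bytes with murmur2, not MD5, but the effect is the same) counting how many keys land on a different partition when a topic grows from 4 to 6 partitions:

```python
import hashlib


def default_style_partition(key: str, num_partitions: int) -> int:
    # Simplified stand-in for a hash-based default partitioner.
    h = int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")
    return h % num_partitions


keys = [f"source-{i}" for i in range(20)]
before = {k: default_style_partition(k, 4) for k in keys}  # 4 partitions
after = {k: default_style_partition(k, 6) for k in keys}   # after expansion
moved = [k for k in keys if before[k] != after[k]]
print(f"{len(moved)} of {len(keys)} keys changed partition after expansion")
```

Because the partition count is part of the modulo, most keys get remapped, so a Streams task that owned a key's partition before the expansion generally no longer sees that key afterwards.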
Hi,
We have a Kafka Streams application which runs multiple instances and
consumes from a source topic.
Producers produce keyed messages to this source topic.
The keyed messages are events from different sources, and each source has a
unique key.
So what essentially happens is that messages from