GitHub user lhotari added a comment to the discussion: FIFO guarantees with 
Key_Shared subscriptions when scaling partitions and clearing backlog

> If possible can you share the insights about addressing strict FIFO while 
> re-partitioning topic in Pulsar 5.0.

@vishnumurthy-nd It's still a high-level idea, and I've been picking 
@merlimat's brain during a few discussions we've had about it. I suspect it 
would be somewhat similar to Pravega's [elastic 
streams](https://pravega.io/docs/latest/pravega-concepts/#elastic-streams-auto-scaling),
 where the number of stream segments automatically grows and shrinks over time 
based on I/O load.

Pulsar already has the Key_Shared subscription, which is sufficient for use 
cases where the throughput of a single broker serving the topic is adequate. 
Unless there's significant I/O load, there isn't a need for partitions in 
Pulsar. It's also possible to use Key_Shared subscriptions over partitioned 
topics when a single partition isn't sufficient; however, changing the number 
of partitions remaps keys to different partitions and therefore breaks 
key-ordered processing.
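For illustration, here's a minimal sketch of a Key_Shared consumer using the 
Pulsar Java client; the service URL, topic, and subscription names are 
placeholders.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionType;

public class KeySharedExample {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL; adjust for your cluster.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Key_Shared spreads keys across consumers while preserving
        // per-key ordering, on both non-partitioned and partitioned topics.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("orders")
                .subscriptionName("order-processing")
                .subscriptionType(SubscriptionType.Key_Shared)
                .subscribe();

        while (true) {
            Message<String> msg = consumer.receive();
            // All messages with the same key arrive on this consumer in order.
            System.out.printf("key=%s value=%s%n", msg.getKey(), msg.getValue());
            consumer.acknowledge(msg);
        }
    }
}
```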

The limitation of the current approaches is that, at high scale, users still 
have to think about partitions. That concern could be eliminated entirely 
while also addressing the key-ordered processing problem inherent in 
partition-based approaches.
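To make the ordering problem concrete, here's a simplified sketch of why 
re-partitioning remaps keys. It's only an illustration: the Java client 
defaults to a Murmur3 32-bit hash of the key modulo the partition count, so 
the plain hashCode() below is a stand-in, not Pulsar's actual router.

```java
public class PartitionRemapping {
    // Simplified stand-in for the client's key router.
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        String key = "customer-42"; // hypothetical key
        int before = partitionFor(key, 4);
        int after = partitionFor(key, 8);
        // If the two differ, messages for this key published after the
        // partition count change land on a new partition while older,
        // unconsumed messages remain on the old one, so per-key ordering
        // across the two partitions is no longer guaranteed.
        System.out.printf("partitions=4 -> %d, partitions=8 -> %d%n", before, after);
    }
}
```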

One way to think about the idea is to consider it as a solution that manages 
partitions under the covers, scaling up and down over time. The solution stores 
the necessary metadata so that keys can be consumed in order when this happens. 
Enabling this would require protocol changes between producers, consumers, 
and brokers. The challenge is that this new type of topic would require new 
clients that support the feature. A proxy-based approach could provide 
backwards compatibility with existing clients, though with certain 
limitations.
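Purely as a hypothetical illustration (not an agreed design), the kind of 
metadata such a topic might track could look like a key-hash range per 
internal segment plus links to the segments it replaced, so a consumer knows 
to drain predecessors before reading a successor:

```java
import java.util.List;

// Hypothetical sketch only: metadata a broker might keep so consumers can
// preserve per-key order across a scaling event. Each internal "segment"
// owns a key-hash range and records which segments it replaced; a consumer
// would only start reading a segment once its predecessors for the same
// range have been fully consumed.
record KeyRangeSegment(
        long segmentId,
        int hashRangeStart,       // inclusive start of the key-hash range
        int hashRangeEnd,         // inclusive end of the key-hash range
        List<Long> predecessorIds // segments that previously owned this range
) {}
```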

@merlimat Did I understand your vision correctly?

GitHub link: 
https://github.com/apache/pulsar/discussions/25131#discussioncomment-15578171
