aloyszhang opened a new issue #13761: URL: https://github.com/apache/pulsar/issues/13761
**Is your enhancement request related to a problem? Please describe.**

A namespace bundle may contain multiple partitions belonging to different topics, and the throughput of these topics can vary greatly: some topics have a very high rate/throughput while others have a very low one. The partitions with high rate/throughput can cause broker overload and bundle unloading. At that point, if we split the bundle manually with the `range_equally_divide` or `topic_count_equally_divide` split algorithm, it may take many splits before the high-rate/throughput partitions are assigned to different new bundles.

We have hit this problem in our Pulsar cluster: one namespace has lots of topics, and a single topic has much higher throughput than the others. The bundles for this namespace are:

["0x00000000","0x15555555","0x2aaaaaaa","0x3fffffff","0x55555554","0x6aaaaaa9","0x7ffffffe","0x95555553","0xaaaaaaa8","0xbffffffd","0xd5555552","0xeaaaaaa7","0xffffffff"]

The highest-throughput topic has 8 partitions, and two of them fall into the same bundle, `0xbffffffd_0xd5555552`. Whichever broker owns this bundle carries a much higher load than the other brokers. Because this bundle also holds lots of other partitions, we cannot quickly split these two partitions into two different bundles with `range_equally_divide` or `topic_count_equally_divide`.

**Describe the solution you'd like**

I think we should make it easier to split two partitions of one topic evenly into two different bundles.

**Describe alternatives you've considered**

One alternative is to allow specifying a topic argument when splitting a bundle with `topic_count_equally_divide`, and then split the partitions of that topic evenly into the two new bundles (see the sketch below).

Any suggestions are appreciated!
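To make the problem and the proposal concrete, here is a small, self-contained Java sketch. This is not Pulsar code: CRC32 is only a stand-in for the 32-bit hash Pulsar actually uses to place topics in bundles, and the topic name, class, and method names are invented for illustration. `splitsToSeparate` shows why midpoint-style splitting can take several rounds before two hot partitions land in different bundles, and `splitBoundaryForTopic` shows how a topic-aware algorithm could pick a boundary that divides one topic's in-bundle partitions evenly between the two new bundles.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.zip.CRC32;

/**
 * Illustration only -- not Pulsar code. CRC32 is a stand-in for the
 * 32-bit hash Pulsar actually uses to map topics into bundle key ranges.
 */
public class TopicAwareSplitSketch {

    /** Hash a fully qualified partition name into the 32-bit bundle key space. */
    static long hashToBundleSpace(String fullPartitionName) {
        CRC32 crc = new CRC32();
        crc.update(fullPartitionName.getBytes(StandardCharsets.UTF_8));
        return crc.getValue(); // value is in [0, 2^32)
    }

    /**
     * How many midpoint splits (the range_equally_divide idea) are needed
     * before two hash positions inside [lower, upper) end up in different
     * sub-ranges? This is the "many splits" problem described above.
     */
    static int splitsToSeparate(long lower, long upper, long posA, long posB) {
        if (posA == posB) {
            return -1; // identical positions can never be separated this way
        }
        int splits = 0;
        while (true) {
            long mid = lower + (upper - lower) / 2;
            splits++;
            boolean aInLeftHalf = posA < mid;
            boolean bInLeftHalf = posB < mid;
            if (aInLeftHalf != bInLeftHalf) {
                return splits; // the two partitions now live in different bundles
            }
            // Keep splitting the half that still contains both partitions.
            if (aInLeftHalf) {
                upper = mid;
            } else {
                lower = mid;
            }
        }
    }

    /**
     * Proposed idea: pick a split boundary so that the partitions of ONE
     * given topic that fall inside [lowerBoundary, upperBoundary) are
     * divided evenly between the two new bundles.
     */
    static long splitBoundaryForTopic(String topic, int numPartitions,
                                      long lowerBoundary, long upperBoundary) {
        List<Long> positions = new ArrayList<>();
        for (int i = 0; i < numPartitions; i++) {
            // Partitions of a partitioned topic are named "<topic>-partition-<i>".
            long pos = hashToBundleSpace(topic + "-partition-" + i);
            if (pos >= lowerBoundary && pos < upperBoundary) {
                positions.add(pos);
            }
        }
        if (positions.size() < 2) {
            // Nothing to balance; fall back to the middle of the range.
            return lowerBoundary + (upperBoundary - lowerBoundary) / 2;
        }
        Collections.sort(positions);
        int mid = positions.size() / 2;
        // Put the boundary between the two middle partitions so each new
        // bundle gets an equal share of this topic's partitions.
        return (positions.get(mid - 1) + positions.get(mid)) / 2 + 1;
    }

    public static void main(String[] args) {
        // Hypothetical topic name, with the bundle from the example above.
        long boundary = splitBoundaryForTopic(
                "persistent://tenant/ns/hot-topic", 8, 0xbffffffdL, 0xd5555552L);
        System.out.printf("proposed split boundary: 0x%08x%n", boundary);
    }
}
```

The sketch only computes a split position; in Pulsar the actual boundary choice would have to go through whatever split-algorithm hook the broker exposes, which is not shown here.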
