lhotari commented on issue #25028: URL: https://github.com/apache/pulsar/issues/25028#issuecomment-3600359223
> 1. For the compression level, yes I've added it but it seems to not have much effect.
> 2. As for the receiver queue size, my experience with delayed messages is that if we want the message to arrive on time, the receiver queue size needs to be 1; yet even changing the receiver queue size to 1000 does not change the issue.

Thanks for checking. The purpose was to find out whether the behavior is any different. For production use, it's also recommended to enable compression.

> 4. Back to the `managedLedgerMaxUnackedRangesToPersist`, my former setting is 100,000, now it is 200,000. And I can confirm that setting this value to a bigger threshold works for a while. But my concern is that the business keeps growing and the `holes` keep growing too. I wonder what will happen if we keep increasing the `managedLedgerMaxUnackedRangesToPersist`? That is my original question about:
>
> > The max number of "ack holes" (== "unacked ranges") is "number_of_entries_since_oldest_non_acked_message / 2". If this exceeds the limit `managedLedgerMaxUnackedRangesToPersist`, the messages could get redelivered after a broker restart or when the namespace bundle gets moved to another broker because of load shedding / load balancing.
>
> I wonder if there's a way to optimize such an issue or a way to tune it? Or is this not the correct way of using Pulsar?

For the acknowledgment state, the way to optimize is to group messages with similar delays into the same topic. On the consumer side, you can consume from multiple topics. This reduces the acknowledgment state size on the broker. (See the sketches below.)

For optimizing `BucketDelayedDeliveryTrackerFactory` performance, @Denovo1998 has been working on #24739.
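For reference, here's a minimal Java client sketch of the settings discussed above: a producer with compression enabled and delayed delivery via `deliverAfter`, and a consumer with a small receiver queue. The service URL, topic name, and subscription name are placeholders, and `ZSTD` is just one of the available compression types. Note that delayed delivery only takes effect on a `Shared` subscription.

```java
import org.apache.pulsar.client.api.CompressionType;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionType;

import java.util.concurrent.TimeUnit;

public class DelayedTuningSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder broker URL
                .build();

        // Producer with compression enabled, as recommended for production use.
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("persistent://public/default/delayed-demo") // placeholder topic
                .compressionType(CompressionType.ZSTD)
                .create();

        // Delayed message: deliverAfter() schedules delivery relative to now.
        producer.newMessage()
                .value("hello")
                .deliverAfter(10, TimeUnit.MINUTES)
                .send();

        // Consumer with a small receiver queue; delayed delivery requires
        // a Shared subscription type to take effect.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("persistent://public/default/delayed-demo")
                .subscriptionName("delayed-sub") // placeholder subscription
                .subscriptionType(SubscriptionType.Shared)
                .receiverQueueSize(1)
                .subscribe();

        consumer.close();
        producer.close();
        client.close();
    }
}
```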
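And a sketch of the delay-bucketing idea: route each message to a topic based on its delay range, then subscribe to all bucket topics with a single consumer. The bucket boundaries and topic names here are made up for illustration; pick buckets that match your actual delay distribution.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionType;

import java.time.Duration;
import java.util.Arrays;
import java.util.concurrent.TimeUnit;

public class DelayBucketingSketch {

    // Hypothetical bucketing: map a requested delay to one of a few
    // coarse-grained topics so each topic holds messages with similar delays.
    static String topicForDelay(Duration delay) {
        if (delay.compareTo(Duration.ofMinutes(5)) <= 0) {
            return "persistent://public/default/delay-5m";
        } else if (delay.compareTo(Duration.ofHours(1)) <= 0) {
            return "persistent://public/default/delay-1h";
        }
        return "persistent://public/default/delay-1d";
    }

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder broker URL
                .build();

        // Publish to the bucket topic that matches the message's delay.
        Duration delay = Duration.ofMinutes(30);
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic(topicForDelay(delay))
                .create();
        producer.newMessage()
                .value("order-timeout-check")
                .deliverAfter(delay.toMillis(), TimeUnit.MILLISECONDS)
                .send();

        // A single consumer can subscribe to all delay buckets at once,
        // so the consuming application doesn't need to change much.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topics(Arrays.asList(
                        "persistent://public/default/delay-5m",
                        "persistent://public/default/delay-1h",
                        "persistent://public/default/delay-1d"))
                .subscriptionName("delay-sub") // placeholder subscription
                .subscriptionType(SubscriptionType.Shared)
                .subscribe();

        consumer.close();
        producer.close();
        client.close();
    }
}
```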
