tarmacmonsterg opened a new issue, #25097: URL: https://github.com/apache/pulsar/issues/25097
### Search before reporting

- [x] I searched in the [issues](https://github.com/apache/pulsar/issues) and found nothing similar.

### Read release policy

- [x] I understand that [unsupported versions](https://pulsar.apache.org/contribute/release-policy/#supported-versions) don't get bug fixes. I will attempt to reproduce the issue on a supported version of Pulsar client and Pulsar broker.

### User environment

Broker: apachepulsar/pulsar-all:4.0.7
Helm chart: https://github.com/apache/pulsar-helm-chart/tree/pulsar-4.0.1

### Issue Description

Replication for some topics randomly stops during normal operation, causing backlog to accumulate. The issue is observed in two main cases:

1. A sudden spike in the publish rate to a topic (for example, a steady rate of 5 messages per second followed by a burst of 5,000 messages per second lasting about 5 minutes).
2. External infrastructure issues, such as frequent broker restarts or resource-related problems (for example, high iowait on BookKeeper).

The issue occurs at the topic level. For example, we have both partitioned and non-partitioned topics. The problem is observed with non-partitioned topics, and there have also been cases where replication got stuck for a single partition of a partitioned topic.

The only way to restore replication is to disable replication for the namespace and then re-enable it (a command sketch is included at the end of this report).

### Error messages

```text
No error or warning messages are logged.
```

### Reproducing the issue

Cannot reproduce using a standalone cluster.

### Additional information

https://apache-pulsar.slack.com/archives/C078TGY9R29/p1764167528705519

2 clusters:
* 5 bookies
* 4 brokers
* 3 proxies

Replication: 2/2/2

### Are you willing to submit a PR?

- [ ] I'm willing to submit a PR!
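For reference, a minimal sketch of the workaround described above, assuming a two-cluster geo-replication setup; the cluster names (`cluster-a`, `cluster-b`), topic, and namespace (`my-tenant/my-namespace`) are placeholders, not the reporter's actual names:

```bash
# Hypothetical namespace and topic; substitute your own names.
NS=my-tenant/my-namespace
TOPIC=persistent://$NS/my-topic

# Inspect per-topic replication state and backlog toward the remote cluster
# (the "replication" section of the stats output shows replicationBacklog).
bin/pulsar-admin topics stats $TOPIC

# Workaround: restrict the namespace to the local cluster only,
# which disables geo-replication for all topics in the namespace...
bin/pulsar-admin namespaces set-clusters $NS --clusters cluster-a

# ...then restore the full cluster list to re-enable replication.
bin/pulsar-admin namespaces set-clusters $NS --clusters cluster-a,cluster-b
```

Note that this toggles replication for the whole namespace, so it affects every topic in it, not just the stuck one.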
