awitghirmai opened a new issue, #19210: URL: https://github.com/apache/pulsar/issues/19210
### Search before asking

- [X] I searched in the [issues](https://github.com/apache/pulsar/issues) and found nothing similar.

### Version

**OS Version/Computer:** M1 MacBook, macOS Monterey 12.6.1
**Pulsar Version:** Pulsar standalone Docker 2.10.1 and 2.11.0, run locally on a Mac

### Minimal reproduce step

1. Spin up a brand-new Docker standalone instance for the specific version, locally on a Mac.
2. Using our Python code, create a topic with the following (note: under the hood, the Python code uses the HTTP API to accomplish this):
   - 1 partition
   - a retention policy time of -1
   - a backlog quota time of 60 seconds, with the action `producer_exception`
   - 1 consumer group
3. Migrate the topic to the Pulsar instance.
4. Create a producer and send a message (successfully) to Pulsar.
5. Make sure the message just sent was not consumed.
6. Wait 5 minutes, then send another message.

This was done with versions 2.10.1 and 2.11.0, on my local MacBook and on a colleague's MacBook with similar specs and versions. We also repeated the exact same procedure, except that we set the backlog quota manually through `pulsar-admin set-backlog-quota`.

### What did you expect to see?

After setting the backlog quota to 1 minute and sending messages that remained in the backlog, I expected to get a `ProducerBlockedQuotaExceededException` once the 1-minute mark had passed. This doesn't appear to happen, and I've run it multiple times, wiping my Pulsar instance clean and redoing the steps on fresh instances.

### What did you see instead?

I saw my next messages go through (get published) even though I had messages in my backlog as old as 10 minutes.

### Anything else?

_No response_

### Are you willing to submit a PR?

- [ ] I'm willing to submit a PR!

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
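For reference, the reproduce steps above can be sketched with the admin CLI alone. This is a minimal sketch, not the reporter's exact setup: the topic and subscription names are hypothetical, the image tag is an assumption, and the quota is set at the namespace level. Note that in `pulsar-admin namespaces set-backlog-quota`, a time-based quota is selected with `--type message_age` and `--limitTime` (seconds); if the quota is instead created with the default `destination_storage` type, the time limit is not what gets enforced.

```shell
# Sketch of the reproduction against a fresh standalone broker in Docker.
# Topic/subscription names are hypothetical; image tag is an assumption.

# 1. Start a brand-new standalone instance.
docker run -d --name pulsar -p 6650:6650 -p 8080:8080 \
  apachepulsar/pulsar:2.11.0 bin/pulsar standalone

# 2. Create a topic with 1 partition.
docker exec pulsar bin/pulsar-admin topics create-partitioned-topic \
  persistent://public/default/my-topic -p 1

# 3. Retention time of -1 (infinite) on the namespace.
docker exec pulsar bin/pulsar-admin namespaces set-retention public/default \
  --time -1 --size -1

# 4. 60-second time-based backlog quota with the producer_exception action.
docker exec pulsar bin/pulsar-admin namespaces set-backlog-quota public/default \
  --limitTime 60 --policy producer_exception --type message_age

# 5. Create a subscription so published messages accumulate in the backlog.
docker exec pulsar bin/pulsar-admin topics create-subscription \
  persistent://public/default/my-topic -s my-sub
```

With this in place, the expectation is that a producer publishing to `my-topic` after a backlogged message is older than 60 seconds would receive `ProducerBlockedQuotaExceededException`; the report is that publishing continues to succeed instead.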
