leizhiyuan opened a new issue, #6652: URL: https://github.com/apache/rocketmq/issues/6652
**BUG REPORT**

1. Please describe the issue you observed:

In RocketMQ 5.0, `PopReviveService`: from code analysis, if messages are deleted because the disk is full (or for other reasons), the corresponding messages in the `rmq_sys_REVIVE_LOG` topic are gone and the fetch returns `OFFSET_TOO_SMALL`. There seems to be no logic, similar to what the SDK does, to jump directly to the returned `nextBeginOffset`; the revive offset can only be advanced one by one. If nothing else, it feels like this can be optimized.
Otherwise, once messages are deleted, catch-up progress becomes very slow.

2. Please tell us about your environment:

```
2023-04-25 11:49:56 WARN PopReviveService_7 - Can not get msg , topic xxxx_0, offset 170910032, queueId 4, result is GetMessageResult [status=OFFSET_TOO_SMALL, nextBeginOffset=173909497, minOffset=173909497, maxOffset=177327929, bufferTotalSize=0, messageCount=0, suggestPullingFromSlave=false]
2023-04-25 11:49:56 WARN PopReviveService_7 - Can not get msg , topic xxxx_0, offset 170910033, queueId 4, result is GetMessageResult [status=OFFSET_TOO_SMALL, nextBeginOffset=173909497, minOffset=173909497, maxOffset=177327929, bufferTotalSize=0, messageCount=0, suggestPullingFromSlave=false]
```

4. Other information (e.g. detailed explanation, logs, related issues, suggestions on how to fix, etc.): see the logs above; note that both offsets receive the same `nextBeginOffset=173909497`, yet the service still advances one offset at a time.
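The suggested optimization can be sketched as follows. This is a hypothetical illustration, not the actual `PopReviveService` code: the `GetMessageResult` stand-in below only mirrors the field names visible in the log output, and `nextOffset` is an invented helper showing the skip-ahead idea.

```java
// Hypothetical sketch: on OFFSET_TOO_SMALL, jump straight to the returned
// nextBeginOffset instead of advancing the revive offset one message at a time.
public class ReviveOffsetSkip {

    // Minimal stand-in for the broker's GetMessageResult; field names follow
    // the WARN log lines in this report, not the real class.
    static class GetMessageResult {
        final String status;        // e.g. "OFFSET_TOO_SMALL"
        final long nextBeginOffset; // first offset that still exists on disk

        GetMessageResult(String status, long nextBeginOffset) {
            this.status = status;
            this.nextBeginOffset = nextBeginOffset;
        }
    }

    // Returns the next offset the revive loop should poll.
    static long nextOffset(long currentOffset, GetMessageResult result) {
        if ("OFFSET_TOO_SMALL".equals(result.status)
                && result.nextBeginOffset > currentOffset) {
            // Messages below nextBeginOffset were deleted (e.g. disk full);
            // skip the whole gap in one step instead of offset + 1.
            return result.nextBeginOffset;
        }
        return currentOffset + 1; // normal one-by-one advance
    }

    public static void main(String[] args) {
        // Values taken from the log lines above.
        GetMessageResult tooSmall =
                new GetMessageResult("OFFSET_TOO_SMALL", 173909497L);
        System.out.println(nextOffset(170910032L, tooSmall)); // prints 173909497

        GetMessageResult found = new GetMessageResult("FOUND", 0L);
        System.out.println(nextOffset(170910033L, found));    // prints 170910034
    }
}
```

With this check, the roughly three million deleted offsets in the logged example (170910032 to 173909497) would be skipped in a single step rather than polled individually.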
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: [email protected] For queries about this service, please contact Infrastructure at: [email protected]
