heshiyingx opened a new issue, #8767: URL: https://github.com/apache/rocketmq/issues/8767
### Before Creating the Bug Report

- [X] I found a bug, not just asking a question, which should be created in [GitHub Discussions](https://github.com/apache/rocketmq/discussions).
- [X] I have searched the [GitHub Issues](https://github.com/apache/rocketmq/issues) and [GitHub Discussions](https://github.com/apache/rocketmq/discussions) of this repository and believe that this is not a duplicate.
- [X] I have confirmed that this bug belongs to the current repository, not other repositories of RocketMQ.

### Runtime platform environment

macOS 14.7 (Intel), Docker, using the apache/rocketmq:5.3.0 image. The complete docker-compose setup, including the configuration files, is attached: [docker_compose.zip](https://github.com/user-attachments/files/17171947/docker_compose.zip)

### RocketMQ version

The official image, apache/rocketmq:5.3.0

### JDK Version

openjdk version "1.8.0_422"
OpenJDK Runtime Environment (Temurin)(build 1.8.0_422-b05)
OpenJDK 64-Bit Server VM (Temurin)(build 25.422-b05, mixed mode)

### Describe the Bug

In a 2-master-2-slave cluster, create a topic and produce messages continuously. Multiple consumers only ever consume the messages stored on one of the brokers; the messages on the other broker are never consumed.

### Steps to Reproduce

- Take the docker-compose setup from [docker_compose.zip](https://github.com/user-attachments/files/17171947/docker_compose.zip) and adjust the mount paths so the configuration files map correctly into the Docker containers.
- Set `brokerIP1` in the broker configuration files to the host machine's IP.
- Start the cluster with docker-compose.
- Enter one of the containers and create a topic with `sh mqadmin updateTopic -n <nameserver_address> -t <topic_name> -c <cluster_name> -a +message.type=NORMAL`.
- Produce messages with the SDK from `https://github.com/apache/rocketmq-clients/archive/refs/tags/golang/v5.1.0-rc.1.zip`, then start a consumer based on the simple_consumer example from the same SDK, changing only the Endpoint and topic (minimal sketches of both sides are included at the end of this report).
- Observe that only part of the messages are consumed. On investigation, every consumed message belongs to one broker, and every unconsumed message belongs to the other.

### What Did You Expect to See?

Consumers consume all messages in the cluster.

### What Did You See Instead?

Only part of the messages are consumed.

### Additional Context

_No response_
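For reference, a minimal sketch of the producer side, modeled on the Go SDK's producer example; `Endpoint`, `Topic`, and the empty credentials below are placeholders (assuming ACL is disabled in the local cluster), not values taken from the report:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strconv"

	rmq_client "github.com/apache/rocketmq-clients/golang/v5"
	"github.com/apache/rocketmq-clients/golang/v5/credentials"
)

const (
	Endpoint = "<host_ip>:8081" // proxy endpoint (placeholder)
	Topic    = "TestTopic"      // placeholder topic name
)

func main() {
	// Create a producer bound to the test topic; credentials are left
	// empty on the assumption that authentication is disabled.
	producer, err := rmq_client.NewProducer(&rmq_client.Config{
		Endpoint:    Endpoint,
		Credentials: &credentials.SessionCredentials{},
	},
		rmq_client.WithTopics(Topic),
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := producer.Start(); err != nil {
		log.Fatal(err)
	}
	defer producer.GracefulStop()

	// Keep producing NORMAL messages; with two masters, the queues of
	// both brokers should each receive a share of them.
	for i := 0; i < 100; i++ {
		msg := &rmq_client.Message{
			Topic: Topic,
			Body:  []byte("test message " + strconv.Itoa(i)),
		}
		receipts, err := producer.Send(context.TODO(), msg)
		if err != nil {
			log.Fatal(err)
		}
		for _, r := range receipts {
			fmt.Printf("%#v\n", r)
		}
	}
}
```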

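And a minimal sketch of the consumer side, modeled on the SDK's simple_consumer example; `ConsumerGroup` and the other constants are again placeholders. In the failing setup, the loop below only ever prints messages originating from one of the two brokers:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	rmq_client "github.com/apache/rocketmq-clients/golang/v5"
	"github.com/apache/rocketmq-clients/golang/v5/credentials"
)

const (
	Endpoint      = "<host_ip>:8081" // proxy endpoint (placeholder)
	Topic         = "TestTopic"      // placeholder topic name
	ConsumerGroup = "TestGroup"      // placeholder group name
)

func main() {
	// Create a simple consumer subscribed to the whole topic.
	consumer, err := rmq_client.NewSimpleConsumer(&rmq_client.Config{
		Endpoint:      Endpoint,
		ConsumerGroup: ConsumerGroup,
		Credentials:   &credentials.SessionCredentials{},
	},
		rmq_client.WithAwaitDuration(5*time.Second),
		rmq_client.WithSubscriptionExpressions(map[string]*rmq_client.FilterExpression{
			Topic: rmq_client.SUB_ALL,
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := consumer.Start(); err != nil {
		log.Fatal(err)
	}
	defer consumer.GracefulStop()

	// Receive up to 16 messages at a time, ack each one, and repeat.
	for {
		msgs, err := consumer.Receive(context.TODO(), 16, 20*time.Second)
		if err != nil {
			fmt.Println(err)
			continue
		}
		for _, mv := range msgs {
			fmt.Println(mv)
			if err := consumer.Ack(context.TODO(), mv); err != nil {
				fmt.Println(err)
			}
		}
	}
}
```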