weihubeats opened a new issue, #8358:
URL: https://github.com/apache/rocketmq/issues/8358

   ### Before Creating the Bug Report
   
   - [X] I found a bug, not just asking a question, which should be created in 
[GitHub Discussions](https://github.com/apache/rocketmq/discussions).
   
   - [X] I have searched the [GitHub 
Issues](https://github.com/apache/rocketmq/issues) and [GitHub 
Discussions](https://github.com/apache/rocketmq/discussions)  of this 
repository and believe that this is not a duplicate.
   
   - [X] I have confirmed that this bug belongs to the current repository, not 
other repositories of RocketMQ.
   
   
   ### Runtime platform environment
   
   Ubuntu 22.04.2 LTS
   
   ### RocketMQ version
   
   5.1.0
   
   ### JDK Version
   
   1.8
   
   ### Describe the Bug
   
   In clustered mode the client does not send heartbeats to all NameServers, resulting in frequent disconnections.
   
   The client periodically establishes connections with all NameServers:
   
![image](https://github.com/apache/rocketmq/assets/42484192/da9cf252-1599-4198-9899-2447a0362b25)
   
   The client does not send heartbeats to the NameServers on a schedule; instead it relies on idle read/write detection on each channel:
   
![image](https://github.com/apache/rocketmq/assets/42484192/2795213a-bf27-4a26-bc54-b023e5f51340)
   
   The client does periodically fetch topic metadata, which behaves somewhat like a heartbeat, but in clustered mode each fetch goes to only a single NameServer. The channels to the other NameServers therefore carry no traffic, trip the idle check, and are repeatedly disconnected and reconnected.
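   To make the failure mode above concrete, here is a minimal standalone sketch (not RocketMQ source code; the 120 s idle threshold and address list are illustrative assumptions): if the periodic metadata poll only ever touches one NameServer address, the channels to the remaining addresses accumulate idle time until they exceed the threshold that triggers the `NETTY CLIENT PIPELINE: IDLE exception` close.

   ```java
   import java.util.*;

   // Illustrative sketch, NOT RocketMQ internals: tracks last-activity time per
   // NameServer channel and reports which channels would trip an idle check.
   public class IdleChannelSketch {

       static final long IDLE_THRESHOLD_MS = 120_000; // hypothetical idle limit

       // Return the addresses whose channels have been idle longer than the threshold.
       static List<String> idleAddresses(Map<String, Long> lastActivity, long now) {
           List<String> idle = new ArrayList<>();
           for (Map.Entry<String, Long> e : lastActivity.entrySet()) {
               if (now - e.getValue() > IDLE_THRESHOLD_MS) {
                   idle.add(e.getKey());
               }
           }
           return idle;
       }

       public static void main(String[] args) {
           // Channels to all three NameServers are established at t = 0.
           Map<String, Long> lastActivity = new LinkedHashMap<>();
           for (String addr : new String[] {"127.0.0.1:9000", "127.0.0.1:9001", "127.0.0.1:9002"}) {
               lastActivity.put(addr, 0L);
           }

           // Ten metadata polls, 30 s apart, all hitting the same NameServer --
           // mirroring a clustered client whose route lookup uses one address.
           long now = 0;
           for (int i = 1; i <= 10; i++) {
               now = i * 30_000L;
               lastActivity.put("127.0.0.1:9000", now); // only this channel sees traffic
           }

           // The two untouched NameServers exceed the idle threshold.
           System.out.println("idle channels: " + idleAddresses(lastActivity, now));
       }
   }
   ```

   Under these assumptions, only the address chosen for the poll stays alive; the other two are exactly the channels that log the IDLE exception and get closed.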
   
   ### Steps to Reproduce
   
   ```java
   import java.time.LocalDateTime;
   import java.time.format.DateTimeFormatter;
   import java.util.concurrent.TimeUnit;
   
   import org.apache.commons.logging.Log;
   import org.apache.commons.logging.LogFactory;
   import org.apache.rocketmq.client.exception.MQClientException;
   import org.apache.rocketmq.client.producer.DefaultMQProducer;
   import org.apache.rocketmq.client.producer.SendResult;
   import org.apache.rocketmq.common.message.Message;
   import org.apache.rocketmq.remoting.common.RemotingHelper;
   import org.apache.rocketmq.remoting.protocol.RemotingSysResponseCode;
   
   public class LocalProducer {
   
       /**
        * The number of produced messages.
        */
       public static final int MESSAGE_COUNT = 100;
       public static final String PRODUCER_GROUP = "xiao-zou-topic-producer";
       
       public static final String DEFAULT_NAMESRVADDR = 
"127.0.0.1:9000;127.0.0.1:9001;127.0.0.1:9002";
       public static final String TOPIC = "xiao-zou-topic";
       public static final String TAG = "TagA";
       private static final Log log = LogFactory.getLog(LocalProducer.class);
   
       public static void main(String[] args) throws MQClientException, 
InterruptedException {
           /*
            * Instantiate with a producer group name.
            */
           DefaultMQProducer producer = new DefaultMQProducer(PRODUCER_GROUP, 
true);
           producer.setNamesrvAddr(DEFAULT_NAMESRVADDR);
           producer.addRetryResponseCode(RemotingSysResponseCode.SYSTEM_BUSY);
           producer.start();
   
           for (int i = 0; i < MESSAGE_COUNT; i++) {
               try {
                   Message msg = new Message(TOPIC /* Topic */,
                       TAG /* Tag */,
                       ("Hello xiaozou " + 
i).getBytes(RemotingHelper.DEFAULT_CHARSET) /* Message body */
                   );
   //                msg.setDelayTimeLevel(2);
                   SendResult sendResult = producer.send(msg, 5000);
                   DateTimeFormatter dtf2 = 
DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
                   System.out.printf("%s %s%n", sendResult, 
dtf2.format(LocalDateTime.now()));
                   TimeUnit.SECONDS.sleep(20);
               } catch (Exception e) {
                   e.printStackTrace();
                   Thread.sleep(1000);
               }
           }
           producer.shutdown();
       }
   }
   ```
   
   
![image](https://github.com/apache/rocketmq/assets/42484192/1bbe9dfc-18e2-43d2-8272-af5af1d948f7)
   
   
   ### What Did You Expect to See?
   
   No `NETTY CLIENT PIPELINE: IDLE exception` errors.
   
   ### What Did You See Instead?
   
   NETTY CLIENT PIPELINE: IDLE exception [127.0.0.1:9000]
   
   ### Additional Context
   
   _No response_

