Git-Yang opened a new issue #697:
URL: https://github.com/apache/rocketmq-externals/issues/697


   **BUG REPORT**
   
   1. Please describe the issue you observed:
   **Source Cluster:**
   
![image](https://user-images.githubusercontent.com/30995057/112601080-f8565c00-8e4c-11eb-941f-50f4c91c23ec.png)
   **Target Cluster:**
   
![image](https://user-images.githubusercontent.com/30995057/112601064-f2f91180-8e4c-11eb-8e6e-5a55c3f3a43a.png)
   
   **Condition:**
   a. Two worker nodes
   b. The number of concurrent tasks is 6
   c. The task was restarted multiple times during the synchronization process.
   
   **Result:**
   The number of queue messages in the source cluster and the target cluster is 
inconsistent.
   
   
![image](https://user-images.githubusercontent.com/30995057/112601902-f93bbd80-8e4d-11eb-8c8f-e17d8529d845.png)
   **Analysis:**
   When positions are synchronized between multiple workers, a worker directly overwrites its local data whenever it finds an inconsistent position, so the latest local position may be overwritten by stale data from another worker.
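   The race can be sketched as follows with a hypothetical in-memory position store keyed by queue (`PositionStore`, `overwriteFromRemote`, and `mergeFromRemote` are illustrative names, not the actual rocketmq-replicator classes): overwriting unconditionally lets a stale snapshot from a restarted peer move a position backwards, while a merge that only moves positions forward does not.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory position store; names are illustrative,
// not the actual rocketmq-replicator classes.
class PositionStore {
    private final Map<String, Long> positions = new ConcurrentHashMap<>();

    void updateLocal(String queue, long offset) {
        positions.put(queue, offset);
    }

    // The buggy behavior described above: remote data overwrites local data
    // whenever they differ, so a stale offset can replace a newer one.
    void overwriteFromRemote(Map<String, Long> remote) {
        positions.putAll(remote);
    }

    // Defensive merge: only ever move a position forward.
    void mergeFromRemote(Map<String, Long> remote) {
        remote.forEach((queue, offset) ->
            positions.merge(queue, offset, Math::max));
    }

    Long get(String queue) {
        return positions.get(queue);
    }
}

public class PositionMergeDemo {
    public static void main(String[] args) {
        PositionStore store = new PositionStore();
        store.updateLocal("topicA-queue0", 120L); // latest local progress

        // A restarted peer publishes an older position snapshot.
        Map<String, Long> stale = Map.of("topicA-queue0", 80L);

        store.overwriteFromRemote(stale);
        System.out.println("overwrite: " + store.get("topicA-queue0")); // regresses to 80

        store.updateLocal("topicA-queue0", 120L);
        store.mergeFromRemote(stale);
        System.out.println("merge: " + store.get("topicA-queue0")); // stays at 120
    }
}
```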
   
   - What did you do (The steps to reproduce)?
   1. Each worker instance records the positions it has updated locally.
   2. When the current worker node synchronizes data with other worker nodes, only the locally updated positions are synchronized.
   
   - What did you expect to see?
   Each worker node sends only its locally updated positions to the topic, ensuring that other workers synchronize only the positions that need to be updated.
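   The expected delta-style sync could look roughly like this (a minimal sketch; `DeltaPositionSync` and its methods are hypothetical, not the actual Replicator API): each worker marks the positions it changed and publishes only those entries, rather than its full position snapshot.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of delta-based position sync. A worker remembers
// which queues it updated locally ("dirty" keys) and publishes only
// those entries to the position topic.
public class DeltaPositionSync {
    private final Map<String, Long> positions = new HashMap<>();
    private final Set<String> dirtyKeys = new HashSet<>();

    void updateLocal(String queue, long offset) {
        positions.put(queue, offset);
        dirtyKeys.add(queue); // remember that this worker changed it
    }

    // Build the payload to send to the position topic: only the entries
    // this worker updated since the last sync, then clear the dirty set.
    Map<String, Long> drainLocallyUpdated() {
        Map<String, Long> delta = new HashMap<>();
        for (String queue : dirtyKeys) {
            delta.put(queue, positions.get(queue));
        }
        dirtyKeys.clear();
        return delta;
    }

    public static void main(String[] args) {
        DeltaPositionSync sync = new DeltaPositionSync();
        sync.updateLocal("topicA-queue0", 120L);
        System.out.println("payload: " + sync.drainLocallyUpdated());
        // A second sync with no local updates sends nothing.
        System.out.println("payload: " + sync.drainLocallyUpdated());
    }
}
```

   Because a peer only ever receives positions the sender actually advanced, positions the receiver updated itself are never clobbered by a full (and possibly stale) snapshot.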
   
   - What did you see instead?
   Each worker node consumes the full position data of the other worker nodes from the topic, including stale positions.
   
   2. Please tell us about your environment:
   OpenJDK 1.8
   CentOS 7.3
   
   3. Other information (e.g. detailed explanation, logs, related issues, 
suggestions how to fix, etc):
   Based on the changes in #693 and #695.

