LetLetMe opened a new issue, #8058:
URL: https://github.com/apache/rocketmq/issues/8058

   ### Before Creating the Enhancement Request
   
   - [X] I have confirmed that this should be classified as an enhancement rather than a bug/feature.
   
   
   ### Summary
   
   
   For our earlier million-queue requirement, we used RocksDB to store metadata and consume queue (CQ) information. In the previous JSON-based metadata there was a dataVersion field that recorded the data version: it was updated in sync with every metadata change, persisted, and reloaded the next time the broker started. The RocksDB-based implementation, however, appears to have overlooked this.
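   For context, here is a minimal sketch of the dataVersion behaviour described above: a counter/timestamp that is bumped on every metadata update, persisted alongside the configs, and reloaded on restart. The SimpleDataVersion class below is a simplified stand-in for RocketMQ's DataVersion, not the actual class (the real one carries additional fields and is serialized to JSON together with the topic/group configs).

```java
import java.util.concurrent.atomic.AtomicLong;

// Simplified stand-in for RocketMQ's DataVersion, only to illustrate the
// behaviour described above; not the real implementation.
public class SimpleDataVersion {

    private final AtomicLong counter = new AtomicLong(0);
    private volatile long timestamp = System.currentTimeMillis();

    // Bumped in sync with every metadata update.
    public void nextVersion() {
        counter.incrementAndGet();
        timestamp = System.currentTimeMillis();
    }

    // Persisted with the metadata and reloaded when the broker restarts,
    // so a newer copy of the config can be told apart from an older one.
    public String toJson() {
        return "{\"counter\":" + counter.get() + ",\"timestamp\":" + timestamp + "}";
    }

    public static void main(String[] args) {
        SimpleDataVersion dataVersion = new SimpleDataVersion();
        dataVersion.nextVersion();   // e.g. after a topic config update
        System.out.println(dataVersion.toJson());
    }
}
```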
   
   ### Motivation
   
   The RMQ commercial version now needs to align its implementation with the open-source version. When migrating data, we found that the open-source version does not record the DataVersion, which made the migration less smooth.
   
   ### Describe the Solution You'd Like
   
   
   In the current RocksDB solution, topics and groups each use a separate directory, with the data stored in the default column family of the corresponding directory. I have added an extra column family for each of them, named topicDataVersion and groupDataVersion respectively, to store the data version.
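   As a rough illustration of the proposal (not the actual patch), the sketch below opens the topic-config RocksDB instance with the existing default column family plus an extra topicDataVersion column family, and persists the serialized data version there under a fixed key. The path, key name, and JSON payloads are placeholders.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class TopicDataVersionSketch {

    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();

        // Placeholder path; the broker derives the real one from its store config.
        String topicDbPath = "/tmp/rocksdb-topic-config";

        // Existing "default" column family plus the proposed "topicDataVersion" one.
        List<ColumnFamilyDescriptor> descriptors = Arrays.asList(
                new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY, new ColumnFamilyOptions()),
                new ColumnFamilyDescriptor("topicDataVersion".getBytes(StandardCharsets.UTF_8),
                        new ColumnFamilyOptions()));
        List<ColumnFamilyHandle> handles = new ArrayList<>();

        try (DBOptions options = new DBOptions()
                .setCreateIfMissing(true)
                .setCreateMissingColumnFamilies(true);
             RocksDB db = RocksDB.open(options, topicDbPath, descriptors, handles)) {

            ColumnFamilyHandle defaultCf = handles.get(0);      // topic configs (existing behaviour)
            ColumnFamilyHandle dataVersionCf = handles.get(1);  // proposed data-version column family

            // Existing behaviour: topic config keyed by topic name in the default column family.
            db.put(defaultCf, "TestTopic".getBytes(StandardCharsets.UTF_8),
                    "{\"topicName\":\"TestTopic\",\"readQueueNums\":8}".getBytes(StandardCharsets.UTF_8));

            // Proposed addition: persist the serialized data version under a fixed key
            // whenever the topic metadata changes.
            byte[] key = "dataVersion".getBytes(StandardCharsets.UTF_8);
            db.put(dataVersionCf, key,
                    "{\"counter\":42,\"timestamp\":1715000000000}".getBytes(StandardCharsets.UTF_8));

            // On broker restart (or during migration) the stored version can be read back.
            byte[] loaded = db.get(dataVersionCf, key);
            System.out.println(new String(loaded, StandardCharsets.UTF_8));

            for (ColumnFamilyHandle handle : handles) {
                handle.close();
            }
        }
    }
}
```

   The same pattern would apply to the group directory with a groupDataVersion column family.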
   
   ### Describe Alternatives You've Considered
   
   None.
   
   ### Additional Context
   
   Related issue: #7064

