RongtongJin opened a new issue #568:
URL: https://github.com/apache/rocketmq-externals/issues/568


   The current rebalancing process of Apache RocketMQ in cluster mode is as 
follows:
   
   Each consumer instance starts a rebalance service thread, which periodically 
obtains the list of consumer instances in the group from the broker, sorts the 
message queues under the topic and the consumer instances in the group, and 
then applies an allocation strategy to calculate the message queues to pull. 
For details, please refer to this 
[document](https://github.com/apache/rocketmq/blob/master/docs/en/Design_LoadBlancing.md#2-consumer-load-balancing).
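
   For illustration, the sort-then-allocate step above can be sketched roughly like this. This is a simplified sketch loosely modeled on RocketMQ's built-in `AllocateMessageQueueAveragely` strategy; the class and method names here are illustrative, not the real client API:

   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.List;

   // Simplified sketch of client-side average allocation, loosely modeled on
   // RocketMQ's AllocateMessageQueueAveragely strategy (names illustrative).
   public class AverageAllocator {

       // Returns the queues assigned to the consumer at position `index` in the
       // sorted consumer-id list. Every consumer runs the same deterministic
       // calculation, so consistent input views yield a consistent allocation.
       static List<String> allocate(List<String> sortedQueues, int consumerCount, int index) {
           int total = sortedQueues.size();
           int mod = total % consumerCount;
           // Consumers with index < mod receive one extra queue.
           int size = total / consumerCount + (index < mod ? 1 : 0);
           int start = index * (total / consumerCount) + Math.min(index, mod);
           List<String> result = new ArrayList<>();
           for (int i = 0; i < size && start + i < total; i++) {
               result.add(sortedQueues.get(start + i));
           }
           return result;
       }

       public static void main(String[] args) {
           List<String> queues = Arrays.asList("q0", "q1", "q2", "q3", "q4");
           // Two consumers split five queues: 3 for the first, 2 for the second.
           System.out.println(allocate(queues, 2, 0)); // [q0, q1, q2]
           System.out.println(allocate(queues, 2, 1)); // [q3, q4]
       }
   }
   ```

   Because each consumer computes only its own slice independently, any instance working from a stale queue or consumer list produces overlapping or missing assignments, which is the inconsistency problem described below.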
   
   There are some problems with this architecture:
   - In some scenarios, the views of queues and consumer instances obtained by 
different consumers are inconsistent, which confuses the consumer load
   - The load-balancing algorithm lacks stickiness, so adding or removing a 
queue or consumer causes a large amount of reassignment overhead
   
   This topic aims to optimize the Apache RocketMQ rebalancing architecture, 
including:
   
   1. Move the rebalancing calculation to the broker, and have the client 
request the allocation result from the broker.
   2. Provide a sticky rebalancing algorithm for Apache RocketMQ that keeps the 
new assignment as close as possible to the previous one while still spreading 
queues evenly.
   3. Implement a rebalance evaluation method in OpenMessaging-Chaos.
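
   For point 2, a sticky allocator could, for example, keep each queue with its previous owner and move only the minimum number of queues needed to restore balance. A hypothetical sketch, not RocketMQ's actual API (all names are illustrative):

   ```java
   import java.util.ArrayList;
   import java.util.Arrays;
   import java.util.Collections;
   import java.util.Comparator;
   import java.util.HashMap;
   import java.util.LinkedHashMap;
   import java.util.List;
   import java.util.Map;

   // Illustrative sketch of sticky allocation: a queue stays with its previous
   // owner when that owner is still alive, and only orphaned or excess queues
   // move. Not RocketMQ's real API; all names here are hypothetical.
   public class StickyAllocator {

       static Map<String, List<String>> allocate(List<String> queues,
                                                 List<String> consumers,
                                                 Map<String, String> previousOwner) {
           Map<String, List<String>> result = new LinkedHashMap<>();
           for (String c : consumers) result.put(c, new ArrayList<>());

           // Phase 1: keep every queue with its previous owner if still present.
           List<String> orphans = new ArrayList<>();
           for (String q : queues) {
               String owner = previousOwner.get(q);
               if (owner != null && result.containsKey(owner)) result.get(owner).add(q);
               else orphans.add(q);
           }
           // Phase 2: hand orphaned queues to the least-loaded consumers.
           for (String q : orphans) result.get(leastLoaded(result)).add(q);

           // Phase 3: move queues from the most- to the least-loaded consumer
           // until loads differ by at most one; only these queues change owner.
           while (true) {
               List<String> max = result.get(mostLoaded(result));
               List<String> min = result.get(leastLoaded(result));
               if (max.size() - min.size() <= 1) break;
               min.add(max.remove(max.size() - 1));
           }
           return result;
       }

       static String leastLoaded(Map<String, List<String>> m) {
           return Collections.min(m.keySet(), Comparator.comparingInt(c -> m.get(c).size()));
       }

       static String mostLoaded(Map<String, List<String>> m) {
           return Collections.max(m.keySet(), Comparator.comparingInt(c -> m.get(c).size()));
       }

       public static void main(String[] args) {
           Map<String, String> prev = new HashMap<>();
           prev.put("q0", "c1"); prev.put("q1", "c1");
           prev.put("q2", "c2"); prev.put("q3", "c2");
           // c3 joins: exactly one queue moves, the other three stay put.
           System.out.println(allocate(Arrays.asList("q0", "q1", "q2", "q3"),
                                       Arrays.asList("c1", "c2", "c3"), prev));
           // {c1=[q0], c2=[q2, q3], c3=[q1]}
       }
   }
   ```

   Compared with the average strategy, a consumer joining or leaving here triggers at most the handful of queue moves needed to restore balance, rather than reshuffling every assignment.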
   
   

