Tests show that Kafka's million-level TPS is owed mainly to batching: when the 
batch size is set to 1, the TPS drops by an order of magnitude. So I am trying 
to add this feature to RocketMQ.
 
https://github.com/apache/incubator-rocketmq/pull/53



 
Original intention
 
Batching is not about packaging but about improving the performance of small 
messages. The messages of the same batch should therefore play the same role, 
and no extra effort should be taken to split the batch.
 
Not splitting has another important advantage: the messages of the same batch 
can be sent atomically, that is, all succeed or all fail, the importance of 
which is self-evident.
 
So performance and atomicity are the original intentions, and they are 
reflected in the usage constraints below.



 
How it works
 
To keep the effort minimal, it works as follows (a usage sketch follows this 
list):

Only add synchronous send functions to the MQProducer interface, such as 
send(final Collection msgs).

Use MessageBatch, which extends Message and implements Iterable<Message>.

Use a byte buffer instead of a list of objects to avoid excessive GC in the 
Broker.

Split the decode and encode logic out of lockForPutMessage to avoid too many 
race conditions.
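
For illustration, here is a minimal sketch of how a client might use the new 
interface. It assumes the org.apache.rocketmq package layout of the incubator 
repository; the group name, topic, and name-server address are placeholders.

import java.util.ArrayList;
import java.util.List;

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class BatchSendExample {
    public static void main(String[] args) throws Exception {
        // Placeholder group name and name-server address.
        DefaultMQProducer producer = new DefaultMQProducer("batch_producer_group");
        producer.setNamesrvAddr("localhost:9876");
        producer.start();

        // All messages share the same topic, as the constraints below require.
        List<Message> batch = new ArrayList<Message>();
        for (int i = 0; i < 50; i++) {
            batch.add(new Message("BatchTopic", ("msg-" + i).getBytes()));
        }

        // One synchronous call; the batch is stored all-or-nothing.
        SendResult result = producer.send(batch);
        System.out.println(result);

        producer.shutdown();
    }
}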
  
Usage constraints
 
Messages of the same batch should have (see the validation sketch after this 
list):
 
1. The same topic: if they belong to different topics (internally, different 
queues), they may be sent to different brokers, which would go against 
atomicity.
 
2. The same waitStoreMsgOK: likewise, differences here would go against 
atomicity.
 
3. No delay level: if we cared about the delay level, we would need to decode 
the internal properties of every message, which would cost much performance.
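
To make the constraints concrete, here is a hypothetical validation helper; 
the actual checks in the PR may be organized differently. It only uses 
accessors that already exist on Message (getTopic, isWaitStoreMsgOK, 
getDelayTimeLevel).

import java.util.Collection;

import org.apache.rocketmq.common.message.Message;

public final class BatchValidator {

    // Throws if the collection violates any of the three constraints above.
    public static void check(Collection<Message> msgs) {
        if (msgs == null || msgs.isEmpty()) {
            throw new IllegalArgumentException("the batch is empty");
        }
        Message first = null;
        for (Message msg : msgs) {
            // Constraint 3: a delay level would force decoding the internal
            // properties of every message, so reject it outright.
            if (msg.getDelayTimeLevel() > 0) {
                throw new IllegalArgumentException("delay level is not supported in batch");
            }
            if (first == null) {
                first = msg;
                continue;
            }
            // Constraint 1: one topic, so the batch maps to one queue/broker.
            if (!first.getTopic().equals(msg.getTopic())) {
                throw new IllegalArgumentException("messages must share the same topic");
            }
            // Constraint 2: one waitStoreMsgOK, so the result is all-or-nothing.
            if (first.isWaitStoreMsgOK() != msg.isWaitStoreMsgOK()) {
                throw new IllegalArgumentException("messages must share the same waitStoreMsgOK");
            }
        }
    }
}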



 
Performance Tests:
 
On Linux with 24 cores, 48G RAM, and an SSD, using 50 threads to send messages 
with 50-byte bodies in batches of 50, we get about 1.5 million TPS until the 
disk is full.
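
For reference, a rough sketch of the kind of load generator behind these 
numbers: 50 threads, 50-byte bodies, batches of 50. It is illustrative only; 
the group and topic names are placeholders and the real benchmark code may 
differ.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.common.message.Message;

public class BatchBenchmark {
    public static void main(String[] args) throws Exception {
        final DefaultMQProducer producer = new DefaultMQProducer("batch_bench_group");
        producer.setNamesrvAddr("localhost:9876"); // placeholder
        producer.start();

        final byte[] body = new byte[50]; // 50-byte message body
        final AtomicLong sent = new AtomicLong();

        // 50 sender threads, each sending batches of 50 messages.
        for (int t = 0; t < 50; t++) {
            new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        List<Message> batch = new ArrayList<Message>(50);
                        for (int i = 0; i < 50; i++) {
                            batch.add(new Message("BatchTopic", body));
                        }
                        try {
                            producer.send(batch);
                            sent.addAndGet(50);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
            }).start();
        }

        // Report TPS once per second.
        long last = 0;
        while (true) {
            Thread.sleep(1000);
            long now = sent.get();
            System.out.println("TPS: " + (now - last));
            last = now;
        }
    }
}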



 
Potential problems:
 
Although messages can be accumulated in the Broker very quickly, it takes time 
to dispatch them to the consume queue, which is much slower than accepting 
messages. So the messages may not be consumable immediately.
 
We may need to refactor the ReputMessageService to solve this problem.
 
Please feel free to reach out with any questions.







Best Regards

dongeforever
