[
https://issues.apache.org/jira/browse/ROCKETMQ-80?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15871269#comment-15871269
]
ASF GitHub Bot commented on ROCKETMQ-80:
----------------------------------------
Github user dongeforever commented on the issue:
https://github.com/apache/incubator-rocketmq/pull/53
@Jaskey There is currently no batch id, but the messages in one batch are
sent to the same queue, and a batch either succeeds or fails as a whole.
You could check the code or test it.
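The all-or-nothing behavior described above can be illustrated with a toy sketch (hypothetical names, not the actual client code): the queue is selected once per batch, so every message in the batch lands on the same queue.

```java
import java.util.*;

// Toy sketch (hypothetical types, not the real RocketMQ client) of the
// guarantee above: a batch picks ONE queue up front, so every message in
// it lands on the same queue and the send succeeds or fails as a unit.
public class BatchRouting {
    static final int QUEUE_COUNT = 4;
    static final Random RAND = new Random();

    // One queue index chosen per batch, not per message.
    static int selectQueue() {
        return RAND.nextInt(QUEUE_COUNT);
    }

    static Map<Integer, List<String>> sendBatch(List<String> msgs) {
        int queueId = selectQueue(); // single selection for the whole batch
        Map<Integer, List<String>> queues = new HashMap<>();
        queues.computeIfAbsent(queueId, k -> new ArrayList<>()).addAll(msgs);
        return queues;
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> result =
                sendBatch(Arrays.asList("m1", "m2", "m3"));
        System.out.println("queues used: " + result.size()); // always 1
    }
}
```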
> Add batch feature
> -----------------
>
> Key: ROCKETMQ-80
> URL: https://issues.apache.org/jira/browse/ROCKETMQ-80
> Project: Apache RocketMQ
> Issue Type: New Feature
> Affects Versions: 4.1.0-incubating
> Reporter: dongeforever
> Assignee: dongeforever
> Fix For: 4.1.0-incubating
>
>
> Tests show that Kafka's million-level TPS is mainly due to batching: when
> the batch size is set to 1, TPS drops by an order of magnitude. So I am
> trying to add this feature to RocketMQ.
> To keep the change minimal, it works as follows:
> Only synchronous send methods are added to the MQProducer interface, e.g.
> send(final Collection msgs).
> A MessageBatch class extends Message and implements Iterable<Message>.
> A byte buffer is used instead of a list of objects to avoid excessive GC in
> the Broker.
> The decode and encode logic is split out of lockForPutMessage to reduce
> race conditions.
> Tests:
> On Linux with 24 cores, 48 GB RAM, and an SSD, using 50 threads to send
> messages with 50-byte bodies in batches of 50, we get about 1.5 million TPS
> until the disk is full.
> Potential problems:
> Although messages can be accumulated in the Broker very quickly, dispatching
> them to the consume queue takes time and is much slower than accepting
> messages, so messages may not be consumable immediately.
> We may need to refactor the ReputMessageService to solve this problem.
> If anyone has ideas, please let me know or share them in this issue.
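The MessageBatch/byte-buffer design in the description above could be sketched roughly as follows (hypothetical class name MessageBatchSketch and a simplified length-prefixed layout, not RocketMQ's actual wire format): bodies are pre-encoded into a single ByteBuffer so the broker can handle the whole batch without allocating per-message objects.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.*;

// Hedged sketch of the MessageBatch idea (simplified layout, not the real
// wire format): messages are pre-encoded into one ByteBuffer, and the batch
// is iterable over its message bodies.
public class MessageBatchSketch implements Iterable<byte[]> {
    private final List<byte[]> bodies = new ArrayList<>();

    public void add(String body) {
        bodies.add(body.getBytes(StandardCharsets.UTF_8));
    }

    // Length-prefixed encoding: [int length][body bytes] per message.
    public ByteBuffer encode() {
        int total = 0;
        for (byte[] b : bodies) total += 4 + b.length;
        ByteBuffer buf = ByteBuffer.allocate(total);
        for (byte[] b : bodies) {
            buf.putInt(b.length);
            buf.put(b);
        }
        buf.flip();
        return buf;
    }

    // Decode side, run after the buffer has been accepted.
    public static List<String> decode(ByteBuffer buf) {
        List<String> out = new ArrayList<>();
        while (buf.hasRemaining()) {
            byte[] b = new byte[buf.getInt()];
            buf.get(b);
            out.add(new String(b, StandardCharsets.UTF_8));
        }
        return out;
    }

    @Override
    public Iterator<byte[]> iterator() {
        return bodies.iterator();
    }
}
```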
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)