This is an automated email from the ASF dual-hosted git repository.

dinglei pushed a commit to branch new-official-website
in repository https://gitbox.apache.org/repos/asf/rocketmq-site.git


The following commit(s) were added to refs/heads/new-official-website by this 
push:
     new 6749b40c4 [ISSUE #290] Add Basic Best Practice Section in the v4.x & 
5.0 Document(CN -> EN) (#331)
6749b40c4 is described below

commit 6749b40c4143538723cd3547dc3ec6252d7b3b09
Author: Oliver <[email protected]>
AuthorDate: Thu Oct 27 14:47:54 2022 +0800

    [ISSUE #290] Add Basic Best Practice Section in the v4.x & 5.0 Document(CN 
-> EN) (#331)
    
    * [ISSUE #290] Add Basic Best Practice Section in the v4.x & 5.0 
Document(CN -> EN)
    
    * update link
---
 .../current/05-bestPractice/15bestpractice.md      | 476 +++++++++++----------
 .../version-5.0/06-bestPractice/15bestpractice.md  | 222 +++++-----
 2 files changed, 366 insertions(+), 332 deletions(-)

diff --git 
a/i18n/en/docusaurus-plugin-content-docs/current/05-bestPractice/15bestpractice.md
 
b/i18n/en/docusaurus-plugin-content-docs/current/05-bestPractice/15bestpractice.md
index 02d48fe20..fe38a4be4 100644
--- 
a/i18n/en/docusaurus-plugin-content-docs/current/05-bestPractice/15bestpractice.md
+++ 
b/i18n/en/docusaurus-plugin-content-docs/current/05-bestPractice/15bestpractice.md
@@ -1,228 +1,248 @@
-# 基本最佳实践
-
-## 生产者
-
-###  发送消息注意事项
-
-#### Tags的使用
-
-一个应用尽可能用一个Topic,而消息子类型则可以用tags来标识。tags可以由应用自由设置,只有生产者在发送消息设置了tags,消费方在订阅消息时才可以利用tags通过broker做消息过滤:message.setTags("TagA")。
  
-
-#### Keys的使用
-
-每个消息在业务层面的唯一标识码要设置到keys字段,方便将来定位消息丢失问题。服务器会为每个消息创建索引(哈希索引),应用可以通过topic、key来查询这条消息内容,以及消息被谁消费。由于是哈希索引,请务必保证key尽可能唯一,这样可以避免潜在的哈希冲突。
-
-
-```java
-   // 订单Id   
-   String orderId = "20034568923546";   
-   message.setKeys(orderId);   
-```
-#### 日志的打印
-
-消息发送成功或者失败要打印消息日志,务必要打印SendResult和key字段。send消息方法只要不抛异常,就代表发送成功。发送成功会有多个状态,在sendResult里定义。以下对每个状态进行说明:
     
-
-- **SEND_OK**
-
-消息发送成功。要注意的是消息发送成功也不意味着它是可靠的。要确保不会丢失任何消息,还应启用同步Master服务器或同步刷盘,即SYNC_MASTER或SYNC_FLUSH。
-
-
-- **FLUSH_DISK_TIMEOUT**
-
-消息发送成功但是服务器刷盘超时。此时消息已经进入服务器队列(内存),只有服务器宕机,消息才会丢失。消息存储配置参数中可以设置刷盘方式和同步刷盘时间长度,如果Broker服务器设置了刷盘方式为同步刷盘,即FlushDiskType=SYNC_FLUSH(默认为异步刷盘方式),当Broker服务器未在同步刷盘时间内(默认为5s)完成刷盘,则将返回该状态——刷盘超时。
-
-- **FLUSH_SLAVE_TIMEOUT**
-
-消息发送成功,但是服务器同步到Slave时超时。此时消息已经进入服务器队列,只有服务器宕机,消息才会丢失。如果Broker服务器的角色是同步Master,即SYNC_MASTER(默认是异步Master即ASYNC_MASTER),并且从Broker服务器未在同步刷盘时间(默认为5秒)内完成与主服务器的同步,则将返回该状态——数据同步到Slave服务器超时。
-
-- **SLAVE_NOT_AVAILABLE**
-
-消息发送成功,但是此时Slave不可用。此时消息已经进入Master服务器队列,只有Master服务器宕机,消息才会丢失。如果Broker服务器的角色是同步Master,即SYNC_MASTER(默认是异步Master服务器即ASYNC_MASTER),但没有配置slave
 Broker服务器,则将返回该状态——无Slave服务器可用。
-
-
-### 消息发送失败处理方式
-
-Producer的send方法本身支持内部重试,重试逻辑如下:
-
-- 至多重试2次(同步发送为2次,异步发送为0次)。
-- 如果发送失败,则轮转到下一个Broker。这个方法的总耗时时间不超过sendMsgTimeout设置的值,默认10s。
-- 如果本身向broker发送消息产生超时异常,就不会再重试。
-
-以上策略也是在一定程度上保证了消息可以发送成功。如果业务对消息可靠性要求比较高,建议应用增加相应的重试逻辑:比如调用send同步方法发送失败时,则尝试将消息存储到db,然后由后台线程定时重试,确保消息一定到达Broker。
-
-上述db重试方式为什么没有集成到MQ客户端内部做,而是要求应用自己去完成,主要基于以下几点考虑:首先,MQ的客户端设计为无状态模式,方便任意的水平扩展,且对机器资源的消耗仅仅是cpu、内存、网络。其次,如果MQ客户端内部集成一个KV存储模块,那么数据只有同步落盘才能较可靠,而同步落盘本身性能开销较大,所以通常会采用异步落盘,又由于应用关闭过程不受MQ运维人员控制,可能经常会发生
 kill -9 
这样暴力方式关闭,造成数据没有及时落盘而丢失。第三,Producer所在机器的可靠性较低,一般为虚拟机,不适合存储重要数据。综上,建议重试过程交由应用来控制。
-
-### 选择oneway形式发送
-通常消息的发送是这样一个过程:
-
-- 客户端发送请求到服务器
-- 服务器处理请求
-- 服务器向客户端返回应答
-
-所以,一次消息发送的耗时时间是上述三个步骤的总和,而某些场景要求耗时非常短,但是对可靠性要求并不高,例如日志收集类应用,此类应用可以采用oneway形式调用,oneway形式只发送请求不等待应答,而发送请求在客户端实现层面仅仅是一个操作系统系统调用的开销,即将数据写入客户端的socket缓冲区,此过程耗时通常在微秒级。
-
-## 客户端配置
-
-相对于RocketMQ的Broker集群,生产者和消费者都是客户端。本小节主要描述生产者和消费者公共的行为配置。
-
-### 客户端寻址方式
-
-RocketMQ可以令客户端找到Name Server, 然后通过Name 
Server再找到Broker。如下所示有多种配置方式,优先级由高到低,高优先级会覆盖低优先级。
-
-- 代码中指定Name Server地址,多个namesrv地址之间用分号分割   
-
-```java
-producer.setNamesrvAddr("192.168.0.1:9876;192.168.0.2:9876");  
-
-consumer.setNamesrvAddr("192.168.0.1:9876;192.168.0.2:9876");
-```
-- Java启动参数中指定Name Server地址
-
-```text
--Drocketmq.namesrv.addr=192.168.0.1:9876;192.168.0.2:9876  
-```
-- 环境变量指定Name Server地址
-
-```text
-export   NAMESRV_ADDR=192.168.0.1:9876;192.168.0.2:9876   
-```
-- HTTP静态服务器寻址(默认)
-
-客户端启动后,会定时访问一个静态HTTP服务器,地址如下:<http://jmenv.tbsite.net:8080/rocketmq/nsaddr>,这个URL的返回内容如下:
-
-```text
-192.168.0.1:9876;192.168.0.2:9876   
-```
-客户端默认每隔2分钟访问一次这个HTTP服务器,并更新本地的Name 
Server地址。URL已经在代码中硬编码,可通过修改/etc/hosts文件来改变要访问的服务器,例如在/etc/hosts增加如下配置:
-```text
-10.232.22.67    jmenv.taobao.net   
-```
-推荐使用HTTP静态服务器寻址方式,好处是客户端部署简单,且Name Server集群可以热升级。
-
-## 消费者
-
-### 消费过程幂等
-
-RocketMQ无法避免消息重复(Exactly-Once),所以如果业务对消费重复非常敏感,务必要在业务层面进行去重处理。可以借助关系数据库进行去重。首先需要确定消息的唯一键,可以是msgId,也可以是消息内容中的唯一标识字段,例如订单Id等。在消费之前判断唯一键是否在关系数据库中存在。如果不存在则插入,并消费,否则跳过。(实际过程要考虑原子性问题,判断是否存在可以尝试插入,如果报主键冲突,则插入失败,直接跳过)
-
-msgId一定是全局唯一标识符,但是实际使用中,可能会存在相同的消息有两个不同msgId的情况(消费者主动重发、因客户端重投机制导致的重复等),这种情况就需要使业务字段进行重复消费。
-
-### 消费速度慢的处理方式
-
-### 提高消费并行度
-
-绝大部分消息消费行为都属于 IO 密集型,即可能是操作数据库,或者调用 
RPC,这类消费行为的消费速度在于后端数据库或者外系统的吞吐量,通过增加消费并行度,可以提高总的消费吞吐量,但是并行度增加到一定程度,反而会下降。所以,应用必须要设置合理的并行度。
 如下有几种修改消费并行度的方法:
-
-- 同一个 ConsumerGroup 下,通过增加 Consumer 实例数量来提高并行度(需要注意的是超过订阅队列数的 Consumer 
实例无效)。可以通过加机器,或者在已有机器启动多个进程的方式。
-- 提高单个 Consumer 的消费并行线程,通过修改参数 consumeThreadMin、consumeThreadMax实现。
-
-### 批量方式消费
-
-某些业务流程如果支持批量方式消费,则可以很大程度上提高消费吞吐量,例如订单扣款类应用,一次处理一个订单耗时 1 s,一次处理 10 个订单可能也只耗时 2 
s,这样即可大幅度提高消费的吞吐量,通过设置 consumer的 consumeMessageBatchMaxSize 返个参数,默认是 
1,即一次只消费一条消息,例如设置为 N,那么每次消费的消息数小于等于 N。
-
-### 跳过非重要消息
-
-发生消息堆积时,如果消费速度一直追不上发送速度,如果业务对数据要求不高的话,可以选择丢弃不重要的消息。例如,当某个队列的消息数堆积到100000条以上,则尝试丢弃部分或全部消息,这样就可以快速追上发送消息的速度。示例代码如下:
-
-```java
-    public ConsumeConcurrentlyStatus consumeMessage(
-            List<MessageExt> msgs,
-            ConsumeConcurrentlyContext context) {
-        long offset = msgs.get(0).getQueueOffset();
-        String maxOffset =
-                msgs.get(0).getProperty(Message.PROPERTY_MAX_OFFSET);
-        long diff = Long.parseLong(maxOffset) - offset;
-        if (diff > 100000) {
-            // TODO 消息堆积情况的特殊处理
-            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
-        }
-        // TODO 正常消费过程
-        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
-    }    
-```
-
-
-#### 优化每条消息消费过程     
-
-举例如下,某条消息的消费过程如下:
-
-- 根据消息从 DB 查询【数据 1】
-- 根据消息从 DB 查询【数据 2】
-- 复杂的业务计算
-- 向 DB 插入【数据 3】
-- 向 DB 插入【数据 4】
-
-这条消息的消费过程中有4次与 DB的 交互,如果按照每次 5ms 计算,那么总共耗时 20ms,假设业务计算耗时 5ms,那么总过耗时 
25ms,所以如果能把 4 次 DB 交互优化为 2 次,那么总耗时就可以优化到 15ms,即总体性能提高了 
40%。所以应用如果对时延敏感的话,可以把DB部署在SSD硬盘,相比于SCSI磁盘,前者的RT会小很多。
-
-### 消费打印日志
-
-如果消息量较少,建议在消费入口方法打印消息,消费耗时等,方便后续排查问题。
-
-
-```java
-   public ConsumeConcurrentlyStatus consumeMessage(
-            List<MessageExt> msgs,
-            ConsumeConcurrentlyContext context) {
-        log.info("RECEIVE_MSG_BEGIN: " + msgs.toString());
-        // TODO 正常消费过程
-        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
-    }   
-```
-
-如果能打印每条消息消费耗时,那么在排查消费慢等线上问题时,会更方便。
-
-### 其他消费建议
-
-#### 关于消费者和订阅
-
-第一件需要注意的事情是,不同的消费者组可以独立的消费一些 topic,并且每个消费者组都有自己的消费偏移量,请确保同一组内的每个消费者订阅信息保持一致。
-
-#### 关于有序消息
-
-消费者将锁定每个消息队列,以确保他们被逐个消费,虽然这将会导致性能下降,但是当你关心消息顺序的时候会很有用。我们不建议抛出异常,你可以返回 
ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT 作为替代。
-
-#### 关于并发消费
-
-顾名思义,消费者将并发消费这些消息,建议你使用它来获得良好性能,我们不建议抛出异常,你可以返回 
ConsumeConcurrentlyStatus.RECONSUME_LATER 作为替代。
-
-#### 关于消费状态Consume Status
-
-对于并发的消费监听器,你可以返回 RECONSUME_LATER 
来通知消费者现在不能消费这条消息,并且希望可以稍后重新消费它。然后,你可以继续消费其他消息。对于有序的消息监听器,因为你关心它的顺序,所以不能跳过消息,但是你可以返回SUSPEND_CURRENT_QUEUE_A_MOMENT
 告诉消费者等待片刻。
-
-#### 关于Blocking
-
-不建议阻塞监听器,因为它会阻塞线程池,并最终可能会终止消费进程
-
-#### 关于线程数设置     
-
-消费者使用 ThreadPoolExecutor 在内部对消息进行消费,所以你可以通过设置 setConsumeThreadMin 或 
setConsumeThreadMax 来改变它。
-
-####  关于消费位点
-
-当建立一个新的消费者组时,需要决定是否需要消费已经存在于 Broker 中的历史消息CONSUME_FROM_LAST_OFFSET 
将会忽略历史消息,并消费之后生成的任何消息。CONSUME_FROM_FIRST_OFFSET 将会消费每个存在于 Broker 中的信息。你也可以使用 
CONSUME_FROM_TIMESTAMP 来消费在指定时间戳后产生的消息。
-
-## Broker
-
-###  Broker 角色
-  Broker 角色分为 
ASYNC_MASTER(异步主机)、SYNC_MASTER(同步主机)以及SLAVE(从机)。如果对消息的可靠性要求比较严格,可以采用 
SYNC_MASTER加SLAVE的部署方式。如果对消息可靠性要求不高,可以采用ASYNC_MASTER加SLAVE的部署方式。如果只是测试方便,则可以选择仅ASYNC_MASTER或仅SYNC_MASTER的部署方式。
-### FlushDiskType
- SYNC_FLUSH(同步刷新)相比于ASYNC_FLUSH(异步处理)会损失很多性能,但是也更可靠,所以需要根据实际的业务场景做好权衡。
-### Broker 配置
-
-| 参数名                           | 默认值                        | 说明              
                                           |
-| -------------------------------- | ----------------------------- | 
------------------------------------------------------------ |
-| listenPort                    | 10911              | 接受客户端连接的监听端口 |
-| namesrvAddr       | null                         | nameServer 地址     |
-| brokerIP1 | 网卡的 InetAddress                         | 当前 broker 监听的 IP  |
-| brokerIP2 | 跟 brokerIP1 一样                         | 存在主从 broker 时,如果在 
broker 主节点上配置了 brokerIP2 属性,broker 从节点会连接主节点配置的 brokerIP2 进行同步  |
-| brokerName        | null                         | broker 的名称                
           |
-| brokerClusterName                     | DefaultCluster                  | 本 
broker 所属的 Cluser 名称           |
-| brokerId             | 0                              | broker id, 0 表示 
master, 其他的正整数表示 slave                                                 |
-| storePathCommitLog                      | $HOME/store/commitlog/             
                 | 存储 commit log 的路径                                            
    |
-| storePathConsumerQueue                   | $HOME/store/consumequeue/         
                     | 存储 consume queue 的路径                                     
         |
-| mapedFileSizeCommitLog     | 1024 * 1024 * 1024(1G) | commit log 的映射文件大小     
                                  |​ 
-| deleteWhen     | 04 | 在每天的什么时间删除已经超过文件保留时间的 commit log                       
                 |​ 
-| fileReserverdTime     | 72 | 以小时计算的文件保留时间                                    
    |​ 
-| brokerRole     | ASYNC_MASTER | SYNC_MASTER/ASYNC_MASTER/SLAVE               
                         |​ 
-| flushDiskType     | ASYNC_FLUSH | SYNC_FLUSH/ASYNC_FLUSH SYNC_FLUSH 模式下的 
broker 保证在收到确认生产者之前将消息刷盘。ASYNC_FLUSH 模式下的 broker 则利用刷盘一组消息的模式,可以取得更好的性能。        
                               |​
-
+# Basic Best Practices
+
+## Producer
+
+###  Precautions for sending messages
+
+#### The use of Tags
+
+An application should use a single Topic wherever possible, and message subtypes can be identified with tags. Tags can be set freely by the application. Only if the producer sets tags when sending messages can the consumer use tags to filter messages through the broker when subscribing: message.setTags("TagA").
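+
+A minimal sketch of tag-based filtering (assuming the standard 4.x producer and push-consumer APIs; the topic name and the consumer variable are illustrative):
+
+```java
+// Producer side: mark the message subtype with a tag
+Message msg = new Message("AppTopic", "TagA", "payload".getBytes(StandardCharsets.UTF_8));
+producer.send(msg);
+
+// Consumer side: let the broker filter, delivering only TagA or TagB messages
+consumer.subscribe("AppTopic", "TagA || TagB");
+```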
+
+#### The use of Keys
+
+Set the business-level unique identifier of each message in the keys field, to help locate message-loss problems later. The server creates a hash index for each message, and the application can query the content of a message by topic and key, as well as find out who consumed it. Since it is a hash index, make sure the key is as unique as possible to avoid potential hash collisions.
+
+
+```java
+   // order Id   
+   String orderId = "20034568923546";   
+   message.setKeys(orderId);   
+```
+#### Printing Logs
+
+Print a message log whether the message is sent successfully or fails, and be sure to include the SendResult and key fields. As long as the send method does not throw an exception, the send succeeded. A successful send can return several statuses, defined in SendResult. Each status is described below, followed by a short logging sketch:
+
+- **SEND_OK**
+
+The message was sent successfully. Note that a successful send does not by itself mean delivery is reliable. To ensure that no messages are lost, you should also enable a synchronous Master or synchronous flush, i.e. SYNC_MASTER or SYNC_FLUSH.
+
+
+- **FLUSH_DISK_TIMEOUT**
+
+The message was sent successfully, but the server timed out while flushing the disk. At this point the message has entered the server queue (memory), and it will be lost only if the server goes down. The message storage configuration lets you set the flush mode and the synchronous flush timeout: if the Broker is configured with FlushDiskType=SYNC_FLUSH (the default is asynchronous flush) and does not finish flushing within the synchronous flush timeout (5s by default), it returns this status, meaning the flush timed out.
+
+- **FLUSH_SLAVE_TIMEOUT**
+
+The message was sent successfully, but the server timed out while synchronizing it to the Slave. At this point the message has entered the server queue, and it will be lost only if the server goes down. If the Broker's role is SYNC_MASTER (the default is ASYNC_MASTER) and the slave Broker does not finish synchronizing with the master within the synchronization timeout (5 seconds by default), it returns this status, meaning data synchronization to the Slave server timed out.
+
+- **SLAVE_NOT_AVAILABLE**
+
+The message was sent successfully, but the Slave was unavailable at the time. At this point the message has entered the Master server queue, and it will be lost only if the Master goes down. If the Broker's role is SYNC_MASTER (the default is ASYNC_MASTER) but no slave Broker is configured, it returns this status, meaning no Slave server is available.
+
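+A minimal logging sketch tying these together (assuming the standard DefaultMQProducer API; the producer and log variables are assumed to exist):
+
+```java
+Message msg = new Message("TopicTest", "TagA", "Hello".getBytes(StandardCharsets.UTF_8));
+msg.setKeys("20034568923546");                 // business unique key
+SendResult sendResult = producer.send(msg);    // throws on failure
+// Always log the key together with the SendResult (msgId and SendStatus)
+log.info("SEND_MSG key={} result={}", msg.getKeys(), sendResult);
+```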
+
+### Handling message send failures
+
+The send method of Producer itself supports internal retry. The retry logic is 
as follows:
+
+- Retry at most twice (2 retries for synchronous send, 0 for asynchronous send).
+- If the send fails, rotate to the next Broker. The total time of the method does not exceed the value of sendMsgTimeout, which defaults to 10s.
+- If sending the message to the broker itself throws a timeout exception, the send is not retried.
+
+These strategies ensure, to a certain extent, that messages can be sent successfully. If the business has high requirements on message reliability, you are advised to add your own retry logic: for example, if the synchronous send method fails, try to store the message in a db and have a background thread retry periodically, ensuring the message eventually reaches the Broker.
+
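+A minimal sketch of this application-level fallback (failedMessageStore is a hypothetical durable store, e.g. a DB table; producer and msg are as above):
+
+```java
+try {
+    SendResult result = producer.send(msg);
+} catch (Exception e) {
+    // Persist the message so a background thread can retry it later.
+    failedMessageStore.save(msg);   // hypothetical store; any durable medium works
+    log.warn("SEND_FAILED key={}, saved for retry", msg.getKeys(), e);
+}
+```
+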
+The reason this db-backed retry is not built into the MQ client, but left to the application, rests mainly on the following considerations. First, the MQ client is designed to be stateless, which makes arbitrary horizontal scaling easy; it consumes only cpu, memory, and network resources. Second, if the MQ client embedded a KV storage module, the data would only be reasonably reliable with synchronous disk flushing, which is costly, so asynchronous flushing would normally be used instead; and because application shutdown is not controlled by MQ operators, forceful shutdowns such as kill -9 happen often, losing data that has not yet been flushed. Third, the machine running the Producer is usually a low-reliability virtual machine and is not suited to storing important data. In summary, the retry process is best left to the application to control.
+
+### Sending in oneway mode
+In general, a message is sent as follows:
+
+- The client sends a request to the server
+- The server handles the request
+- The server returns a reply to the client
+
+Therefore, the time to send one message is the sum of these three steps. Some scenarios demand very low latency but have modest reliability requirements, for example log collection; such applications can use the oneway mode. A oneway call only sends the request without waiting for a reply, and on the client side sending the request costs just one operating-system system call, i.e. writing the data into the client's socket buffer, which usually takes microseconds.
+
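+A minimal sketch (DefaultMQProducer.sendOneway is the standard 4.x API; it returns no SendResult, and logLine is illustrative):
+
+```java
+Message msg = new Message("LogTopic", "TagA", logLine.getBytes(StandardCharsets.UTF_8));
+// Fire-and-forget: returns once the request is written to the socket buffer.
+producer.sendOneway(msg);
+```
+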
+## Client Configuration
+
+Relative to the RocketMQ Broker cluster, both producers and consumers are clients. This section describes the behavior configuration common to producers and consumers.
+
+### Client addressing mode
+
+RocketMQ lets the client locate the Name Server and then find the Broker through the Name Server. There are several configuration methods, listed below from highest to lowest priority; a higher-priority setting overrides a lower-priority one.
+
+- The NameServer address is specified in the code, and multiple NameServer 
addresses are separated by semicolons   
+
+```java
+producer.setNamesrvAddr("192.168.0.1:9876;192.168.0.2:9876");  
+
+consumer.setNamesrvAddr("192.168.0.1:9876;192.168.0.2:9876");
+```
+
+- The NameServer address is specified in the Java startup parameter
+
+```text
+-Drocketmq.namesrv.addr=192.168.0.1:9876;192.168.0.2:9876  
+```
+
+- The environment variable specifies the NameServer address
+
+```text
+export   NAMESRV_ADDR=192.168.0.1:9876;192.168.0.2:9876   
+```
+- HTTP static server addressing (default)
+
+After startup, the client periodically accesses a static HTTP server at the following address: <http://jmenv.tbsite.net:8080/rocketmq/nsaddr>. The URL returns content like this:
+
+```text
+192.168.0.1:9876;192.168.0.2:9876   
+```
+
+By default, the client accesses the HTTP server every 2 minutes and updates 
the local NameServer address.
+The URL is hardcoded in the code. You can change the server to be accessed by 
modifying the /etc/hosts file, for example, adding the following configuration 
to /etc/hosts:
+```text
+10.232.22.67    jmenv.taobao.net   
+```
+
+Static HTTP server addressing is recommended: clients are easy to deploy, and the Name Server cluster can be hot-upgraded.
+
+## Consumer
+
+### Idempotent consumption
+
+RocketMQ cannot avoid message duplication (it does not guarantee Exactly-Once), so if the business is very sensitive to duplicate consumption, it must deduplicate at the business level.
+This can be done with the help of a relational database. First determine a unique key for the message: either the msgId or a unique business field in the message content, such as an order Id.
+Before consuming, check whether the unique key already exists in the database. If it does not, insert it and consume the message; otherwise skip it. (In practice, consider atomicity: rather than check-then-insert, simply attempt the insert; if it fails with a primary-key conflict, skip the message.)
+
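+A minimal JDBC sketch of the insert-or-skip approach (conn and uniqueKey come from the surrounding consumer code; the table name and schema are illustrative):
+
+```java
+// Assumes: CREATE TABLE consumed_msg (msg_key VARCHAR(64) PRIMARY KEY)
+try (PreparedStatement ps = conn.prepareStatement(
+        "INSERT INTO consumed_msg (msg_key) VALUES (?)")) {
+    ps.setString(1, uniqueKey);
+    ps.executeUpdate();               // first time we see this key
+    // ... business consumption logic here ...
+} catch (SQLIntegrityConstraintViolationException dup) {
+    // Primary-key conflict: already consumed, skip this message.
+}
+```
+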
+The msgId is always a globally unique identifier, but in practice the same message may carry two different msgIds (active resending by the consumer, duplication caused by the client redelivery mechanism, and so on), in which case deduplication should key on a business field.
+
+### How to handle slow consumption
+
+### Increase consumption parallelism
+
+The vast majority of message consumption is IO-bound: it typically operates on a database or calls an RPC, so the consumption rate is limited by the throughput of the back-end database or external system.
+Increasing consumption parallelism raises total consumption throughput, but past a certain degree of parallelism throughput drops again.
+Therefore, the application must set a reasonable degree of parallelism. There are several ways to modify it (see the sketch after this list):
+
+- Within the same ConsumerGroup, increase the number of Consumer instances (note that instances beyond the number of subscribed queues are ineffective). You can add machines, or start multiple processes on existing machines.
+- Increase the consumption threads of a single Consumer by modifying the parameters consumeThreadMin and consumeThreadMax.
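+
+A minimal sketch of both knobs (standard DefaultMQPushConsumer API; the values are illustrative):
+
+```java
+DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("please_rename_unique_group_name");
+// Tune the consumption thread pool to match back-end throughput
+consumer.setConsumeThreadMin(20);
+consumer.setConsumeThreadMax(64);
+```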
+
+### Batch consumption
+
+If the business flow supports batch consumption, consumption throughput can improve considerably. For example, in an order-deduction application, processing one order at a time may take 1 s while processing 10 orders at a time may take only 2 s.
+This is controlled by the consumer parameter consumeMessageBatchMaxSize, which defaults to 1, i.e. one message per consumption call; if it is set to N, each consumption call receives at most N messages.
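+
+A minimal sketch (standard DefaultMQPushConsumer API; with batch size 10 the listener receives up to 10 messages per call):
+
+```java
+consumer.setConsumeMessageBatchMaxSize(10);
+consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
+    for (MessageExt msg : msgs) {   // msgs.size() is between 1 and 10
+        // ... process each order ...
+    }
+    return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
+});
+```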
+
+### Skip non-important messages
+
+When messages pile up and the consumption rate cannot keep up with the send rate, and the business has low requirements on data completeness, you can choose to discard unimportant messages.
+For example, when a queue has accumulated more than 100,000 messages, try discarding some or all of them so that consumption quickly catches up with sending. Example code is as follows:
+
+```java
+    public ConsumeConcurrentlyStatus consumeMessage(
+            List<MessageExt> msgs,
+            ConsumeConcurrentlyContext context) {
+        long offset = msgs.get(0).getQueueOffset();
+        String maxOffset =
+                msgs.get(0).getProperty(Message.PROPERTY_MAX_OFFSET);
+        long diff = Long.parseLong(maxOffset) - offset;
+        if (diff > 100000) {
+            // TODO Special handling of message stacking cases
+            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
+        }
+        // TODO Normal consumption process
+        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
+    }    
+```
+
+
+#### Optimize the per-message consumption process
+
+For example, the consumption process of a message is as follows:
+
+- Query [data 1] from DB according to message
+- Query [data 2] from DB according to message
+- Complex business calculations
+- Insert [data 3] into DB
+- Insert [data 4] into DB
+
+There are four interactions with DB during the consumption of this message. If 
we calculate each interaction as 5ms, the total time is 20ms.
+Assuming that the service computation takes 5ms, the total time is 25ms. 
Therefore, if the four DB interactions can be optimized to two, the total time 
can be optimized to 15ms, which means that the overall performance is improved 
by 40%.
+Therefore, if the application is sensitive to delay, the DB can be deployed on 
SSD disks. Compared with SCSI disks, the RT of the former is much smaller.
+
+### Print logs when consuming
+
+If the message volume is small, you are advised to log the message, the consumption time, and so on in the consumption entry method, to make later troubleshooting easier.
+
+
+```java
+   public ConsumeConcurrentlyStatus consumeMessage(
+            List<MessageExt> msgs,
+            ConsumeConcurrentlyContext context) {
+        log.info("RECEIVE_MSG_BEGIN: " + msgs.toString());
+        // TODO Normal consumption process
+        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
+    }   
+```
+
+If the consumption time of each message is also logged, troubleshooting online problems such as slow consumption becomes much easier.
+
+### Other Consumption Tips
+
+#### About consumers and subscriptions
+
+The first thing to note is that different consumer groups can consume several 
topics independently, and each consumer group has its own consumption offset. 
Make sure that the subscription information of each consumer within the same 
group is consistent.
+
+#### About Ordered Messages
+
+The consumer locks each message queue to make sure messages are consumed one by one. This costs some performance, but it is useful when you care about message order. Instead of throwing an exception, return ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT.
+
+#### About Concurrent consumption
+
+As the name suggests, the consumer consumes messages concurrently; this is recommended for good performance. Instead of throwing an exception, return ConsumeConcurrentlyStatus.RECONSUME_LATER.
+
+#### About consume status
+
+For a concurrent message listener, you can return RECONSUME_LATER to tell the consumer that the message cannot be consumed right now and should be reconsumed later; you can then continue consuming other messages.
+For an ordered message listener, you cannot skip a message, because you care about its order, but you can return SUSPEND_CURRENT_QUEUE_A_MOMENT to tell the consumer to wait for a moment (see the sketch below).
+
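+A minimal sketch of an ordered listener using this status (standard 4.x API; the processing body is illustrative):
+
+```java
+consumer.registerMessageListener(new MessageListenerOrderly() {
+    @Override
+    public ConsumeOrderlyStatus consumeMessage(List<MessageExt> msgs,
+                                               ConsumeOrderlyContext context) {
+        try {
+            // ... process msgs in order ...
+            return ConsumeOrderlyStatus.SUCCESS;
+        } catch (Exception e) {
+            // Do not throw: suspend this queue briefly and retry in order.
+            return ConsumeOrderlyStatus.SUSPEND_CURRENT_QUEUE_A_MOMENT;
+        }
+    }
+});
+```
+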
+#### About Blocking
+
+Blocking in the listener is not recommended, because it blocks the thread pool and may eventually halt the consumption process.
+
+#### About thread count settings
+
+The consumer uses a ThreadPoolExecutor internally to consume messages, so you can resize it by calling setConsumeThreadMin or setConsumeThreadMax.
+
+####  About the consumption position
+
+When creating a new consumer group, decide whether it needs to consume the historical messages already stored in the Broker. CONSUME_FROM_LAST_OFFSET ignores the historical messages and consumes only messages produced afterwards.
+CONSUME_FROM_FIRST_OFFSET consumes every message that still exists in the Broker. You can also use CONSUME_FROM_TIMESTAMP to consume messages produced after a specified timestamp.
+
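+A minimal sketch (standard DefaultMQPushConsumer API; the timestamp value is illustrative, in the yyyyMMddHHmmss format the 4.x client expects):
+
+```java
+// Consume every message still stored in the Broker
+consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
+
+// Or consume from a point in time (used together with CONSUME_FROM_TIMESTAMP)
+consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_TIMESTAMP);
+consumer.setConsumeTimestamp("20221027144754");
+```
+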
+## Broker
+
+###  Broker Role
+  Broker roles are classified into ASYNC_MASTER, SYNC_MASTER, and SLAVE.
+  If you have strict requirements on message reliability, deploy SYNC_MASTER plus SLAVE.
+  If the reliability requirements are less strict, deploy ASYNC_MASTER plus SLAVE.
+  For convenience in testing, you can deploy only an ASYNC_MASTER or only a SYNC_MASTER.
+
+### FlushDiskType
+
+  Compared with ASYNC_FLUSH, SYNC_FLUSH suffers from performance loss but is 
more reliable. Therefore, the trade-off must be made based on the actual 
service scenario.
+
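+An illustrative broker.conf sketch combining these choices with the parameters in the table below (all values are examples only):
+
+```text
+brokerClusterName=DefaultCluster
+brokerName=broker-a
+brokerId=0
+namesrvAddr=192.168.0.1:9876;192.168.0.2:9876
+listenPort=10911
+brokerRole=SYNC_MASTER
+flushDiskType=ASYNC_FLUSH
+storePathCommitLog=/home/rocketmq/store/commitlog
+deleteWhen=04
+```
+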
+### Broker Configuration
+
+| Parameter              | Default                   | Description |
+|------------------------|---------------------------|-------------|
+| listenPort             | 10911                     | Listening port for client connections |
+| namesrvAddr            | null                      | Name Server address |
+| brokerIP1              | InetAddress of the network card | IP address the broker currently listens on |
+| brokerIP2              | Same as brokerIP1         | In a master/slave setup, if brokerIP2 is configured on the broker master node, the slave node connects to the master's brokerIP2 for synchronization |
+| brokerName             | null                      | Broker name |
+| brokerClusterName      | DefaultCluster            | Name of the cluster this broker belongs to |
+| brokerId               | 0                         | Broker id; 0 means master, other positive integers mean slave |
+| storePathCommitLog     | $HOME/store/commitlog/    | Path where the commit log is stored |
+| storePathConsumerQueue | $HOME/store/consumequeue/ | Path where the consume queue is stored |
+| mapedFileSizeCommitLog | 1024 * 1024 * 1024 (1G)   | Size of a commit log mapped file |
+| deleteWhen             | 04                        | Hour of the day at which commit logs past the file retention time are deleted |
+| fileReserverdTime      | 72                        | File retention time, in hours |
+| brokerRole             | ASYNC_MASTER              | SYNC_MASTER/ASYNC_MASTER/SLAVE |
+| flushDiskType          | ASYNC_FLUSH               | SYNC_FLUSH/ASYNC_FLUSH; a SYNC_FLUSH broker flushes each message to disk before acknowledging the producer, while an ASYNC_FLUSH broker flushes messages in groups for better performance |
\ No newline at end of file
diff --git 
a/i18n/en/docusaurus-plugin-content-docs/version-5.0/06-bestPractice/15bestpractice.md
 
b/i18n/en/docusaurus-plugin-content-docs/version-5.0/06-bestPractice/15bestpractice.md
index 8c2b75400..02593b684 100644
--- 
a/i18n/en/docusaurus-plugin-content-docs/version-5.0/06-bestPractice/15bestpractice.md
+++ 
b/i18n/en/docusaurus-plugin-content-docs/version-5.0/06-bestPractice/15bestpractice.md
@@ -1,104 +1,118 @@
-# 基本最佳实践
-
-## 生产者
-
-###  发送消息注意事项
-
-#### Tag的使用
-
-一个应用尽可能用一个Topic,而消息子类型则可以用tags来标识。tags可以由应用自由设置,只有生产者在发送消息设置了tags,消费方在订阅消息时才可以利用tags通过broker做消息过滤,5.x
 SDK 可以调用messageBuilder.setTag("messageTag"),历史版本可以调用 
message.setTags("messageTag")。  
-
-#### Keys的使用
-
-每个消息在业务层面一般建议映射到业务的唯一标识并设置到keys字段,方便将来定位消息丢失问题。服务器会为每个消息创建索引(哈希索引),应用可以通过topic、key来查询这条消息内容,以及消息被谁消费。由于是哈希索引,请务必保证key尽可能唯一,这样可以避免潜在的哈希冲突。常见的设置策略使用订单Id、用户Id、请求Id等比较离散的唯一标识来处理。
-
-#### 日志的打印
-
-消息发送成功或者失败要打印消息日志,用于业务排查问题。Send消息方法只要不抛异常,就代表发送成功。
-### 消息发送失败处理方式
-
-Producer的send方法本身支持内部重试,5.x 
SDK的重试逻辑参考[发送重试策略](../04-featureBehavior/05sendretrypolicy.md):
-
-以上策略也是在一定程度上保证了消息可以发送成功。如果业务要求消息发送不能丢,仍然需要对可能出现的异常做兜底,比如调用send同步方法发送失败时,则尝试将消息存储到db,然后由后台线程定时重试,确保消息一定到达Broker。
-
-上述DB重试方式为什么没有集成到MQ客户端内部做,而是要求应用自己去完成,主要基于以下几点考虑:首先,MQ的客户端设计为无状态模式,方便任意的水平扩展,且对机器资源的消耗仅仅是cpu、内存、网络。其次,如果MQ客户端内部集成一个KV存储模块,那么数据只有同步落盘才能较可靠,而同步落盘本身性能开销较大,所以通常会采用异步落盘,又由于应用关闭过程不受MQ运维人员控制,可能经常会发生
 kill -9 
这样暴力方式关闭,造成数据没有及时落盘而丢失。第三,Producer所在机器的可靠性较低,一般为虚拟机,不适合存储重要数据。综上,建议重试过程交由应用来控制。
-
-## 消费者
-
-### 消费过程幂等
-
-RocketMQ 
无法避免消息重复(Exactly-Once),所以如果业务对消费重复非常敏感,务必要在业务层面进行去重处理。可以借助关系数据库进行去重。首先需要确定消息的唯一键,可以是msgId,也可以是消息内容中的唯一标识字段,例如订单Id等。在消费之前判断唯一键是否在关系数据库中存在。如果不存在则插入,并消费,否则跳过。(实际过程要考虑原子性问题,判断是否存在可以尝试插入,如果报主键冲突,则插入失败,直接跳过)
-
-msgId一定是全局唯一标识符,但是实际使用中,可能会存在相同的消息有两个不同msgId的情况(消费者主动重发、因客户端重投机制导致的重复等),这种情况就需要使业务字段进行重复消费。
-
-### 消费速度慢的处理方式
-
-### 提高消费并行度
-
-绝大部分消息消费行为都属于 IO 密集型,即可能是操作数据库,或者调用 
RPC,这类消费行为的消费速度在于后端数据库或者外系统的吞吐量,通过增加消费并行度,可以提高总的消费吞吐量,但是并行度增加到一定程度,反而会下降。所以,应用必须要设置合理的并行度。
 如下有几种修改消费并行度的方法:
-
-- 同一个 ConsumerGroup 下,通过增加 Consumer 实例数量来提高并行度。可以通过加机器,或者在已有机器启动多个进程的方式。
-- 提高单个 Consumer 的消费并行线程,5.x PushConsumer SDK 
可以通过PushConsumerBuilder.setConsumptionThreadCount() 
设置线程数,SimpleConsumer可以由业务线程自由增加并发,底层线程安全;历史版本SDK PushConsumer可以通过修改参数 
consumeThreadMin、consumeThreadMax实现。
-
-### 批量方式消费
-
-某些业务流程如果支持批量方式消费,则可以很大程度上提高消费吞吐量,例如订单扣款类应用,一次处理一个订单耗时 1 s,一次处理 10 个订单可能也只耗时 2 
s,这样即可大幅度提高消费的吞吐量。建议使用5.x SDK的SimpleConsumer,每次接口调用设置批次大小,一次性拉取消费多条消息。
-
-### 重置位点跳过非重要消息
-
-发生消息堆积时,如果消费速度一直追不上发送速度,如果业务对数据要求不高的话,可以选择丢弃不重要的消息。建议使用重置位点功能直接调整消费位点到指定时刻或者指定位置。
-
-#### 优化每条消息消费过程     
-
-举例如下,某条消息的消费过程如下:
-
-- 根据消息从 DB 查询【数据 1】
-- 根据消息从 DB 查询【数据 2】
-- 复杂的业务计算
-- 向 DB 插入【数据 3】
-- 向 DB 插入【数据 4】
-
-这条消息的消费过程中有4次与 DB的 交互,如果按照每次 5ms 计算,那么总共耗时 20ms,假设业务计算耗时 5ms,那么总过耗时 
25ms,所以如果能把 4 次 DB 交互优化为 2 次,那么总耗时就可以优化到 15ms,即总体性能提高了 
40%。所以应用如果对时延敏感的话,可以把DB部署在SSD硬盘,相比于SCSI磁盘,前者的RT会小很多。
-
-### 消费打印日志
-
-如果消息量较少,建议在消费入口方法打印消息,消费耗时等,方便后续排查问题。
-
-```java
-   new MessageListener() {
-        @Override
-        public ConsumeResult consume(MessageView messageView) {
-            LOGGER.info("Consume message={}", messageView);
-            //Do your consume process
-            return ConsumeResult.SUCCESS;
-            }
-    }
-```
-
-如果能打印每条消息消费耗时,那么在排查消费慢等线上问题时,会更方便。但如果线上环境TPS很高,不建议开启,避免日志太多影响性能。
-
-## Broker
-
-###  Broker 角色
-  Broker 角色分为 
ASYNC_MASTER(异步主机)、SYNC_MASTER(同步主机)以及SLAVE(从机)。如果对消息的可靠性要求比较严格,可以采用 
SYNC_MASTER加SLAVE的部署方式。如果对消息可靠性要求不高,可以采用ASYNC_MASTER加SLAVE的部署方式。如果只是测试方便,则可以选择仅ASYNC_MASTER或仅SYNC_MASTER的部署方式。
-### FlushDiskType
- SYNC_FLUSH(同步刷新)相比于ASYNC_FLUSH(异步处理)会损失很多性能,但是也更可靠,所以需要根据实际的业务场景做好权衡。
-### Broker 配置
-
-| 参数名                           | 默认值                        | 说明              
                                           |
-| -------------------------------- | ----------------------------- | 
------------------------------------------------------------ |
-| listenPort                    | 10911              | 接受客户端连接的监听端口 |
-| namesrvAddr       | null                         | nameServer 地址     |
-| brokerIP1 | 网卡的 InetAddress                         | 当前 broker 监听的 IP  |
-| brokerIP2 | 跟 brokerIP1 一样                         | 存在主从 broker 时,如果在 
broker 主节点上配置了 brokerIP2 属性,broker 从节点会连接主节点配置的 brokerIP2 进行同步  |
-| brokerName        | null                         | broker 的名称                
           |
-| brokerClusterName                     | DefaultCluster                  | 本 
broker 所属的 Cluser 名称           |
-| brokerId             | 0                              | broker id, 0 表示 
master, 其他的正整数表示 slave                                                 |
-| storePathCommitLog                      | $HOME/store/commitlog/             
                 | 存储 commit log 的路径                                            
    |
-| storePathConsumerQueue                   | $HOME/store/consumequeue/         
                     | 存储 consume queue 的路径                                     
         |
-| mapedFileSizeCommitLog     | 1024 * 1024 * 1024(1G) | commit log 的映射文件大小     
                                  |​ 
-| deleteWhen     | 04 | 在每天的什么时间删除已经超过文件保留时间的 commit log                       
                 |​ 
-| fileReserverdTime     | 72 | 以小时计算的文件保留时间                                    
    |​ 
-| brokerRole     | ASYNC_MASTER | SYNC_MASTER/ASYNC_MASTER/SLAVE               
                         |​ 
-| flushDiskType     | ASYNC_FLUSH | SYNC_FLUSH/ASYNC_FLUSH SYNC_FLUSH 模式下的 
broker 保证在收到确认生产者之前将消息刷盘。ASYNC_FLUSH 模式下的 broker 则利用刷盘一组消息的模式,可以取得更好的性能。        
                               |​
-
+# Basic Best Practices
+
+## Producer
+
+###  Precautions for sending messages
+
+#### The use of Tags
+
+An application should use a single Topic wherever possible, and message subtypes can be identified with tags. Tags can be set freely by the application. Only if the producer sets tags when sending messages can the consumer use tags to filter messages through the broker when subscribing.
+With the 5.x SDK, call messageBuilder.setTag("messageTag"); with historical versions, call message.setTags("messageTag").
+
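+A minimal sketch with the 5.x Java client (org.apache.rocketmq.client.apis; the topic and payload are illustrative):
+
+```java
+ClientServiceProvider provider = ClientServiceProvider.loadService();
+Message message = provider.newMessageBuilder()
+        .setTopic("OrderTopic")
+        .setTag("messageTag")            // consumers can filter on this tag
+        .setKeys("20034568923546")       // business unique key, see next section
+        .setBody("order payload".getBytes(StandardCharsets.UTF_8))
+        .build();
+```
+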
+#### The use of Keys
+
+It is generally recommended to map each message to a unique business identifier and set it in the keys field, to help locate message-loss problems later. The server creates a hash index for each message, and the application can query the content of a message by topic and key, as well as find out who consumed it. Since it is a hash index, make sure the key is as unique as possible to avoid potential hash collisions. Common strategies use relatively discrete unique identifiers such as order Id, user Id, or request Id.
+
+#### Printing Logs
+
+Print a message log whether the message is sent successfully or fails, for use in business troubleshooting. As long as the send method does not throw an exception, the send succeeded.
+
+### Handling message send failures
+
+The send method of the Producer itself supports internal retry. For the retry logic of the 5.x SDK, see [Send retry policy](../04-featureBehavior/05sendretrypolicy.md).
+
+These strategies ensure, to a certain extent, that messages can be sent successfully. If the business requires that no message be lost, you still need a fallback for possible failures: for example, if the synchronous send method fails, try to store the message in a db and have a background thread retry periodically, ensuring the message eventually reaches the Broker.
+
+The reason this DB-backed retry is not built into the MQ client, but left to the application, rests mainly on the following considerations. First, the MQ client is designed to be stateless, which makes arbitrary horizontal scaling easy; it consumes only cpu, memory, and network resources. Second, if the MQ client embedded a KV storage module, the data would only be reasonably reliable with synchronous disk flushing, which is costly, so asynchronous flushing would normally be used instead; and because application shutdown is not controlled by MQ operators, forceful shutdowns such as kill -9 happen often, losing data that has not yet been flushed. Third, the machine running the Producer is usually a low-reliability virtual machine and is not suited to storing important data. In summary, the retry process is best left to the application to control.
+
+## Consumer
+
+### Idempotent consumption
+
+RocketMQ cannot avoid message duplication (it does not guarantee Exactly-Once), so if the business is very sensitive to duplicate consumption, it must deduplicate at the business level.
+This can be done with the help of a relational database. First determine a unique key for the message: either the msgId or a unique business field in the message content, such as an order Id.
+Before consuming, check whether the unique key already exists in the database. If it does not, insert it and consume the message; otherwise skip it. (In practice, consider atomicity: rather than check-then-insert, simply attempt the insert; if it fails with a primary-key conflict, skip the message.)
+
+The msgId is always a globally unique identifier, but in practice the same message may carry two different msgIds (active resending by the consumer, duplication caused by the client redelivery mechanism, and so on), in which case deduplication should key on a business field.
+
+### How to handle slow consumption
+
+### Increase consumption parallelism
+
+The vast majority of message consumption is IO-bound: it typically operates on a database or calls an RPC, so the consumption rate is limited by the throughput of the back-end database or external system.
+Increasing consumption parallelism raises total consumption throughput, but past a certain degree of parallelism throughput drops again.
+Therefore, the application must set a reasonable degree of parallelism. There are several ways to modify it (see the sketch after this list):
+
+- Within the same ConsumerGroup, increase the number of Consumer instances (note that instances beyond the number of subscribed queues are ineffective). You can add machines, or start multiple processes on existing machines.
+- Increase the consumption threads of a single Consumer. With the 5.x PushConsumer SDK, set the thread count via PushConsumerBuilder.setConsumptionThreadCount(); with SimpleConsumer, business threads can freely increase concurrency, and the underlying implementation is thread-safe. With historical SDK versions, modify the PushConsumer parameters consumeThreadMin and consumeThreadMax.
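+
+A minimal sketch with the 5.x client (the endpoint, group, and topic are illustrative):
+
+```java
+ClientServiceProvider provider = ClientServiceProvider.loadService();
+PushConsumer pushConsumer = provider.newPushConsumerBuilder()
+        .setClientConfiguration(ClientConfiguration.newBuilder()
+                .setEndpoints("127.0.0.1:8081").build())
+        .setConsumerGroup("order_consumer_group")
+        .setSubscriptionExpressions(Collections.singletonMap(
+                "OrderTopic", FilterExpression.SUB_ALL))
+        .setConsumptionThreadCount(20)   // parallel consumption threads
+        .setMessageListener(messageView -> ConsumeResult.SUCCESS)
+        .build();
+```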
+
+### Batch consumption
+
+If the business flow supports batch consumption, consumption throughput can improve considerably. For example, in an order-deduction application, processing one order at a time may take 1 s while processing 10 orders at a time may take only 2 s. It is recommended to use SimpleConsumer from the 5.x SDK, set the batch size on each call, and pull multiple messages for consumption at once.
+
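+A minimal sketch of batch pulling with SimpleConsumer (simpleConsumer is assumed to be built elsewhere; batch size and invisible duration are illustrative):
+
+```java
+// Pull up to 10 messages; they stay invisible to other consumers for 30s
+List<MessageView> messages = simpleConsumer.receive(10, Duration.ofSeconds(30));
+for (MessageView message : messages) {
+    // ... process the message ...
+    simpleConsumer.ack(message);     // acknowledge on success
+}
+```
+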
+### Reset the consumer offset to skip non-important messages
+
+When messages pile up and the consumption rate cannot keep up with the send rate, and the business has low requirements on data completeness, you can choose to discard unimportant messages. You are advised to use the offset-reset feature to move the consumer offset directly to a specified time or position.
+
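+One way to do this is the mqadmin resetOffsetByTime command (a sketch only; the nameserver, group, and topic values are illustrative, and flags may vary across versions):
+
+```text
+sh bin/mqadmin resetOffsetByTime -n 192.168.0.1:9876 -g order_consumer_group -t OrderTopic -s now
+```
+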
+#### Optimize the per-message consumption process     
+
+For example, the consumption process of a message is as follows:
+
+- Query [data 1] from DB according to message
+- Query [data 2] from DB according to message
+- Complex business calculations
+- Insert [data 3] into DB
+- Insert [data 4] into DB
+
+There are four interactions with DB during the consumption of this message. If 
we calculate each interaction as 5ms, the total time is 20ms.
+Assuming that the service computation takes 5ms, the total time is 25ms. 
Therefore, if the four DB interactions can be optimized to two, the total time 
can be optimized to 15ms, which means that the overall performance is improved 
by 40%.
+Therefore, if the application is sensitive to delay, the DB can be deployed on 
SSD disks. Compared with SCSI disks, the RT of the former is much smaller.
+
+### Print logs when consuming
+
+If the message volume is small, you are advised to log the message, the consumption time, and so on in the consumption entry method, to make later troubleshooting easier.
+
+```java
+   new MessageListener() {
+       @Override
+       public ConsumeResult consume(MessageView messageView) {
+           LOGGER.info("Consume message={}", messageView);
+           // Do your consume process
+           return ConsumeResult.SUCCESS;
+       }
+   }
+```
+
+If the consumption time of each message is also logged, troubleshooting online problems such as slow consumption becomes much easier. However, if the online TPS is high, do not enable this, to avoid the performance impact of excessive logging.
+
+## Broker
+
+###  Broker Role
+
+Broker roles are classified into ASYNC_MASTER, SYNC_MASTER, and SLAVE.
+If you have strict requirements on message reliability, deploy SYNC_MASTER 
plus SLAVE.
+If the reliability requirements are less strict, deploy ASYNC_MASTER plus SLAVE.
+For convenience in testing, you can deploy only an ASYNC_MASTER or only a SYNC_MASTER.
+
+### FlushDiskType
+
+Compared with ASYNC_FLUSH, SYNC_FLUSH suffers from performance loss but is 
more reliable. Therefore, the trade-off must be made based on the actual 
service scenario.
+
+### Broker Configuration
+
+| Parameter              | Default                   | Description |
+|------------------------|---------------------------|-------------|
+| listenPort             | 10911                     | Listening port for client connections |
+| namesrvAddr            | null                      | Name Server address |
+| brokerIP1              | InetAddress of the network card | IP address the broker currently listens on |
+| brokerIP2              | Same as brokerIP1         | In a master/slave setup, if brokerIP2 is configured on the broker master node, the slave node connects to the master's brokerIP2 for synchronization |
+| brokerName             | null                      | Broker name |
+| brokerClusterName      | DefaultCluster            | Name of the cluster this broker belongs to |
+| brokerId               | 0                         | Broker id; 0 means master, other positive integers mean slave |
+| storePathCommitLog     | $HOME/store/commitlog/    | Path where the commit log is stored |
+| storePathConsumerQueue | $HOME/store/consumequeue/ | Path where the consume queue is stored |
+| mapedFileSizeCommitLog | 1024 * 1024 * 1024 (1G)   | Size of a commit log mapped file |
+| deleteWhen             | 04                        | Hour of the day at which commit logs past the file retention time are deleted |
+| fileReserverdTime      | 72                        | File retention time, in hours |
+| brokerRole             | ASYNC_MASTER              | SYNC_MASTER/ASYNC_MASTER/SLAVE |
+| flushDiskType          | ASYNC_FLUSH               | SYNC_FLUSH/ASYNC_FLUSH; a SYNC_FLUSH broker flushes each message to disk before acknowledging the producer, while an ASYNC_FLUSH broker flushes messages in groups for better performance |
