This is an automated email from the ASF dual-hosted git repository.

jinrongtong pushed a commit to branch new-official-website
in repository https://gitbox.apache.org/repos/asf/rocketmq-site.git


The following commit(s) were added to refs/heads/new-official-website by this push:
     new 647272848 [ISSUE #460]Fix 30RocketMQ Streams Overview.md document pictures not display (#461)
647272848 is described below

commit 6472728485bb82373f644fb98f9f93e5da175ffd
Author: mxsm <[email protected]>
AuthorDate: Thu Jan 19 12:20:56 2023 +0800

    [ISSUE #460]Fix 30RocketMQ Streams Overview.md document pictures not display (#461)
---
 .../version-5.0/07-streams/30RocketMQ Streams Overview.md      | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/i18n/en/docusaurus-plugin-content-docs/version-5.0/07-streams/30RocketMQ Streams Overview.md b/i18n/en/docusaurus-plugin-content-docs/version-5.0/07-streams/30RocketMQ Streams Overview.md
index c959eec42..794797ac4 100644
--- a/i18n/en/docusaurus-plugin-content-docs/version-5.0/07-streams/30RocketMQ Streams Overview.md
+++ b/i18n/en/docusaurus-plugin-content-docs/version-5.0/07-streams/30RocketMQ Streams Overview.md
@@ -4,18 +4,18 @@ RocketMQ Streams is a lightweight stream computing engine based on RocketMQ. It
 
 ## Architecture
 
-![总体架构](E:\develop\github\contribution\rocketmq-site\i18n\en\docusaurus-plugin-content-docs\current\picture\33rocketmq-streams\总体-1.png)
+![总体架构](..\picture\33rocketmq-streams\总体-1.png)
 
 Data is consumed from RocketMQ by RocketMQ-streams, processed, and ultimately written back to RocketMQ.
 
-![总体架构](E:\develop\github\contribution\rocketmq-site\i18n\en\docusaurus-plugin-content-docs\current\picture\33rocketmq-streams\总体-2.png)
+![总体架构](..\picture\33rocketmq-streams\总体-2.png)
 
 Data is consumed by the RocketMQ consumer and enters the processing topology, where it is processed by operators. If the stream processing task contains a keyBy operator, the data is grouped by key and written to a shuffle topic, and subsequent operators consume from that shuffle topic. If there are also stateful operators such as count, the calculation requires reading and writing the state topic. After the calculation is finished, the result is written back to RocketMQ.
 
 
 ## Consume model
 
-![img_2.png](E:\develop\github\contribution\rocketmq-site\i18n\en\docusaurus-plugin-content-docs\current\picture\33rocketmq-streams\消费模型.png)
+![img_2.png](..\picture\33rocketmq-streams\消费模型.png)
 
 The calculation instances are essentially clients of the RocketMQ Streams SDK, so the MQs (message queues) that each instance consumes are assigned by the RocketMQ rebalance. The total number of calculation instances cannot be greater than the total number of consumed MQs; otherwise, some calculation instances will be left waiting, unable to consume data.
 
@@ -23,13 +23,13 @@ One calculation instance can consume multiple MQs, and within one instance, ther
 
 ## Status
 
-![img_3.png](E:\develop\github\contribution\rocketmq-site\i18n\en\docusaurus-plugin-content-docs\current\picture\33rocketmq-streams\状态存储.png)
+![img_3.png](..\picture\33rocketmq-streams\状态存储.png)
 
 For stateful operators such as count, the data must be grouped before it can be aggregated. The grouping operator keyBy re-writes the data to RocketMQ according to the grouping key, ensuring that data with the same key is written to the same partition (this process is called shuffle) and is therefore consumed by the same consumer. State is accelerated locally by RocksDB and persisted remotely by RocketMQ.
 
 
 ## Expansion/shrinkage capacity
 
-![img.png](E:\develop\github\contribution\rocketmq-site\i18n\en\docusaurus-plugin-content-docs\current\picture\33rocketmq-streams\RocketMQ-streams扩缩容.png)
+![img.png](..\picture\33rocketmq-streams\RocketMQ-streams扩缩容.png)
 
 When the number of calculation instances is reduced from 3 to 2, the rebalance function of the RocketMQ clustering consumption mode re-allocates the consumed MQs among the calculation instances. MQ2 and MQ3, previously consumed by Instance1, are allocated to Instance2 and Instance3, and the state data of these two MQs also needs to be migrated to Instance2 and Instance3. This also implies that the state data is saved according to the original data partition MQ; expansion is just t [...]
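
For context on the document being fixed: the keyBy/shuffle behaviour it describes comes down to choosing a shuffle queue deterministically from the grouping key, so that every record with the same key reaches the same consumer and its state. Below is a minimal, self-contained Java sketch of that idea under stated assumptions; the queue count, class name, and helper method are illustrative and are not part of the RocketMQ Streams API.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of the shuffle step described above: records are re-partitioned by
    // their grouping key so that every record with the same key lands in the same
    // shuffle queue and is counted by the same downstream instance.
    // The queue count and all names here are assumptions, not RocketMQ Streams API.
    public class ShuffleSketch {

        static final int SHUFFLE_QUEUE_COUNT = 8; // assumed queue count of the shuffle topic

        // Deterministically map a grouping key to a shuffle queue index.
        static int selectQueue(String key) {
            return Math.floorMod(key.hashCode(), SHUFFLE_QUEUE_COUNT);
        }

        public static void main(String[] args) {
            List<String> words = List.of("rocketmq", "streams", "rocketmq", "count");

            // Per-queue key counts, standing in for the state topic plus the local RocksDB cache.
            Map<Integer, Map<String, Long>> stateByQueue = new HashMap<>();

            for (String word : words) {
                int queue = selectQueue(word); // keyBy: same key -> same queue -> same consumer
                stateByQueue
                    .computeIfAbsent(queue, q -> new HashMap<>())
                    .merge(word, 1L, Long::sum); // count: incremental state update
            }

            stateByQueue.forEach((queue, counts) ->
                System.out.println("shuffle queue " + queue + " -> " + counts));
        }
    }

Because state is kept per original data partition (MQ), scaling in or out only re-assigns whole queues together with their state, which matches the rebalance behaviour described in the last paragraph of the diff.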
