This is an automated email from the ASF dual-hosted git repository.

roryqi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
     new 2b036970 [#1143] docs: Correct sequence number text by reducing paragraph indentation by 1 space (#1144)
2b036970 is described below

commit 2b036970453bd795710c66b92fd3a1a6ea0ef43c
Author: Bowen Liang <[email protected]>
AuthorDate: Mon Aug 14 09:48:32 2023 +0800

    [#1143] docs: Correct sequence number text by reducing paragraph indentation by 1 space (#1144)
---
 README.md | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index 17eb872b..5e39813b 100644
--- a/README.md
+++ b/README.md
@@ -49,15 +49,15 @@ Depending on different situations, Uniffle supports Memory & Local, Memory & Rem
 * Spark driver ask coordinator to get shuffle server for shuffle process
 * Spark task write shuffle data to shuffle server with following step:
 ![Rss Shuffle_Write](docs/asset/rss_shuffle_write.png)
-   1. Send KV data to buffer
-   2. Flush buffer to queue when buffer is full or buffer manager is full
-   3. Thread pool get data from queue
-   4. Request memory from shuffle server first and send the shuffle data
-   5. Shuffle server cache data in memory first and flush to queue when buffer manager is full
-   6. Thread pool get data from queue
-   7. Write data to storage with index file and data file
-   8. After write data, task report all blockId to shuffle server, this step is used for data validation later
-   9. Store taskAttemptId in MapStatus to support Spark speculation
+ 1. Send KV data to buffer
+ 2. Flush buffer to queue when buffer is full or buffer manager is full
+ 3. Thread pool get data from queue
+ 4. Request memory from shuffle server first and send the shuffle data
+ 5. Shuffle server cache data in memory first and flush to queue when buffer manager is full
+ 6. Thread pool get data from queue
+ 7. Write data to storage with index file and data file
+ 8. After write data, task report all blockId to shuffle server, this step is used for data validation later
+ 9. Store taskAttemptId in MapStatus to support Spark speculation
 
 * Depending on different storage types, the spark task will read shuffle data from shuffle server or remote storage or both of them.
 
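The client-side flow in the steps above (buffer KV data, flush full buffers to a queue, let a thread pool drain the queue) can be sketched as a small producer-consumer example. This is an illustrative sketch only; the class, field, and method names below are hypothetical and are not Uniffle's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the buffer -> queue -> thread-pool write flow
// described in the shuffle-write steps; not Uniffle's real implementation.
public class ShuffleWriteSketch {
    static final int BUFFER_LIMIT = 4; // flush when the buffer reaches this size

    final List<String> buffer = new ArrayList<>();
    final BlockingQueue<List<String>> flushQueue = new LinkedBlockingQueue<>();

    // Step 1: send KV data to the buffer; step 2: flush to the queue when full.
    void addRecord(String kv) {
        buffer.add(kv);
        if (buffer.size() >= BUFFER_LIMIT) {
            flushQueue.add(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    public static void main(String[] args) throws Exception {
        ShuffleWriteSketch sketch = new ShuffleWriteSketch();
        for (int i = 0; i < 8; i++) {
            sketch.addRecord("key" + i + "=value" + i);
        }

        // Step 3: a thread pool takes batches from the queue; in the real
        // flow each batch would be sent to a shuffle server (step 4) -- here
        // we just count the records instead of performing network I/O.
        int workers = 2;
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        CountDownLatch done = new CountDownLatch(workers);
        AtomicInteger sent = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                List<String> batch;
                while ((batch = sketch.flushQueue.poll()) != null) {
                    sent.addAndGet(batch.size());
                }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        System.out.println("records sent: " + sent.get());
    }
}
```

With 8 records and a buffer limit of 4, two batches are flushed and all 8 records are drained, so this prints `records sent: 8`. The server side (steps 5-7) mirrors the same pattern: cache in memory, flush to a queue under memory pressure, and write index/data files from a pool of threads.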
