This is an automated email from the ASF dual-hosted git repository.

ckj pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
     new 7f9b5615 [MINOR] docs: correct the format of server_guide doc
7f9b5615 is described below

commit 7f9b5615162bd013426ab6d58f5054fa1cf68d0c
Author: Junfan Zhang <[email protected]>
AuthorDate: Thu Feb 16 12:52:56 2023 +0800

    [MINOR] docs: correct the format of server_guide doc
    
    ### What changes were proposed in this pull request?
    
    correct the format of server_guide doc
    
    ### Why are the changes needed?
    
    The markdown format is wrong in original shuffle server guide doc
    
    ### Does this PR introduce _any_ user-facing change?
    
    No
    
    ### How was this patch tested?
    
    Don't need
---
 docs/server_guide.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/server_guide.md b/docs/server_guide.md
index c646d03c..e3f9a99c 100644
--- a/docs/server_guide.md
+++ b/docs/server_guide.md
@@ -111,12 +111,11 @@ To do this, we introduce the extra configs
 
 |Property Name|Default|Description|
 |---|---|---|
-|rss.server.huge-partition.size.threshold|20g|Threshold of huge partition size, once exceeding threshold, memory usage limitation and huge partition buffer flushing will be triggered. This value depends on the capacity of per disk in shuffle server. For example, per disk capacity is 1TB, and the max size of huge partition in per disk is 5. So the total size of huge partition in local disk is 100g (10%),this is an acceptable config value. Once reaching this threshold, it will be better to [...]
-to HDFS directly, which could be handled by multiple storage manager fallback strategy|
+|rss.server.huge-partition.size.threshold|20g|Threshold of huge partition size, once exceeding threshold, memory usage limitation and huge partition buffer flushing will be triggered. This value depends on the capacity of per disk in shuffle server. For example, per disk capacity is 1TB, and the max size of huge partition in per disk is 5. So the total size of huge partition in local disk is 100g (10%),this is an acceptable config value. Once reaching this threshold, it will be better to [...]
 |rss.server.huge-partition.memory.limit.ratio|0.2|The memory usage limit ratio for huge partition, it will only triggered when partition's size exceeds the threshold of 'rss.server.huge-partition.size.threshold'. If the buffer capacity is 10g, this means the default memory usage for huge partition is 2g. Samely, this config value depends on max size of huge partitions on per shuffle server.|
 
 #### Data flush
-Once the huge partition threshold is reached, the partition is marked as a huge partition. And then single buffer flush is triggered (writing to persistent storage as soon as possible). By default, single buffer flush is only enabled by configuring `rss.server.single.buffer.flush.enabled', but it's automatically valid for huge partition.
+Once the huge partition threshold is reached, the partition is marked as a huge partition. And then single buffer flush is triggered (writing to persistent storage as soon as possible). By default, single buffer flush is only enabled by configuring `rss.server.single.buffer.flush.enabled`, but it's automatically valid for huge partition.
 
 If you don't use HDFS, the huge partition may be flushed to local disk, which is dangerous if the partition size is larger than the free disk space. Therefore, it is recommended to use a mixed storage type, including HDFS or other distributed file systems.
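
For readers following this thread, the three properties discussed in the patched section could be combined in a shuffle server config roughly as sketched below. This is only an illustration of the defaults described in the doc text, not part of this commit; the space-separated key/value layout is assumed from typical Uniffle conf files, and the 10g buffer figure is the example used in the table.

```properties
# Hypothetical shuffle server config sketch (values illustrative, per the doc's example):
# a partition is marked "huge" once it exceeds 20g ...
rss.server.huge-partition.size.threshold 20g
# ... after which its memory use is capped at 20% of buffer capacity
# (with a 10g buffer, that is 2g for the huge partition)
rss.server.huge-partition.memory.limit.ratio 0.2
# single buffer flush is applied automatically to huge partitions,
# but can also be enabled globally
rss.server.single.buffer.flush.enabled true
```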
 
