This is an automated email from the ASF dual-hosted git repository.

roryqi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
     new 5af0e033 [#1124] docs(tez): Add the document of config option `tez.rss.client.send.thread.num` (#1142)
5af0e033 is described below

commit 5af0e033e46c91ba53c553c1fc0803a1e40621ab
Author: bin41215 <[email protected]>
AuthorDate: Sun Aug 13 19:22:36 2023 +0800

    [#1124] docs(tez): Add the document of config option `tez.rss.client.send.thread.num` (#1142)
    
    ### What changes were proposed in this pull request?
    
    Add the document of config option `tez.rss.client.send.thread.num`
    
    ### Why are the changes needed?
    
    Fix: #1124
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
---
 docs/client_guide.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/client_guide.md b/docs/client_guide.md
index eb1b4091..afd769f4 100644
--- a/docs/client_guide.md
+++ b/docs/client_guide.md
@@ -243,6 +243,7 @@ Notice: this feature requires the MEMORY_LOCAL_HADOOP mode.
 | tez.rss.avoid.recompute.succeeded.task | false   | Whether to avoid recompute succeeded task when node is unhealthy or black-listed |
 | tez.rss.client.max.buffer.size | 3k | The max buffer size in map side. Control the size of each segment(WrappedBuffer) in the buffer. |
 | tez.rss.client.batch.trigger.num | 50 | The max batch of buffers to send data in map side. Affect the number of blocks sent to the server in each batch, and may affect rss_worker_used_buffer_size |
+| tez.rss.client.send.thread.num | 5 | The thread pool size for the client to send data to the server. |
 
 ### Netty Setting
 | Property Name | Default | Description |
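For context, Tez client options such as the one documented in this commit are typically supplied through the job's Hadoop/Tez configuration, e.g. in `tez-site.xml`. A minimal sketch (the property name comes from the table above; the value `10` is an arbitrary example, not a recommendation):

```xml
<!-- Sketch: raising the Uniffle client's send thread pool above its default of 5.
     The value 10 here is a hypothetical example for a heavier workload. -->
<property>
  <name>tez.rss.client.send.thread.num</name>
  <value>10</value>
</property>
```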
