This is an automated email from the ASF dual-hosted git repository.

zuston pushed a commit to branch branch-0.9
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git

commit 601119ddab68cc118d6a03e94d0b821899520f87
Author: RickyMa <[email protected]>
AuthorDate: Wed May 22 14:53:36 2024 +0800

    [MINOR] fix: Update outdated config: rss.writer.send.check.timeout -> rss.client.send.check.timeout.ms (#1734)
    
    ### What changes were proposed in this pull request?
    
    Update outdated config: rss.writer.send.check.timeout -> rss.client.send.check.timeout.ms.
    
    ### Why are the changes needed?
    
    The `rss.writer.send.check.timeout` configuration has been renamed to `rss.client.send.check.timeout.ms`; the old name is no longer in use.
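    
    For reference, a configuration line using the renamed key looks like the following (the 600000 ms value mirrors the examples in the docs updated below; it is illustrative, not a mandated default):
    
    ```
    rss.client.send.check.timeout.ms 600000
    ```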
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    No need.
---
 README.md                                                               | 2 +-
 .../test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java | 2 +-
 .../test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java | 2 +-
 docs/coordinator_guide.md                                               | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index dcab92443..3757bd77f 100644
--- a/README.md
+++ b/README.md
@@ -179,7 +179,7 @@ If you have packaged tgz with hadoop jars, the env of `HADOOP_HOME` is needn't s
     rss.coordinator.remote.storage.path hdfs://cluster1/path,hdfs://cluster2/path
     rss.writer.require.memory.retryMax 1200
     rss.client.retry.max 50
-    rss.writer.send.check.timeout 600000
+    rss.client.send.check.timeout.ms 600000
     rss.client.read.buffer.size 14m
    ```
 5. start Coordinator
diff --git a/client-spark/spark2/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java b/client-spark/spark2/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java
index 8711c483b..193cf4121 100644
--- a/client-spark/spark2/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java
+++ b/client-spark/spark2/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java
@@ -133,7 +133,7 @@ public class RssShuffleWriterTest {
     rssShuffleWriter.checkBlockSendResult(Sets.newHashSet(1L, 2L, 3L));
     manager.clearTaskMeta(taskId);
 
-    // case 2: partial blocks aren't sent before spark.rss.writer.send.check.timeout,
+    // case 2: partial blocks aren't sent before spark.rss.client.send.check.timeout.ms,
     // Runtime exception will be thrown
     manager.addSuccessBlockIds(taskId, Sets.newHashSet(1L, 2L));
     Throwable e2 =
diff --git a/client-spark/spark3/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java b/client-spark/spark3/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java
index 5ca85eced..9c9726ed7 100644
--- a/client-spark/spark3/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java
+++ b/client-spark/spark3/src/test/java/org/apache/spark/shuffle/writer/RssShuffleWriterTest.java
@@ -334,7 +334,7 @@ public class RssShuffleWriterTest {
     rssShuffleWriter.checkBlockSendResult(Sets.newHashSet(1L, 2L, 3L));
     successBlocks.clear();
 
-    // case 2: partial blocks aren't sent before spark.rss.writer.send.check.timeout,
+    // case 2: partial blocks aren't sent before spark.rss.client.send.check.timeout.ms,
     // Runtime exception will be thrown
     successBlocks.put("taskId", Sets.newHashSet(1L, 2L));
     Throwable e2 =
diff --git a/docs/coordinator_guide.md b/docs/coordinator_guide.md
index b33bbba5a..11bd911b1 100644
--- a/docs/coordinator_guide.md
+++ b/docs/coordinator_guide.md
@@ -58,7 +58,7 @@ This document will introduce how to deploy Uniffle coordinators.
     rss.coordinator.remote.storage.path hdfs://cluster1/path,hdfs://cluster2/path
     rss.writer.require.memory.retryMax 1200
     rss.client.retry.max 100
-    rss.writer.send.check.timeout 600000
+    rss.client.send.check.timeout.ms 600000
     rss.client.read.buffer.size 14m
    ```
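
Users carrying over an old configuration file can map the outdated key to the renamed one while keeping its value. A minimal sketch of that migration, using an in-memory `java.util.Properties` for illustration (the `MigrateConfig` class name is hypothetical and not part of Uniffle):

```java
import java.util.Properties;

public class MigrateConfig {
    public static void main(String[] args) {
        Properties conf = new Properties();
        // An existing configuration still using the outdated key.
        conf.setProperty("rss.writer.send.check.timeout", "600000");

        // Move the value under the renamed key, if the old one is present.
        String old = (String) conf.remove("rss.writer.send.check.timeout");
        if (old != null) {
            conf.setProperty("rss.client.send.check.timeout.ms", old);
        }

        System.out.println(conf.getProperty("rss.client.send.check.timeout.ms"));
    }
}
```

Running it prints the value now held under `rss.client.send.check.timeout.ms`. Note that in Spark deployments the key is additionally prefixed, as the updated test comments show: `spark.rss.client.send.check.timeout.ms`.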
    
