This is an automated email from the ASF dual-hosted git repository.

zuston pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-uniffle.git


The following commit(s) were added to refs/heads/master by this push:
     new d12d6f2ce [#1576] feat(doc): server deploy guide without hadoop-home env (#1577)
d12d6f2ce is described below

commit d12d6f2cea136f6a8320367c75aa8ea7c55031e2
Author: Junfan Zhang <[email protected]>
AuthorDate: Mon Mar 18 17:38:12 2024 +0800

    [#1576] feat(doc): server deploy guide without hadoop-home env (#1577)
    
    ### What changes were proposed in this pull request?
    
    Provide a Uniffle server deploy guide for machines without a Hadoop env.
    
    ### Why are the changes needed?
    
    Leveraging #1379 and #1370, we can set up the Uniffle shuffle server without a Hadoop env.
    This simplifies the quick start process.
    
    Fix: #1576
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Existing tests
---
 .../apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java    | 2 ++
 docs/server_guide.md                                                 | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java b/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java
index da45c82a8..fffb7af3f 100644
--- a/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java
+++ b/client-spark/common/src/test/java/org/apache/uniffle/shuffle/manager/RssShuffleManagerBaseTest.java
@@ -63,6 +63,8 @@ public class RssShuffleManagerBaseTest {
   @Test
   public void testGetDefaultRemoteStorageInfo() {
     SparkConf sparkConf = new SparkConf();
+    sparkConf.set(
+        "spark." + RssClientConf.RSS_CLIENT_REMOTE_STORAGE_USE_LOCAL_CONF_ENABLED.key(), "false");
     RemoteStorageInfo remoteStorageInfo =
         RssShuffleManagerBase.getDefaultRemoteStorageInfo(sparkConf);
     assertTrue(remoteStorageInfo.getConfItems().isEmpty());
diff --git a/docs/server_guide.md b/docs/server_guide.md
index c5b0d75dc..9d6585987 100644
--- a/docs/server_guide.md
+++ b/docs/server_guide.md
@@ -32,6 +32,11 @@ This document will introduce how to deploy Uniffle shuffle servers.
      HADOOP_HOME=<hadoop home>
      XMX_SIZE="80g"
    ```
+   
+   In the following cases, you don't need to specify `HADOOP_HOME`, which simplifies server deployment:
+   1. using a storage type without HDFS, like `MEMORY_LOCALFILE`
+   2. using HDFS with a package that bundles the Hadoop jars, built like this: `./build_distribution.sh --hadoop-profile 'hadoop3.2' -Phadoop-dependencies-included`. But you need to explicitly set `spark.rss.client.remote.storage.useLocalConfAsDefault=true`
+
 3. update RSS_HOME/conf/server.conf, eg,
    ```
      rss.rpc.server.port 19999

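The no-`HADOOP_HOME` deployment flow described in the doc change above can be sketched as a shell/config fragment. The build command and the `spark.rss.client.remote.storage.useLocalConfAsDefault` property come from the commit; the `rss-env.sh` contents and storage-type settings are illustrative assumptions, not exact file contents:

```shell
# Sketch of deploying a Uniffle shuffle server without HADOOP_HOME.
# Paths and values below are illustrative assumptions.

# Case 2 only: build a distribution that bundles the Hadoop jars,
# so the server does not need a local Hadoop installation.
./build_distribution.sh --hadoop-profile 'hadoop3.2' -Phadoop-dependencies-included

# RSS_HOME/conf/rss-env.sh can now omit HADOOP_HOME entirely:
cat > conf/rss-env.sh <<'EOF'
XMX_SIZE="80g"
EOF

# Case 1: a storage type without HDFS needs no Hadoop at all, e.g. in
# RSS_HOME/conf/server.conf:
#   rss.storage.type MEMORY_LOCALFILE
#
# Case 2: when using HDFS with bundled Hadoop jars, clients must opt in
# to using their local Hadoop conf as the default (Spark client conf):
#   spark.rss.client.remote.storage.useLocalConfAsDefault=true
```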