This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 941a6c0  Automated deployment: Thu Oct 15 07:11:26 UTC 2020 8b336e89e8b1d1bfedd6e337596e7bce1c893237
941a6c0 is described below

commit 941a6c0647d2b702c950f130f6abd66af7493dd7
Author: dailidong <[email protected]>
AuthorDate: Thu Oct 15 07:11:27 2020 +0000

    Automated deployment: Thu Oct 15 07:11:26 UTC 2020 8b336e89e8b1d1bfedd6e337596e7bce1c893237
---
 en-us/docs/1.3.1/user_doc/system-manual.html | 23 +++++++++--------------
 en-us/docs/1.3.1/user_doc/system-manual.json |  2 +-
 2 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/en-us/docs/1.3.1/user_doc/system-manual.html b/en-us/docs/1.3.1/user_doc/system-manual.html
index 9ff7658..6894ae3 100644
--- a/en-us/docs/1.3.1/user_doc/system-manual.html
+++ b/en-us/docs/1.3.1/user_doc/system-manual.html
@@ -260,13 +260,13 @@ The operation functions of the workflow definition list are as follows:</p>
 <ul>
 <li>Upload resource files and udf functions, all uploaded files and resources will be stored on hdfs, so the following configuration items are required:</li>
 </ul>
-<pre><code>conf/common/common.properties  
+<pre><code>conf/common.properties  
     # Users who have permission to create directories under the HDFS root path
     hdfs.root.user=hdfs
-    # data base dir, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions。&quot;/escheduler&quot; is recommended
-    data.store2hdfs.basepath=/dolphinscheduler
-    # resource upload startup type : HDFS,S3,NONE
-    res.upload.startup.type=HDFS
+    # data base dir, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions。&quot;/dolphinscheduler&quot; is recommended
+    resource.upload.path=/dolphinscheduler
+    # resource storage type : HDFS,S3,NONE
+    resource.storage.type=HDFS
     # whether kerberos starts
     hadoop.security.authentication.startup.state=false
     # java.security.krb5.conf path
@@ -274,11 +274,10 @@ The operation functions of the workflow definition list are as follows:</p>
     # loginUserFromKeytab user
     [email protected]
     # loginUserFromKeytab path
-    login.user.keytab.path=/opt/hdfs.headless.keytab
-    
-conf/common/hadoop.properties      
-    # ha or single namenode,If namenode ha needs to copy core-site.xml and hdfs-site.xml
-    # to the conf directory,support s3,for example : s3a://dolphinscheduler
+    login.user.keytab.path=/opt/hdfs.headless.keytab    
+    # if resource.storage.type is HDFS,and your Hadoop Cluster NameNode has HA enabled, you need to put core-site.xml and hdfs-site.xml in the installPath/conf directory. In this example, it is placed under /opt/soft/dolphinscheduler/conf, and configure the namenode cluster name; if the NameNode is not HA, modify it to a specific IP or host name.
+    # if resource.storage.type is S3,write S3 address,HA,for example :s3a://dolphinscheduler,
+    # Note,s3 be sure to create the root directory /dolphinscheduler
     fs.defaultFS=hdfs://mycluster:8020    
     #resourcemanager ha note this need ips , this empty if single
     yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx    
@@ -286,10 +285,6 @@ conf/common/hadoop.properties
     yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
 
 </code></pre>
-<ul>
-<li>Only one address needs to be configured for yarn.resourcemanager.ha.rm.ids and yarn.application.status.address, and the other address is empty.</li>
-<li>You need to copy core-site.xml and hdfs-site.xml from the conf directory of the Hadoop cluster to the conf directory of the dolphinscheduler project, and restart the api-server service.</li>
-</ul>
 <h4>3.2 File management</h4>
 <blockquote>
 <p>It is the management of various resource files, including the creation of basic txt/log/sh/conf/py/java and other files, uploading jar packages and other types of files, which can be edited, renamed, downloaded, and deleted.</p>
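For readers following this change: the first hunk renames the 1.3.1 resource-storage keys (data.store2hdfs.basepath becomes resource.upload.path, res.upload.startup.type becomes resource.storage.type) and folds the former conf/common/hadoop.properties settings into conf/common.properties. A minimal sketch of the resulting file, using only the property names that appear in the diff; the hosts and paths are placeholders, not values from any real deployment:

```properties
# resource storage type : HDFS,S3,NONE
resource.storage.type=HDFS
# HDFS path where uploaded resource files are stored; the directory must
# already exist on hdfs with read/write permission for the configured user
resource.upload.path=/dolphinscheduler
# Users who have permission to create directories under the HDFS root path
hdfs.root.user=hdfs
# single NameNode: point at its host, e.g. hdfs://<namenode-host>:8020 (placeholder)
# NameNode HA: use the cluster name from the copied core-site.xml/hdfs-site.xml
fs.defaultFS=hdfs://mycluster:8020
```

With resource.storage.type=S3, fs.defaultFS would instead carry an s3a:// address, as the added comment in the hunk notes.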
diff --git a/en-us/docs/1.3.1/user_doc/system-manual.json b/en-us/docs/1.3.1/user_doc/system-manual.json
index 24714f9..c694b52 100644
--- a/en-us/docs/1.3.1/user_doc/system-manual.json
+++ b/en-us/docs/1.3.1/user_doc/system-manual.json
@@ -1,6 +1,6 @@
 {
   "filename": "system-manual.md",
-  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to<a href=\"quick-start.html\">Get started quickly</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics of all items of the user.\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80%\" />\n</p></p>\n<h3>2. Project management</h3>\n<h4>2.1 Create proje [...]
+  "__html": "<h1>System User Manual</h1>\n<h2>Get started quickly</h2>\n<blockquote>\n<p>Please refer to<a href=\"quick-start.html\">Get started quickly</a></p>\n</blockquote>\n<h2>Operation guide</h2>\n<h3>1. Home</h3>\n<p>The home page contains task status statistics, process status statistics, and workflow definition statistics of all items of the user.\n<p align=\"center\">\n<img src=\"/img/home_en.png\" width=\"80%\" />\n</p></p>\n<h3>2. Project management</h3>\n<h4>2.1 Create proje [...]
   "link": "/en-us/docs/1.3.1/user_doc/system-manual.html",
   "meta": {}
 }
\ No newline at end of file
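The second hunk removes a bullet whose advice still applies to the YARN keys that remain in the doc: only one of the two ResourceManager settings should carry a real address. A hedged sketch of the two variants, keeping the placeholder IPs/hosts exactly as the diff shows them:

```properties
# ResourceManager HA: list the RM IPs here; leave empty when there is a single RM
yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
# single ResourceManager: put its host in place of xxxx and leave the ids above empty;
# %s is substituted with the YARN application id when the status is queried
yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
```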
