This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 7d587e1  Automated deployment: Tue Oct 13 12:57:40 UTC 2020 ec558d272c060c2979d85c35f55f5a16cf1002e2
7d587e1 is described below

commit 7d587e15dd3afafcea9d07f80cb7152ba343987a
Author: dailidong <[email protected]>
AuthorDate: Tue Oct 13 12:57:41 2020 +0000

    Automated deployment: Tue Oct 13 12:57:40 UTC 2020 ec558d272c060c2979d85c35f55f5a16cf1002e2
---
 zh-cn/docs/1.3.1/user_doc/cluster-deployment.html | 2 +-
 zh-cn/docs/1.3.1/user_doc/cluster-deployment.json | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/zh-cn/docs/1.3.1/user_doc/cluster-deployment.html b/zh-cn/docs/1.3.1/user_doc/cluster-deployment.html
index 8ae73da..66714b1 100644
--- a/zh-cn/docs/1.3.1/user_doc/cluster-deployment.html
+++ b/zh-cn/docs/1.3.1/user_doc/cluster-deployment.html
@@ -247,7 +247,7 @@ sslTrust="smtp.qq.com"
 #</span><span class="bash"> Where business resource files such as SQL scripts are uploaded; options: HDFS, S3, NONE. For a standalone deployment that should use the local file system, set this to HDFS, since HDFS supports the local file system; if the resource upload feature is not needed, choose NONE. Note: using the local file system does not require deploying Hadoop</span>
 resourceStorageType="HDFS"
 <span class="hljs-meta">
-#</span><span class="bash">If uploaded resources are to be stored on Hadoop and the Hadoop cluster's NameNode has HA enabled, put the Hadoop configuration files core-site.xml and hdfs-site.xml into the conf directory (in this example /opt/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, just change mycluster to the actual IP or hostname</span>
+#</span><span class="bash"> If uploaded resources are to be stored on Hadoop and the Hadoop cluster's NameNode has HA enabled, put the Hadoop configuration files core-site.xml and hdfs-site.xml into the conf directory under the installation path (installPath above; in this example /opt/soft/dolphinscheduler/conf) and configure the namenode cluster name; if the NameNode is not HA, just change mycluster to the actual IP or hostname</span>
 defaultFS="hdfs://mycluster:8020"
 <span class="hljs-meta">
 
diff --git a/zh-cn/docs/1.3.1/user_doc/cluster-deployment.json b/zh-cn/docs/1.3.1/user_doc/cluster-deployment.json
index 724be7e..e65fc8a 100644
--- a/zh-cn/docs/1.3.1/user_doc/cluster-deployment.json
+++ b/zh-cn/docs/1.3.1/user_doc/cluster-deployment.json
@@ -1,6 +1,6 @@
 {
   "filename": "cluster-deployment.md",
-  "__html": "<h1>Cluster Deployment (Cluster)</h1>\n<h1>1. Base software installation (please install the required items yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7 series): either one will do</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+): required; after installing, configure the JAVA_HOME and PATH variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+): required</li>\n<li>Hadoop (2.6+) or MinIO: optional; if the resource upload feature is needed, you can upload to Hadoop or MinIO</li>\n</ul>\n<pre><code class=\"language-markdown\"> Note: DolphinScheduler itself does not depend on Hadoop, Hive, or Spark; it only invokes their Cl [...]
+  "__html": "<h1>Cluster Deployment (Cluster)</h1>\n<h1>1. Base software installation (please install the required items yourself)</h1>\n<ul>\n<li>PostgreSQL (8.2.15+) or MySQL (5.7 series): either one will do</li>\n<li><a href=\"https://www.oracle.com/technetwork/java/javase/downloads/index.html\">JDK</a> (1.8+): required; after installing, configure the JAVA_HOME and PATH variables in /etc/profile</li>\n<li>ZooKeeper (3.4.6+): required</li>\n<li>Hadoop (2.6+) or MinIO: optional; if the resource upload feature is needed, you can upload to Hadoop or MinIO</li>\n</ul>\n<pre><code class=\"language-markdown\"> Note: DolphinScheduler itself does not depend on Hadoop, Hive, or Spark; it only invokes their Cl [...]
   "link": "/zh-cn/docs/1.3.1/user_doc/cluster-deployment.html",
   "meta": {}
 }
\ No newline at end of file
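
For context, the settings touched by this doc change live in DolphinScheduler's deployment config file. A minimal sketch of the relevant fragment, assuming the file is conf/config/install_config.conf and using the example values from the doc text (the mycluster name and /opt/soft/dolphinscheduler path are illustrations, not defaults):

```shell
# Fragment of conf/config/install_config.conf (example values only).

# Where task resource files (e.g. SQL scripts) are stored: HDFS, S3, or NONE.
# To use the local file system, still set HDFS, since HDFS supports local paths;
# choose NONE if the resource upload feature is not needed.
resourceStorageType="HDFS"

# With NameNode HA enabled, copy core-site.xml and hdfs-site.xml into
# <installPath>/conf (e.g. /opt/soft/dolphinscheduler/conf) and point
# defaultFS at the HA cluster name; without HA, replace mycluster with
# the NameNode's actual IP or hostname.
defaultFS="hdfs://mycluster:8020"
```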
