This is an automated email from the ASF dual-hosted git repository.
lidongdai pushed a commit to branch 1.3.5-prepare
in repository https://gitbox.apache.org/repos/asf/incubator-dolphinscheduler.git
The following commit(s) were added to refs/heads/1.3.5-prepare by this push:
new 428cc85 [1.3.5-prepare][Improvement][Docker] Sync docker conf templates to the latest conf properties and update readme (#4684)
428cc85 is described below
commit 428cc85ca761ba7b9e252e57e5af7852f28a9876
Author: Shiwen Cheng <[email protected]>
AuthorDate: Thu Feb 4 23:19:35 2021 +0800
[1.3.5-prepare][Improvement][Docker] Sync docker conf templates to the latest conf properties and update readme (#4684)
---
docker/README.md | 1 +
docker/build/README.md | 32 +++++++++++--
docker/build/README_zh_CN.md | 32 +++++++++++--
.../conf/dolphinscheduler/alert.properties.tpl | 6 +--
.../application-api.properties.tpl | 10 ++--
.../conf/dolphinscheduler/common.properties.tpl | 54 +++++++++-------------
.../dolphinscheduler/datasource.properties.tpl | 7 +--
.../conf/dolphinscheduler/master.properties.tpl | 5 +-
.../conf/dolphinscheduler/quartz.properties.tpl | 2 +-
.../conf/dolphinscheduler/worker.properties.tpl | 11 ++---
.../conf/dolphinscheduler/zookeeper.properties.tpl | 2 +-
docker/build/conf/zookeeper/zoo.cfg | 45 ------------------
docker/build/startup-init-conf.sh | 5 +-
docker/docker-swarm/docker-compose.yml | 1 -
docker/docker-swarm/docker-stack.yml | 1 -
.../src/main/resources/alert.properties | 1 -
.../src/main/resources/application-api.properties | 6 +--
.../src/main/resources/common.properties | 10 ++--
.../src/main/resources/datasource.properties | 2 +-
.../src/main/resources/master.properties | 4 +-
.../src/main/resources/worker.properties | 4 +-
.../src/main/resources/quartz.properties | 2 +-
.../src/main/resources/zookeeper.properties | 2 +-
23 files changed, 110 insertions(+), 135 deletions(-)
diff --git a/docker/README.md b/docker/README.md
index e69de29..dcd2098 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -0,0 +1 @@
+# Dolphin Scheduler for Docker
diff --git a/docker/build/README.md b/docker/build/README.md
index dc0d512..9c3896e 100644
--- a/docker/build/README.md
+++ b/docker/build/README.md
@@ -162,12 +162,40 @@ This environment variable sets the runtime environment for task. The default val
User data directory path, self configuration, please make sure the directory exists and have read write permissions. The default value is `/tmp/dolphinscheduler`
+**`RESOURCE_STORAGE_TYPE`**
+
+This environment variable sets resource storage type for dolphinscheduler like `HDFS`, `S3`, `NONE`. The default value is `HDFS`.
+
+**`RESOURCE_UPLOAD_PATH`**
+
+This environment variable sets resource store path on HDFS/S3 for resource storage. The default value is `/dolphinscheduler`.
+
+**`FS_DEFAULT_FS`**
+
+This environment variable sets fs.defaultFS for resource storage like `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`. The default value is `file:///`.
+
+**`FS_S3A_ENDPOINT`**
+
+This environment variable sets s3 endpoint for resource storage. The default value is `s3.xxx.amazonaws.com`.
+
+**`FS_S3A_ACCESS_KEY`**
+
+This environment variable sets s3 access key for resource storage. The default value is `xxxxxxx`.
+
+**`FS_S3A_SECRET_KEY`**
+
+This environment variable sets s3 secret key for resource storage. The default value is `xxxxxxx`.
+
**`ZOOKEEPER_QUORUM`**
This environment variable sets zookeeper quorum for `master-server` and `worker-serverr`. The default value is `127.0.0.1:2181`.
**Note**: You must be specify it when start a standalone dolphinscheduler server. Like `master-server`, `worker-server`.
+**`ZOOKEEPER_ROOT`**
+
+This environment variable sets zookeeper root directory for dolphinscheduler. The default value is `/dolphinscheduler`.
+
**`MASTER_EXEC_THREADS`**
This environment variable sets exec thread num for `master-server`. The default value is `100`.
@@ -208,10 +236,6 @@ This environment variable sets exec thread num for `worker-server`. The default
This environment variable sets heartbeat interval for `worker-server`. The default value is `10`.
-**`WORKER_FETCH_TASK_NUM`**
-
-This environment variable sets fetch task num for `worker-server`. The default value is `3`.
-
**`WORKER_MAX_CPULOAD_AVG`**
This environment variable sets max cpu load avg for `worker-server`. The default value is `100`.
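For reference, the variables documented above are supplied at container start. A hypothetical `docker run` sketch for a standalone worker with S3 resource storage follows; the image tag, zookeeper address, endpoint, and credentials are placeholders, not values taken from this commit:

```shell
# Hypothetical invocation: standalone worker-server with S3 resource storage.
# Image tag, ZOOKEEPER_QUORUM, endpoint, and keys below are placeholders.
docker run -d --name dolphinscheduler-worker \
  -e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
  -e RESOURCE_STORAGE_TYPE="S3" \
  -e RESOURCE_UPLOAD_PATH="/dolphinscheduler" \
  -e FS_DEFAULT_FS="s3a://dolphinscheduler" \
  -e FS_S3A_ENDPOINT="s3.xxx.amazonaws.com" \
  -e FS_S3A_ACCESS_KEY="xxxxxxx" \
  -e FS_S3A_SECRET_KEY="xxxxxxx" \
  apache/dolphinscheduler:1.3.5 worker-server
```

Note that `ZOOKEEPER_QUORUM` must be set explicitly for a standalone `worker-server`, per the note above.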
diff --git a/docker/build/README_zh_CN.md b/docker/build/README_zh_CN.md
index 10a1306..b5fb79b 100644
--- a/docker/build/README_zh_CN.md
+++ b/docker/build/README_zh_CN.md
@@ -162,12 +162,40 @@ Dolphin Scheduler映像使用了几个容易遗漏的环境变量。虽然这些
用户数据目录, 用户自己配置, 请确保这个目录存在并且用户读写权限, 默认值 `/tmp/dolphinscheduler`。
+**`RESOURCE_STORAGE_TYPE`**
+
+配置`dolphinscheduler`的资源存储类型,可选项为 `HDFS`、`S3`、`NONE`,默认值 `HDFS`。
+
+**`RESOURCE_UPLOAD_PATH`**
+
+配置`HDFS/S3`上的资源存储路径,默认值 `/dolphinscheduler`。
+
+**`FS_DEFAULT_FS`**
+
+配置资源存储的文件系统协议,如 `file:///`, `hdfs://mycluster:8020` or `s3a://dolphinscheduler`,默认值 `file:///`。
+
+**`FS_S3A_ENDPOINT`**
+
+当`RESOURCE_STORAGE_TYPE=S3`时,需要配置`S3`的访问路径,默认值 `s3.xxx.amazonaws.com`。
+
+**`FS_S3A_ACCESS_KEY`**
+
+当`RESOURCE_STORAGE_TYPE=S3`时,需要配置`S3`的`s3 access key`,默认值 `xxxxxxx`。
+
+**`FS_S3A_SECRET_KEY`**
+
+当`RESOURCE_STORAGE_TYPE=S3`时,需要配置`S3`的`s3 secret key`,默认值 `xxxxxxx`。
+
**`ZOOKEEPER_QUORUM`**
配置`master-server`和`worker-serverr`的`Zookeeper`地址, 默认值 `127.0.0.1:2181`。
**注意**: 当运行`dolphinscheduler`中`master-server`、`worker-server`这些服务时,必须指定这个环境变量,以便于你更好的搭建分布式服务。
+**`ZOOKEEPER_ROOT`**
+
+配置`dolphinscheduler`在`zookeeper`中数据存储的根目录,默认值 `/dolphinscheduler`。
+
**`MASTER_EXEC_THREADS`**
配置`master-server`中的执行线程数量,默认值 `100`。
@@ -208,10 +236,6 @@ Dolphin Scheduler映像使用了几个容易遗漏的环境变量。虽然这些
配置`worker-server`中的心跳交互时间,默认值 `10`。
-**`WORKER_FETCH_TASK_NUM`**
-
-配置`worker-server`中的获取任务的数量,默认值 `3`。
-
**`WORKER_MAX_CPULOAD_AVG`**
配置`worker-server`中的CPU中的最大`load average`值,默认值 `100`。
diff --git a/docker/build/conf/dolphinscheduler/alert.properties.tpl b/docker/build/conf/dolphinscheduler/alert.properties.tpl
index b940ecd..b479521 100644
--- a/docker/build/conf/dolphinscheduler/alert.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/alert.properties.tpl
@@ -14,11 +14,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
+
#alert type is EMAIL/SMS
alert.type=EMAIL
-# alter msg template, default is html template
-#alert.template=html
# mail server configuration
mail.protocol=SMTP
mail.server.host=${MAIL_SERVER_HOST}
@@ -46,5 +45,4 @@ enterprise.wechat.push.url=https://qyapi.weixin.qq.com/cgi-bin/message/send?acce
enterprise.wechat.team.send.msg={\"toparty\":\"$toParty\",\"agentid\":\"$agentId\",\"msgtype\":\"text\",\"text\":{\"content\":\"$msg\"},\"safe\":\"0\"}
enterprise.wechat.user.send.msg={\"touser\":\"$toUser\",\"agentid\":\"$agentId\",\"msgtype\":\"markdown\",\"markdown\":{\"content\":\"$msg\"}}
-
-
+plugin.dir=/Users/xx/your/path/to/plugin/dir
diff --git a/docker/build/conf/dolphinscheduler/application-api.properties.tpl b/docker/build/conf/dolphinscheduler/application-api.properties.tpl
index 8891592..0d73f13 100644
--- a/docker/build/conf/dolphinscheduler/application-api.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/application-api.properties.tpl
@@ -21,17 +21,19 @@ server.port=12345
# session config
server.servlet.session.timeout=7200
-# servlet config
server.servlet.context-path=/dolphinscheduler/
# file size limit for upload
spring.servlet.multipart.max-file-size=1024MB
spring.servlet.multipart.max-request-size=1024MB
+# enable response compression
+server.compression.enabled=true
+server.compression.mime-types=text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json,application/xml
+
# post content
server.jetty.max-http-post-size=5000000
-# i18n
spring.messages.encoding=UTF-8
#i18n classpath folder , file prefix messages, if have many files, use "," seperator
@@ -39,7 +41,3 @@ spring.messages.basename=i18n/messages
# Authentication types (supported types: PASSWORD)
security.authentication.type=PASSWORD
-
-
-
-
diff --git a/docker/build/conf/dolphinscheduler/common.properties.tpl b/docker/build/conf/dolphinscheduler/common.properties.tpl
index ff74598..83ee1a0 100644
--- a/docker/build/conf/dolphinscheduler/common.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/common.properties.tpl
@@ -15,35 +15,26 @@
# limitations under the License.
#
-#============================================================================
-# System
-#============================================================================
-# system env path. self configuration, please make sure the directory and file exists and have read write execute permissions
-dolphinscheduler.env.path=${DOLPHINSCHEDULER_ENV_PATH}
-
-# user data directory path, self configuration, please make sure the directory exists and have read write permissions
-data.basedir.path=${DOLPHINSCHEDULER_DATA_BASEDIR_PATH}
-
-# resource upload startup type : HDFS,S3,NONE
+# resource storage type : HDFS,S3,NONE
resource.storage.type=${RESOURCE_STORAGE_TYPE}
-#============================================================================
-# HDFS
-#============================================================================
# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions。"/dolphinscheduler" is recommended
resource.upload.path=${RESOURCE_UPLOAD_PATH}
+# user data local directory path, please make sure the directory exists and have read write permissions
+data.basedir.path=${DOLPHINSCHEDULER_DATA_BASEDIR_PATH}
+
# whether kerberos starts
-#hadoop.security.authentication.startup.state=false
+hadoop.security.authentication.startup.state=false
# java.security.krb5.conf path
-#java.security.krb5.conf.path=/opt/krb5.conf
+java.security.krb5.conf.path=/opt/krb5.conf
-# loginUserFromKeytab user
-#[email protected]
+# login user from keytab username
[email protected]
-# loginUserFromKeytab path
-#login.user.keytab.path=/opt/hdfs.headless.keytab
+# login user from keytab path
+login.user.keytab.path=/opt/hdfs.headless.keytab
#resource.view.suffixs
#resource.view.suffixs=txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties
@@ -51,28 +42,25 @@ resource.upload.path=${RESOURCE_UPLOAD_PATH}
# if resource.storage.type=HDFS, the user need to have permission to create directories under the HDFS root path
hdfs.root.user=hdfs
-# kerberos expire time
-kerberos.expire.time=7
-
-#============================================================================
-# S3
-#============================================================================
-# if resource.storage.type=S3,the value like: s3a://dolphinscheduler ; if resource.storage.type=HDFS, When namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
+# if resource.storage.type=S3, the value like: s3a://dolphinscheduler ; if resource.storage.type=HDFS, When namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
fs.defaultFS=${FS_DEFAULT_FS}
-# if resource.storage.type=S3,s3 endpoint
+# if resource.storage.type=S3, s3 endpoint
fs.s3a.endpoint=${FS_S3A_ENDPOINT}
-# if resource.storage.type=S3,s3 access key
+# if resource.storage.type=S3, s3 access key
fs.s3a.access.key=${FS_S3A_ACCESS_KEY}
-# if resource.storage.type=S3,s3 secret key
+# if resource.storage.type=S3, s3 secret key
fs.s3a.secret.key=${FS_S3A_SECRET_KEY}
-# if not use hadoop resourcemanager, please keep default value; if resourcemanager HA enable, please type the HA ips ; if resourcemanager is single, make this value empty TODO
+# if resourcemanager HA enable, please type the HA ips ; if resourcemanager is single, make this value empty
yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
-# If resourcemanager HA enable or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ark1 to actual resourcemanager hostname.
-yarn.application.status.address=http://ark1:8088/ws/v1/cluster/apps/%s
-
+# if resourcemanager HA enable or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname.
+yarn.application.status.address=http://ds1:8088/ws/v1/cluster/apps/%s
+# system env path
+dolphinscheduler.env.path=${DOLPHINSCHEDULER_ENV_PATH}
+development.state=false
+kerberos.expire.time=7
diff --git a/docker/build/conf/dolphinscheduler/datasource.properties.tpl b/docker/build/conf/dolphinscheduler/datasource.properties.tpl
index f7c5ee6..4d12931 100644
--- a/docker/build/conf/dolphinscheduler/datasource.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/datasource.properties.tpl
@@ -15,15 +15,12 @@
# limitations under the License.
#
-# db
+# postgresql
spring.datasource.driver-class-name=${DATABASE_DRIVER}
spring.datasource.url=jdbc:${DATABASE_TYPE}://${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_DATABASE}?${DATABASE_PARAMS}
spring.datasource.username=${DATABASE_USERNAME}
spring.datasource.password=${DATABASE_PASSWORD}
-## base spring data source configuration todo need to remove
-#spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
-
# connection configuration
#spring.datasource.initialSize=5
# min connection number
@@ -63,4 +60,4 @@ spring.datasource.password=${DATABASE_PASSWORD}
# open PSCache, specify count PSCache for every connection
#spring.datasource.poolPreparedStatements=true
-#spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
\ No newline at end of file
+#spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
diff --git a/docker/build/conf/dolphinscheduler/master.properties.tpl b/docker/build/conf/dolphinscheduler/master.properties.tpl
index 17dd6f9..a9370c1 100644
--- a/docker/build/conf/dolphinscheduler/master.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/master.properties.tpl
@@ -21,6 +21,9 @@ master.exec.threads=${MASTER_EXEC_THREADS}
# master execute task number in parallel
master.exec.task.num=${MASTER_EXEC_TASK_NUM}
+# master dispatch task number
+#master.dispatch.task.num = 3
+
# master heartbeat interval
master.heartbeat.interval=${MASTER_HEARTBEAT_INTERVAL}
@@ -37,4 +40,4 @@ master.max.cpuload.avg=${MASTER_MAX_CPULOAD_AVG}
master.reserved.memory=${MASTER_RESERVED_MEMORY}
# master listen port
-#master.listen.port=${MASTER_LISTEN_PORT}
\ No newline at end of file
+master.listen.port=${MASTER_LISTEN_PORT}
diff --git a/docker/build/conf/dolphinscheduler/quartz.properties.tpl b/docker/build/conf/dolphinscheduler/quartz.properties.tpl
index 2564579..10f1812 100644
--- a/docker/build/conf/dolphinscheduler/quartz.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/quartz.properties.tpl
@@ -51,4 +51,4 @@
#============================================================================
# Configure Datasources
#============================================================================
-#org.quartz.dataSource.myDs.connectionProvider.class = org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
\ No newline at end of file
+#org.quartz.dataSource.myDs.connectionProvider.class = org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
diff --git a/docker/build/conf/dolphinscheduler/worker.properties.tpl b/docker/build/conf/dolphinscheduler/worker.properties.tpl
index d596be9..81cf8f9 100644
--- a/docker/build/conf/dolphinscheduler/worker.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/worker.properties.tpl
@@ -21,17 +21,14 @@ worker.exec.threads=${WORKER_EXEC_THREADS}
# worker heartbeat interval
worker.heartbeat.interval=${WORKER_HEARTBEAT_INTERVAL}
-# submit the number of tasks at a time
-worker.fetch.task.num=${WORKER_FETCH_TASK_NUM}
-
-# only less than cpu avg load, worker server can work. default value : the number of cpu cores * 2
+# only less than cpu avg load, worker server can work. default value -1: the number of cpu cores * 2
worker.max.cpuload.avg=${WORKER_MAX_CPULOAD_AVG}
# only larger than reserved memory, worker server can work. default value : physical memory * 1/6, unit is G.
worker.reserved.memory=${WORKER_RESERVED_MEMORY}
# worker listener port
-#worker.listen.port=${WORKER_LISTEN_PORT}
+worker.listen.port=${WORKER_LISTEN_PORT}
-# default worker group
-#worker.group=${WORKER_GROUP}
\ No newline at end of file
+# default worker group,if this worker belongs different groups,you can config the following like that 'worker.groups=default,test'
+worker.group=${WORKER_GROUP}
diff --git a/docker/build/conf/dolphinscheduler/zookeeper.properties.tpl b/docker/build/conf/dolphinscheduler/zookeeper.properties.tpl
index 51540aa..3f1bd7b 100644
--- a/docker/build/conf/dolphinscheduler/zookeeper.properties.tpl
+++ b/docker/build/conf/dolphinscheduler/zookeeper.properties.tpl
@@ -26,4 +26,4 @@ zookeeper.dolphinscheduler.root=${ZOOKEEPER_ROOT}
#zookeeper.connection.timeout=30000
#zookeeper.retry.base.sleep=100
#zookeeper.retry.max.sleep=30000
-#zookeeper.retry.maxtime=10
\ No newline at end of file
+#zookeeper.retry.maxtime=10
diff --git a/docker/build/conf/zookeeper/zoo.cfg b/docker/build/conf/zookeeper/zoo.cfg
deleted file mode 100644
index 7980d37..0000000
--- a/docker/build/conf/zookeeper/zoo.cfg
+++ /dev/null
@@ -1,45 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-# The number of milliseconds of each tick
-tickTime=2000
-# The number of ticks that the initial
-# synchronization phase can take
-initLimit=10
-# The number of ticks that can pass between
-# sending a request and getting an acknowledgement
-syncLimit=5
-# the directory where the snapshot is stored.
-# do not use /tmp for storage, /tmp here is just
-# example sakes.
-dataDir=/tmp/zookeeper
-# the port at which the clients will connect
-clientPort=2181
-# the maximum number of client connections.
-# increase this if you need to handle more clients
-#maxClientCnxns=60
-#
-# Be sure to read the maintenance section of the
-# administrator guide before turning on autopurge.
-#
-# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
-#
-# The number of snapshots to retain in dataDir
-#autopurge.snapRetainCount=3
-# Purge task interval in hours
-# Set to "0" to disable auto purge feature
-#autopurge.purgeInterval=1
diff --git a/docker/build/startup-init-conf.sh b/docker/build/startup-init-conf.sh
index 200f17d..27c73da 100755
--- a/docker/build/startup-init-conf.sh
+++ b/docker/build/startup-init-conf.sh
@@ -39,7 +39,7 @@ export DATABASE_PARAMS=${DATABASE_PARAMS:-"characterEncoding=utf8"}
export DOLPHINSCHEDULER_ENV_PATH=${DOLPHINSCHEDULER_ENV_PATH:-"/opt/dolphinscheduler/conf/env/dolphinscheduler_env.sh"}
export DOLPHINSCHEDULER_DATA_BASEDIR_PATH=${DOLPHINSCHEDULER_DATA_BASEDIR_PATH:-"/tmp/dolphinscheduler"}
export RESOURCE_STORAGE_TYPE=${RESOURCE_STORAGE_TYPE:-"HDFS"}
-export RESOURCE_UPLOAD_PATH=${RESOURCE_UPLOAD_PATH:-"/ds"}
+export RESOURCE_UPLOAD_PATH=${RESOURCE_UPLOAD_PATH:-"/dolphinscheduler"}
export FS_DEFAULT_FS=${FS_DEFAULT_FS:-"file:///"}
export FS_S3A_ENDPOINT=${FS_S3A_ENDPOINT:-"s3.xxx.amazonaws.com"}
export FS_S3A_ACCESS_KEY=${FS_S3A_ACCESS_KEY:-"xxxxxxx"}
@@ -68,7 +68,6 @@ export MASTER_LISTEN_PORT=${MASTER_LISTEN_PORT:-"5678"}
#============================================================================
export WORKER_EXEC_THREADS=${WORKER_EXEC_THREADS:-"100"}
export WORKER_HEARTBEAT_INTERVAL=${WORKER_HEARTBEAT_INTERVAL:-"10"}
-export WORKER_FETCH_TASK_NUM=${WORKER_FETCH_TASK_NUM:-"3"}
export WORKER_MAX_CPULOAD_AVG=${WORKER_MAX_CPULOAD_AVG:-"100"}
export WORKER_RESERVED_MEMORY=${WORKER_RESERVED_MEMORY:-"0.1"}
export WORKER_LISTEN_PORT=${WORKER_LISTEN_PORT:-"1234"}
@@ -77,7 +76,7 @@ export WORKER_GROUP=${WORKER_GROUP:-"default"}
#============================================================================
# Alert Server
#============================================================================
-# XLS FILE
+# xls file
export XLS_FILE_PATH=${XLS_FILE_PATH:-"/tmp/xls"}
# mail
export MAIL_SERVER_HOST=${MAIL_SERVER_HOST:-""}
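The exports in startup-init-conf.sh rely on POSIX parameter expansion: `${VAR:-default}` keeps a caller-supplied value and falls back to the default only when the variable is unset or empty, which is how the new `/dolphinscheduler` upload path takes effect without overriding user configuration. A minimal sketch of the pattern:

```shell
#!/bin/sh
# Minimal sketch of the ${VAR:-default} fallback used throughout startup-init-conf.sh.
# A value exported by the caller wins; otherwise the default "/dolphinscheduler" applies.
export RESOURCE_UPLOAD_PATH=${RESOURCE_UPLOAD_PATH:-"/dolphinscheduler"}
echo "RESOURCE_UPLOAD_PATH=${RESOURCE_UPLOAD_PATH}"
```

Running the script with `RESOURCE_UPLOAD_PATH` already exported preserves the caller's value; running it in a clean environment yields the default.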
diff --git a/docker/docker-swarm/docker-compose.yml b/docker/docker-swarm/docker-compose.yml
index 721d18b..2af0a8f 100644
--- a/docker/docker-swarm/docker-compose.yml
+++ b/docker/docker-swarm/docker-compose.yml
@@ -169,7 +169,6 @@ services:
TZ: Asia/Shanghai
WORKER_EXEC_THREADS: "100"
WORKER_HEARTBEAT_INTERVAL: "10"
- WORKER_FETCH_TASK_NUM: "3"
WORKER_MAX_CPULOAD_AVG: "100"
WORKER_RESERVED_MEMORY: "0.1"
WORKER_GROUP: "default"
diff --git a/docker/docker-swarm/docker-stack.yml b/docker/docker-swarm/docker-stack.yml
index 6ef73ab..bec281f 100644
--- a/docker/docker-swarm/docker-stack.yml
+++ b/docker/docker-swarm/docker-stack.yml
@@ -163,7 +163,6 @@ services:
TZ: Asia/Shanghai
WORKER_EXEC_THREADS: "100"
WORKER_HEARTBEAT_INTERVAL: "10"
- WORKER_FETCH_TASK_NUM: "3"
WORKER_MAX_CPULOAD_AVG: "100"
WORKER_RESERVED_MEMORY: "0.1"
WORKER_GROUP: "default"
diff --git a/dolphinscheduler-alert/src/main/resources/alert.properties b/dolphinscheduler-alert/src/main/resources/alert.properties
index 19b55fe..c46edee 100644
--- a/dolphinscheduler-alert/src/main/resources/alert.properties
+++ b/dolphinscheduler-alert/src/main/resources/alert.properties
@@ -46,4 +46,3 @@ enterprise.wechat.enable=false
#enterprise.wechat.user.send.msg={\"touser\":\"$toUser\",\"agentid\":\"$agentId\",\"msgtype\":\"markdown\",\"markdown\":{\"content\":\"$msg\"}}
plugin.dir=/Users/xx/your/path/to/plugin/dir
-
diff --git a/dolphinscheduler-api/src/main/resources/application-api.properties b/dolphinscheduler-api/src/main/resources/application-api.properties
index 1fd24c8..0d73f13 100644
--- a/dolphinscheduler-api/src/main/resources/application-api.properties
+++ b/dolphinscheduler-api/src/main/resources/application-api.properties
@@ -31,7 +31,7 @@ spring.servlet.multipart.max-request-size=1024MB
server.compression.enabled=true
server.compression.mime-types=text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json,application/xml
-#post content
+# post content
server.jetty.max-http-post-size=5000000
spring.messages.encoding=UTF-8
@@ -41,7 +41,3 @@ spring.messages.basename=i18n/messages
# Authentication types (supported types: PASSWORD)
security.authentication.type=PASSWORD
-
-
-
-
diff --git a/dolphinscheduler-common/src/main/resources/common.properties b/dolphinscheduler-common/src/main/resources/common.properties
index ef49a19..b916b10 100644
--- a/dolphinscheduler-common/src/main/resources/common.properties
+++ b/dolphinscheduler-common/src/main/resources/common.properties
@@ -33,7 +33,7 @@ java.security.krb5.conf.path=/opt/krb5.conf
# login user from keytab username
[email protected]
-# loginUserFromKeytab path
+# login user from keytab path
login.user.keytab.path=/opt/hdfs.headless.keytab
#resource.view.suffixs
@@ -42,16 +42,16 @@ login.user.keytab.path=/opt/hdfs.headless.keytab
# if resource.storage.type=HDFS, the user need to have permission to create
directories under the HDFS root path
hdfs.root.user=hdfs
-# if resource.storage.type=S3,the value like: s3a://dolphinscheduler ; if resource.storage.type=HDFS, When namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
+# if resource.storage.type=S3, the value like: s3a://dolphinscheduler ; if resource.storage.type=HDFS, When namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
fs.defaultFS=hdfs://mycluster:8020
-# if resource.storage.type=S3,s3 endpoint
+# if resource.storage.type=S3, s3 endpoint
fs.s3a.endpoint=http://192.168.xx.xx:9010
-# if resource.storage.type=S3,s3 access key
+# if resource.storage.type=S3, s3 access key
fs.s3a.access.key=A3DXS30FO22544RE
-# if resource.storage.type=S3,s3 secret key
+# if resource.storage.type=S3, s3 secret key
fs.s3a.secret.key=OloCLq3n+8+sdPHUhJ21XrSxTC+JK
# if resourcemanager HA enable, please type the HA ips ; if resourcemanager is single, make this value empty
diff --git a/dolphinscheduler-dao/src/main/resources/datasource.properties b/dolphinscheduler-dao/src/main/resources/datasource.properties
index 25ac220..9eca946 100644
--- a/dolphinscheduler-dao/src/main/resources/datasource.properties
+++ b/dolphinscheduler-dao/src/main/resources/datasource.properties
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-
+
# postgresql
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
diff --git a/dolphinscheduler-server/src/main/resources/master.properties b/dolphinscheduler-server/src/main/resources/master.properties
index 44301fb..db963e0 100644
--- a/dolphinscheduler-server/src/main/resources/master.properties
+++ b/dolphinscheduler-server/src/main/resources/master.properties
@@ -21,7 +21,6 @@
# master execute task number in parallel
#master.exec.task.num=20
-
# master dispatch task number
#master.dispatch.task.num = 3
@@ -34,7 +33,6 @@
# master commit task interval
#master.task.commit.interval=1000
-
# only less than cpu avg load, master server can work. default value -1 : the number of cpu cores * 2
#master.max.cpuload.avg=-1
@@ -42,4 +40,4 @@
#master.reserved.memory=0.3
# master listen port
-#master.listen.port=5678
\ No newline at end of file
+#master.listen.port=5678
diff --git a/dolphinscheduler-server/src/main/resources/worker.properties b/dolphinscheduler-server/src/main/resources/worker.properties
index 72e7163..ba9d72f 100644
--- a/dolphinscheduler-server/src/main/resources/worker.properties
+++ b/dolphinscheduler-server/src/main/resources/worker.properties
@@ -28,7 +28,7 @@
#worker.reserved.memory=0.3
# worker listener port
-#worker.listen.port: 1234
+#worker.listen.port=1234
-# default worker group,if this worker belongs different groups,you can config the following like that `worker.groups=default,test`
+# default worker group,if this worker belongs different groups,you can config the following like that 'worker.groups=default,test'
worker.groups=default
diff --git a/dolphinscheduler-service/src/main/resources/quartz.properties b/dolphinscheduler-service/src/main/resources/quartz.properties
index 6e208f6..93ee71c 100644
--- a/dolphinscheduler-service/src/main/resources/quartz.properties
+++ b/dolphinscheduler-service/src/main/resources/quartz.properties
@@ -51,4 +51,4 @@
#============================================================================
# Configure Datasources
#============================================================================
-#org.quartz.dataSource.myDs.connectionProvider.class = org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
\ No newline at end of file
+#org.quartz.dataSource.myDs.connectionProvider.class = org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
diff --git a/dolphinscheduler-service/src/main/resources/zookeeper.properties b/dolphinscheduler-service/src/main/resources/zookeeper.properties
index 2204467..c539891 100644
--- a/dolphinscheduler-service/src/main/resources/zookeeper.properties
+++ b/dolphinscheduler-service/src/main/resources/zookeeper.properties
@@ -26,4 +26,4 @@ zookeeper.quorum=localhost:2181
#zookeeper.connection.timeout=30000
#zookeeper.retry.base.sleep=100
#zookeeper.retry.max.sleep=30000
-#zookeeper.retry.maxtime=10
\ No newline at end of file
+#zookeeper.retry.maxtime=10