zhongjiajie commented on code in PR #11113:
URL: https://github.com/apache/dolphinscheduler/pull/11113#discussion_r929756889


##########
docs/docs/en/architecture/configuration.md:
##########
@@ -4,13 +4,11 @@
 
 ## Preface
 
-This document explains the DolphinScheduler application configurations 
according to DolphinScheduler-1.3.x versions.
+This document explains the DolphinScheduler application configurations 
according to DolphinScheduler-3.0.0-beta-2 versions.

Review Comment:
   I think we can directly remove the version part; the website dolphinscheduler.apache.org has a sidebar where readers can choose the specific version of DolphinScheduler they want to view.
   ```suggestion
   This document explains the DolphinScheduler application configurations.
   ```



##########
docs/docs/en/architecture/configuration.md:
##########
@@ -139,321 +123,203 @@ export DOLPHINSCHEDULER_OPTS="
 
 > "-XX:DisableExplicitGC" is not recommended due to may lead to memory link 
 > (DS dependent on Netty to communicate).
 
-### datasource.properties [datasource config properties]
+### 2. Database connection related configuration
+
+DS uses Spring Hikari to manage database connections, configuration file 
location:
+
+|Service| Configuration file  |
+|--|--|
+Master Server | `master-server/conf/application.yaml`
+Api Server| `api-server/conf/application.yaml`
+Worker Server| `worker-server/conf/application.yaml`
+Alert Server| `alert-server/conf/application.yaml`

Review Comment:
   There are some Markdown syntax errors here, but more importantly, we also support changing the database configs via the file `dolphinscheduler_env.sh`. I think it is better to add that method to these docs as well.
   
   ```suggestion
   | Master Server | `master-server/conf/application.yaml` |
   | Api Server| `api-server/conf/application.yaml` |
   | Worker Server| `worker-server/conf/application.yaml` |
   | Alert Server| `alert-server/conf/application.yaml` |
   ```
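   To illustrate the `dolphinscheduler_env.sh` approach, a minimal sketch of what the database overrides could look like. The variable names here are assumptions based on Spring Boot's relaxed binding of the `spring.datasource.*` properties, not confirmed against the shipped file; the exact names and defaults should be checked before documenting them:

   ```shell
   # Sketch of database overrides in dolphinscheduler_env.sh.
   # Variable names assume Spring Boot relaxed binding of the
   # spring.datasource.* properties shown in the table above.
   export DATABASE=${DATABASE:-postgresql}
   export SPRING_DATASOURCE_URL="jdbc:postgresql://127.0.0.1:5432/dolphinscheduler"
   export SPRING_DATASOURCE_USERNAME="dolphinscheduler"
   export SPRING_DATASOURCE_PASSWORD="dolphinscheduler"
   ```

   Because the script is sourced before each server starts, these exports would take effect for every service without editing the four `application.yaml` files separately.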



##########
docs/docs/en/architecture/configuration.md:
##########
@@ -100,23 +98,9 @@ This document only describes DolphinScheduler 
configurations and other topics ar
 
 ## Configurations in Details
 
-serial number| service classification| config file|
-|--|--|--|
-1|startup or shutdown DS application|dolphinscheduler-daemon.sh
-2|datasource config properties|datasource.properties
-3|ZooKeeper config properties|zookeeper.properties
-4|common-service[storage] config properties|common.properties
-5|API-service config properties|application-api.properties
-6|master-service config properties|master.properties
-7|worker-service config properties|worker.properties
-8|alert-service config properties|alert.properties
-9|quartz config properties|quartz.properties
-10|DS environment variables configuration script[install/start 
DS]|install_config.conf
-11|load environment variables configs <br /> [eg: JAVA_HOME,HADOOP_HOME, 
HIVE_HOME ...]|dolphinscheduler_env.sh
-12|services log config files|API-service log config : logback-api.xml  <br /> 
master-service log config  : logback-master.xml    <br /> worker-service log 
config : logback-worker.xml  <br /> alert-service log config : logback-alert.xml
-
-
-### dolphinscheduler-daemon.sh [startup or shutdown DS application]
+
+
+### 1. dolphinscheduler-daemon.sh [startup or shutdown DS application]

Review Comment:
   Could you replace all occurrences of `DS` with `DolphinScheduler`? I do not think using the abbreviation in the documentation is a good idea. Also, please remove the leading number from the subtitle, as in the other changes in your PR; it is unnecessary because the leading `###` already indicates the heading level.
   ```suggestion
   ### dolphinscheduler-daemon.sh [startup or shutdown DolphinScheduler 
application]
   ```



##########
docs/docs/en/architecture/configuration.md:
##########
@@ -139,321 +123,203 @@ export DOLPHINSCHEDULER_OPTS="
 
 > "-XX:DisableExplicitGC" is not recommended due to may lead to memory link 
 > (DS dependent on Netty to communicate).
 
-### datasource.properties [datasource config properties]
+### 2. Database connection related configuration
+
+DS uses Spring Hikari to manage database connections, configuration file 
location:
+
+|Service| Configuration file  |
+|--|--|
+Master Server | `master-server/conf/application.yaml`
+Api Server| `api-server/conf/application.yaml`
+Worker Server| `worker-server/conf/application.yaml`
+Alert Server| `alert-server/conf/application.yaml`
+
+The default configuration is as follows:
 
-DS uses Druid to manage database connections and default simplified configs 
are:
 |Parameters | Default value| Description|
 |--|--|--|
-spring.datasource.driver-class-name||datasource driver
-spring.datasource.url||datasource connection url
-spring.datasource.username||datasource username
-spring.datasource.password||datasource password
-spring.datasource.initialSize|5| initial connection pool size number
-spring.datasource.minIdle|5| minimum connection pool size number
-spring.datasource.maxActive|5| maximum connection pool size number
-spring.datasource.maxWait|60000| max wait milliseconds
-spring.datasource.timeBetweenEvictionRunsMillis|60000| idle connection check 
interval
-spring.datasource.timeBetweenConnectErrorMillis|60000| retry interval
-spring.datasource.minEvictableIdleTimeMillis|300000| connections over 
minEvictableIdleTimeMillis will be collect when idle check
-spring.datasource.validationQuery|SELECT 1| validate connection by running the 
SQL
-spring.datasource.validationQueryTimeout|3| validate connection 
timeout[seconds]
-spring.datasource.testWhileIdle|true| set whether the pool validates the 
allocated connection when a new connection request comes
-spring.datasource.testOnBorrow|true| validity check when the program requests 
a new connection
-spring.datasource.testOnReturn|false| validity check when the program recalls 
a connection
-spring.datasource.defaultAutoCommit|true| whether auto commit
-spring.datasource.keepAlive|true| runs validationQuery SQL to avoid the 
connection closed by pool when the connection idles over 
minEvictableIdleTimeMillis
-spring.datasource.poolPreparedStatements|true| open PSCache
-spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| specify the 
size of PSCache on each connection
-
-
-### zookeeper.properties [zookeeper config properties]
+spring.datasource.driver-class-name| org.postgresql.Driver |datasource driver
+spring.datasource.url| jdbc:postgresql://127.0.0.1:5432/dolphinscheduler 
|datasource connection url
+spring.datasource.username|root|datasource username
+spring.datasource.password|root|datasource password
+spring.datasource.hikari.connection-test-query|select 1|validate connection by 
running the SQL
+spring.datasource.hikari.minimum-idle| 5| minimum connection pool size number
+spring.datasource.hikari.auto-commit|true|whether auto commit
+spring.datasource.hikari.pool-name|DolphinScheduler|name of the connection pool
+spring.datasource.hikari.maximum-pool-size|50| maximum connection pool size 
number
+spring.datasource.hikari.connection-timeout|30000|connection timeout
+spring.datasource.hikari.idle-timeout|600000|Maximum idle connection survival 
time
+spring.datasource.hikari.leak-detection-threshold|0|Connection leak detection 
threshold
+spring.datasource.hikari.initialization-fail-timeout|1|Connection pool 
initialization failed timeout
+
+
+### 3. Zookeeper related configuration
+DS uses Zookeeper for cluster management, fault tolerance, event monitoring 
and other functions. Configuration file location:
+|Service| Configuration file  |
+|--|--|
+Master Server | `master-server/conf/application.yaml`
+Api Server| `api-server/conf/application.yaml`
+Worker Server| `worker-server/conf/application.yaml`
+
+The default configuration is as follows:
 
 |Parameters | Default value| Description|
 |--|--|--|
-zookeeper.quorum|localhost:2181| ZooKeeper cluster connection info
-zookeeper.dolphinscheduler.root|/dolphinscheduler| DS is stored under 
ZooKeeper root directory
-zookeeper.session.timeout|60000|  session timeout
-zookeeper.connection.timeout|30000| connection timeout
-zookeeper.retry.base.sleep|100| time to wait between subsequent retries
-zookeeper.retry.max.sleep|30000| maximum time to wait between subsequent 
retries
-zookeeper.retry.maxtime|10| maximum retry times
+registry.zookeeper.namespace|dolphinscheduler|namespace of zookeeper
+registry.zookeeper.connect-string|localhost:2181| the connection string of 
zookeeper
+registry.zookeeper.retry-policy.base-sleep-time|60ms|time to wait between 
subsequent retries
+registry.zookeeper.retry-policy.max-sleep|300ms|maximum time to wait between 
subsequent retries
+registry.zookeeper.retry-policy.max-retries|5|maximum retry times
+registry.zookeeper.session-timeout|30s|session timeout
+registry.zookeeper.connection-timeout|30s|connection timeout
+registry.zookeeper.block-until-connected|600ms|waiting time to block until the 
connection succeeds
+registry.zookeeper.digest|~|digest of zookeeper
+
+
+### 4. common.properties [hadoop、s3、yarn config properties]
+
+Currently, common.properties mainly configures Hadoop,s3a related 
configurations. Configuration file location:
 
+|Service| Configuration file  |
+|--|--|
+Master Server | `master-server/conf/common.properties`
+Api Server| `api-server/conf/common.properties`
+Worker Server| `worker-server/conf/common.properties`
+Alert Server| `alert-server/conf/common.properties`
 
-### common.properties [hadoop、s3、yarn config properties]
+The default configuration is as follows:
 
-Currently, common.properties mainly configures Hadoop,s3a related 
configurations.
 | Parameters | Default value | Description |
 |--|--|--|
 data.basedir.path | /tmp/dolphinscheduler | local directory used to store temp 
files
 resource.storage.type | NONE | type of resource files: HDFS, S3, NONE
-resource.storage.upload.base.path | /dolphinscheduler | storage path of 
resource files
-resource.aws.access.key.id | minioadmin | access key id of S3
-resource.aws.secret.access.key | minioadmin | secret access key of S3
-resource.aws.region |us-east-1 | region of S3
-resource.aws.s3.bucket.name | dolphinscheduler | bucket name of S3
-resource.aws.s3.endpoint | http://minio:9000 | endpoint of S3
-resource.hdfs.root.user | hdfs | configure users with corresponding 
permissions if storage type is HDFS
-resource.hdfs.fs.defaultFS | hdfs://mycluster:8020 | If 
resource.storage.type=S3, then the request url would be similar to 
's3a://dolphinscheduler'. Otherwise if resource.storage.type=HDFS and hadoop 
supports HA, copy core-site.xml and hdfs-site.xml into 'conf' directory
+resource.upload.path | /dolphinscheduler | storage path of resource files
+aws.access.key.id | minioadmin | access key id of S3
+aws.secret.access.key | minioadmin | secret access key of S3
+aws.region | us-east-1 | region of S3
+aws.s3.endpoint | http://minio:9000 | endpoint of S3
+hdfs.root.user | hdfs | configure users with corresponding permissions if 
storage type is HDFS
+fs.defaultFS | hdfs://mycluster:8020 | If resource.storage.type=S3, then the 
request url would be similar to 's3a://dolphinscheduler'. Otherwise if 
resource.storage.type=HDFS and hadoop supports HA, copy core-site.xml and 
hdfs-site.xml into 'conf' directory
 hadoop.security.authentication.startup.state | false | whether hadoop grant 
kerberos permission
 java.security.krb5.conf.path | /opt/krb5.conf | kerberos config directory
 login.user.keytab.username | [email protected] | kerberos username
 login.user.keytab.path | /opt/hdfs.headless.keytab | kerberos user keytab
 kerberos.expire.time | 2 | kerberos expire time,integer,the unit is hour
-yarn.resourcemanager.ha.rm.ids |  | specify the yarn resourcemanager url. if 
resourcemanager supports HA, input HA IP addresses (separated by comma), or 
input null for standalone
+yarn.resourcemanager.ha.rm.ids | 192.168.xx.xx,192.168.xx.xx | specify the 
yarn resourcemanager url. if resourcemanager supports HA, input HA IP addresses 
(separated by comma), or input null for standalone
 yarn.application.status.address | http://ds1:8088/ws/v1/cluster/apps/%s | keep 
default if ResourceManager supports HA or not use ResourceManager, or replace 
ds1 with corresponding hostname if ResourceManager in standalone mode
-dolphinscheduler.env.path | env/dolphinscheduler_env.sh | load environment 
variables configs [eg: JAVA_HOME,HADOOP_HOME, HIVE_HOME ...]
 development.state | false | specify whether in development state
-task.resource.limit.state | false | specify whether in resource limit state
+resource.manager.httpaddress.port | 8088 | the port of resource manager
+yarn.job.history.status.address | 
http://ds1:19888/ws/v1/history/mapreduce/jobs/%s | job history status url of 
yarn
+datasource.encryption.enable | false | whether to enable datasource encryption
+datasource.encryption.salt | !@#$%^&* | the salt of the datasource encryption
+data-quality.jar.name | dolphinscheduler-data-quality-dev-SNAPSHOT.jar | the 
jar of data quality
+support.hive.oneSession | false | specify whether hive SQL is executed in the 
same session
+sudo.enable | true | whether to enable sudo
+alert.rpc.port | 50052 | the RPC port of Alert Server
+zeppelin.rest.url | http://localhost:8080 | the RESTful API url of zeppelin
 
 
-### application-api.properties [API-service log config]
+### 5. Api-server related configuration
+Location: `api-server/conf/application.yaml`
 
 |Parameters | Default value| Description|
 |--|--|--|
 server.port|12345|api service communication port
-server.servlet.session.timeout|7200|session timeout
-server.servlet.context-path|/dolphinscheduler | request path
-spring.servlet.multipart.max-file-size|1024MB| maximum file size
-spring.servlet.multipart.max-request-size|1024MB| maximum request size
-server.jetty.max-http-post-size|5000000| jetty maximum post size
-spring.messages.encoding|UTF-8| message encoding
-spring.jackson.time-zone|GMT+8| time zone
-spring.messages.basename|i18n/messages| i18n config
-security.authentication.type|PASSWORD| authentication type
+server.servlet.session.timeout|120m|session timeout
+server.servlet.context-path|/dolphinscheduler/ |request path
+spring.servlet.multipart.max-file-size|1024MB|maximum file size
+spring.servlet.multipart.max-request-size|1024MB|maximum request size
+server.jetty.max-http-post-size|5000000|jetty maximum post size
+spring.banner.charset|UTF-8|message encoding
+spring.jackson.time-zone|UTC|time zone
+spring.jackson.date-format|"yyyy-MM-dd HH:mm:ss"|设置时间格式

Review Comment:
   ```suggestion
   spring.jackson.date-format|"yyyy-MM-dd HH:mm:ss"| time format
   ```



##########
docs/docs/en/architecture/configuration.md:
##########
@@ -139,321 +123,203 @@ export DOLPHINSCHEDULER_OPTS="
 
 > "-XX:DisableExplicitGC" is not recommended due to may lead to memory link 
 > (DS dependent on Netty to communicate).
 
-### datasource.properties [datasource config properties]
+### 2. Database connection related configuration
+
+DS uses Spring Hikari to manage database connections, configuration file 
location:
+
+|Service| Configuration file  |
+|--|--|
+Master Server | `master-server/conf/application.yaml`
+Api Server| `api-server/conf/application.yaml`
+Worker Server| `worker-server/conf/application.yaml`
+Alert Server| `alert-server/conf/application.yaml`
+
+The default configuration is as follows:
 
-DS uses Druid to manage database connections and default simplified configs 
are:
 |Parameters | Default value| Description|
 |--|--|--|
-spring.datasource.driver-class-name||datasource driver
-spring.datasource.url||datasource connection url
-spring.datasource.username||datasource username
-spring.datasource.password||datasource password

Review Comment:
   OMG, the old content is the same as your change, and it works at 
https://dolphinscheduler.apache.org/en-us/docs/dev/user_doc/architecture/configuration.html.
 But I suggest we use the complete Markdown table syntax here.



##########
docs/docs/en/architecture/configuration.md:
##########
@@ -139,321 +123,203 @@ export DOLPHINSCHEDULER_OPTS="
 
 > "-XX:DisableExplicitGC" is not recommended due to may lead to memory link 
 > (DS dependent on Netty to communicate).
 
-### datasource.properties [datasource config properties]
+### 2. Database connection related configuration
+
+DS uses Spring Hikari to manage database connections, configuration file 
location:
+
+|Service| Configuration file  |
+|--|--|
+Master Server | `master-server/conf/application.yaml`
+Api Server| `api-server/conf/application.yaml`
+Worker Server| `worker-server/conf/application.yaml`
+Alert Server| `alert-server/conf/application.yaml`
+
+The default configuration is as follows:
 
-DS uses Druid to manage database connections and default simplified configs 
are:
 |Parameters | Default value| Description|
 |--|--|--|
-spring.datasource.driver-class-name||datasource driver
-spring.datasource.url||datasource connection url
-spring.datasource.username||datasource username
-spring.datasource.password||datasource password
-spring.datasource.initialSize|5| initial connection pool size number
-spring.datasource.minIdle|5| minimum connection pool size number
-spring.datasource.maxActive|5| maximum connection pool size number
-spring.datasource.maxWait|60000| max wait milliseconds
-spring.datasource.timeBetweenEvictionRunsMillis|60000| idle connection check 
interval
-spring.datasource.timeBetweenConnectErrorMillis|60000| retry interval
-spring.datasource.minEvictableIdleTimeMillis|300000| connections over 
minEvictableIdleTimeMillis will be collect when idle check
-spring.datasource.validationQuery|SELECT 1| validate connection by running the 
SQL
-spring.datasource.validationQueryTimeout|3| validate connection 
timeout[seconds]
-spring.datasource.testWhileIdle|true| set whether the pool validates the 
allocated connection when a new connection request comes
-spring.datasource.testOnBorrow|true| validity check when the program requests 
a new connection
-spring.datasource.testOnReturn|false| validity check when the program recalls 
a connection
-spring.datasource.defaultAutoCommit|true| whether auto commit
-spring.datasource.keepAlive|true| runs validationQuery SQL to avoid the 
connection closed by pool when the connection idles over 
minEvictableIdleTimeMillis
-spring.datasource.poolPreparedStatements|true| open PSCache
-spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| specify the 
size of PSCache on each connection
-
-
-### zookeeper.properties [zookeeper config properties]
+spring.datasource.driver-class-name| org.postgresql.Driver |datasource driver
+spring.datasource.url| jdbc:postgresql://127.0.0.1:5432/dolphinscheduler 
|datasource connection url
+spring.datasource.username|root|datasource username
+spring.datasource.password|root|datasource password
+spring.datasource.hikari.connection-test-query|select 1|validate connection by 
running the SQL
+spring.datasource.hikari.minimum-idle| 5| minimum connection pool size number
+spring.datasource.hikari.auto-commit|true|whether auto commit
+spring.datasource.hikari.pool-name|DolphinScheduler|name of the connection pool
+spring.datasource.hikari.maximum-pool-size|50| maximum connection pool size 
number
+spring.datasource.hikari.connection-timeout|30000|connection timeout
+spring.datasource.hikari.idle-timeout|600000|Maximum idle connection survival 
time
+spring.datasource.hikari.leak-detection-threshold|0|Connection leak detection 
threshold
+spring.datasource.hikari.initialization-fail-timeout|1|Connection pool 
initialization failed timeout
+
+
+### 3. Zookeeper related configuration

Review Comment:
   Same as the database section: we also support `dolphinscheduler_env.sh` to change our 
Zookeeper config.
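   As with the database settings, a minimal sketch of what the Zookeeper overrides in `dolphinscheduler_env.sh` could look like. The variable names are assumptions based on Spring Boot's relaxed binding of the `registry.zookeeper.*` properties and should be verified against the shipped file:

   ```shell
   # Sketch of registry overrides in dolphinscheduler_env.sh.
   # Variable names assume Spring Boot relaxed binding of the
   # registry.zookeeper.* properties shown in the table above.
   export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
   export REGISTRY_ZOOKEEPER_CONNECT_STRING="localhost:2181"
   export REGISTRY_ZOOKEEPER_NAMESPACE="dolphinscheduler"
   ```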



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to