This is an automated email from the ASF dual-hosted git repository.

zihaoxiang pushed a commit to branch 3.2.2-prepare
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler.git


The following commit(s) were added to refs/heads/3.2.2-prepare by this push:
     new 66b7d1274a [Chore] [Cherry-Pick] cherry pick some pr to 3.2.2 (#16215)
66b7d1274a is described below

commit 66b7d1274a65e939639e0bfe598b6981902b617a
Author: xiangzihao <[email protected]>
AuthorDate: Wed Jun 26 14:45:37 2024 +0800

    [Chore] [Cherry-Pick] cherry pick some pr to 3.2.2 (#16215)
    
    * [Fix-16174] Incorrect cluster installation guide. (#16208)
    
    * [Fix][CI] fix the ci error of Values.datasource.profile (#16031)
    
    * [Improvement][Helm] Make configmap of api/master/worker/alert configuration (#16058)

    Update deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml
    
    * Update deploy/kubernetes/dolphinscheduler/values.yaml
    
    * [helm] remove appversion from labels (#16066)
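    For reference, the customizedConfig/enableCustomizedConfig pair introduced by #16058 is
    consumed from values.yaml. A minimal sketch of an override enabling it for the alert
    server (the key names come from the diff below; the embedded application.yaml content
    is a placeholder, not a chart default):

    ```yaml
    # Sketch only: ship a fully user-provided config file to the alert server.
    # alert.enableCustomizedConfig and alert.customizedConfig are the new chart
    # values; the application.yaml body here is illustrative.
    alert:
      enableCustomizedConfig: true
      customizedConfig:
        application.yaml: |
          spring:
            profiles:
              active: postgresql
    ```

    The same pattern applies to the api, master, and worker sections of values.yaml.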
---
 deploy/kubernetes/dolphinscheduler/README.md       |  10 +-
 .../dolphinscheduler/templates/_helpers.tpl        |   1 -
 .../configmap-dolphinscheduler-alert.yaml          |  87 +-------
 .../templates/configmap-dolphinscheduler-api.yaml  | 227 +--------------------
 .../configmap-dolphinscheduler-master.yaml         | 142 +------------
 .../configmap-dolphinscheduler-worker.yaml         |  90 +-------
 .../deployment-dolphinscheduler-alert.yaml         |   6 +-
 .../templates/deployment-dolphinscheduler-api.yaml |   6 +-
 .../statefulset-dolphinscheduler-master.yaml       |   6 +-
 .../statefulset-dolphinscheduler-worker.yaml       |   6 +-
 deploy/kubernetes/dolphinscheduler/values.yaml     |  57 +++++-
 docs/docs/en/guide/installation/cluster.md         |  16 +-
 docs/docs/en/guide/installation/pseudo-cluster.md  |  32 +--
 docs/docs/zh/guide/installation/cluster.md         |  16 +-
 docs/docs/zh/guide/installation/pseudo-cluster.md  |  31 +--
 .../plugin/task/k8s/K8sTaskTest.java               |   2 +-
 script/env/install_env.sh                          |  63 ------
 script/install.sh                                  |  58 ------
 18 files changed, 114 insertions(+), 742 deletions(-)

diff --git a/deploy/kubernetes/dolphinscheduler/README.md b/deploy/kubernetes/dolphinscheduler/README.md
index ba533b6e47..00a31d786c 100644
--- a/deploy/kubernetes/dolphinscheduler/README.md
+++ b/deploy/kubernetes/dolphinscheduler/README.md
@@ -14,6 +14,8 @@ Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/inst
 |-----|------|---------|-------------|
 | alert.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
 | alert.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
+| alert.customizedConfig | object | `{}` | configure aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-alert/dolphinscheduler-alert-server/src/main/resources/application.yaml |
+| alert.enableCustomizedConfig | bool | `false` | enable configure custom config |
 | alert.enabled | bool | `true` | Enable or disable the Alert-Server component |
 | alert.env.JAVA_OPTS | string | `"-Xms512m -Xmx512m -Xmn256m"` | The jvm options for alert server |
 | alert.livenessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container liveness. Container will be restarted if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
@@ -52,6 +54,8 @@ Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/inst
 | alert.tolerations | list | `[]` | Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission, effectively unioning the set of nodes tolerated by the pod and the RuntimeClass. |
 | api.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
 | api.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
+| api.customizedConfig | object | `{}` | configure aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-api/src/main/resources/application.yaml |
+| api.enableCustomizedConfig | bool | `false` | enable configure custom config |
 | api.enabled | bool | `true` | Enable or disable the API-Server component |
 | api.env.JAVA_OPTS | string | `"-Xms512m -Xmx512m -Xmn256m"` | The jvm options for api server |
 | api.livenessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container liveness. Container will be restarted if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
@@ -158,6 +162,7 @@ Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/inst
 | conf.common."yarn.application.status.address" | string | `"http://ds1:%s/ws/v1/cluster/apps/%s"` | if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname |
 | conf.common."yarn.job.history.status.address" | string | `"http://ds1:19888/ws/v1/history/mapreduce/jobs/%s"` | job history status url when application number threshold is reached(default 10000, maybe it was set to 1000) |
 | conf.common."yarn.resourcemanager.ha.rm.ids" | string | `"192.168.xx.xx,192.168.xx.xx"` | if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty |
+| datasource.profile | string | `"postgresql"` | The profile of datasource |
 | externalDatabase.database | string | `"dolphinscheduler"` | The database of external database |
 | externalDatabase.driverClassName | string | `"org.postgresql.Driver"` | The driverClassName of external database |
 | externalDatabase.enabled | bool | `false` | If exists external database, and set postgresql.enable value to false. external database will be used, otherwise Dolphinscheduler's internal database will be used. |
@@ -189,6 +194,8 @@ Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/inst
 | initImage.pullPolicy | string | `"IfNotPresent"` | Image pull policy. Options: Always, Never, IfNotPresent |
 | master.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
 | master.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
+| master.customizedConfig | object | `{}` | configure aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-master/src/main/resources/application.yaml |
+| master.enableCustomizedConfig | bool | `false` | enable configure custom config |
 | master.enabled | bool | `true` | Enable or disable the Master component |
 | master.env.JAVA_OPTS | string | `"-Xms1g -Xmx1g -Xmn512m"` | The jvm options for master server |
 | master.env.MASTER_DISPATCH_TASK_NUM | string | `"3"` | Master dispatch task number per batch |
@@ -295,6 +302,8 @@ Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/inst
 | timezone | string | `"Asia/Shanghai"` | World time and date for cities in all time zones |
 | worker.affinity | object | `{}` | Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints. More info: [node-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity) |
 | worker.annotations | object | `{}` | You can use annotations to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. |
+| worker.customizedConfig | object | `{}` | configure aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-worker/src/main/resources/application.yaml |
+| worker.enableCustomizedConfig | bool | `false` | enable configure custom config |
 | worker.enabled | bool | `true` | Enable or disable the Worker component |
 | worker.env.DEFAULT_TENANT_ENABLED | bool | `false` | If set true, will use worker bootstrap user as the tenant to execute task when the tenant is `default`; |
 | worker.env.WORKER_EXEC_THREADS | string | `"100"` | Worker execute thread number to limit task instances |
@@ -314,7 +323,6 @@ Please refer to the [Quick Start in Kubernetes](../../../docs/docs/en/guide/inst
 | worker.keda.minReplicaCount | int | `0` | Minimum number of workers created by keda |
 | worker.keda.namespaceLabels | object | `{}` | Keda namespace labels |
 | worker.keda.pollingInterval | int | `5` | How often KEDA polls the DolphinScheduler DB to report new scale requests to the HPA |
-| worker.livenessProbe | object | `{"enabled":true,"failureThreshold":"3","initialDelaySeconds":"30","periodSeconds":"30","successThreshold":"1","timeoutSeconds":"5"}` | Periodic probe of container liveness. Container will be restarted if the probe fails. More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) |
 | worker.livenessProbe.enabled | bool | `true` | Turn on and off liveness probe |
 | worker.livenessProbe.failureThreshold | string | `"3"` | Minimum consecutive failures for the probe |
 | worker.livenessProbe.initialDelaySeconds | string | `"30"` | Delay before liveness probe is initiated |
diff --git a/deploy/kubernetes/dolphinscheduler/templates/_helpers.tpl b/deploy/kubernetes/dolphinscheduler/templates/_helpers.tpl
index 71287b1f10..368e0b290f 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/_helpers.tpl
+++ b/deploy/kubernetes/dolphinscheduler/templates/_helpers.tpl
@@ -51,7 +51,6 @@ Create a default common labels.
 {{- define "dolphinscheduler.common.labels" -}}
 app.kubernetes.io/instance: {{ .Release.Name }}
 app.kubernetes.io/managed-by: {{ .Release.Service }}
-app.kubernetes.io/version: {{ .Chart.AppVersion }}
 {{- end -}}
 
 {{/*
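After the hunk above, dolphinscheduler.common.labels renders only the instance and managed-by labels. A sketch of the output for a hypothetical release named my-release (presumably the appVersion label was dropped so common labels stay stable across chart appVersion bumps, per #16066):

```yaml
# Sketch of the labels emitted after the removal; "my-release" is a
# hypothetical release name, and .Release.Service is typically "Helm".
app.kubernetes.io/instance: my-release
app.kubernetes.io/managed-by: Helm
```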
diff --git a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-alert.yaml b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-alert.yaml
index 268c78bc4e..15bffe06f2 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-alert.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-alert.yaml
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-{{- if and .Values.alert.enabled }}
+{{- if .Values.alert.enableCustomizedConfig }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -23,83 +23,8 @@ metadata:
     app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-alert
     {{- include "dolphinscheduler.alert.labels" . | nindent 4 }}
 data:
-  application.yaml: |
-    spring:
-      profiles:
-        active: {{ .Values.datasource.profile }}
-      jackson:
-        time-zone: UTC
-        date-format: "yyyy-MM-dd HH:mm:ss"
-      banner:
-        charset: UTF-8
-      datasource:
-        profile: postgresql
-        config:
-          driver-class-name: org.postgresql.Driver
-          url: jdbc:postgresql://127.0.0.1:5432/dolphinscheduler
-          username: root
-          password: root
-          hikari:
-            connection-test-query: select 1
-            pool-name: DolphinScheduler
-    
-    # Mybatis-plus configuration, you don't need to change it
-    mybatis-plus:
-      mapper-locations: classpath:org/apache/dolphinscheduler/dao/mapper/*Mapper.xml
-      type-aliases-package: org.apache.dolphinscheduler.dao.entity
-      configuration:
-        cache-enabled: false
-        call-setters-on-nulls: true
-        map-underscore-to-camel-case: true
-        jdbc-type-for-null: NULL
-      global-config:
-        db-config:
-          id-type: auto
-        banner: false
-    
-    server:
-      port: 50053
-    
-    management:
-      endpoints:
-        web:
-          exposure:
-            include: health,metrics,prometheus
-      endpoint:
-        health:
-          enabled: true
-          show-details: always
-      health:
-        db:
-          enabled: true
-        defaults:
-          enabled: false
-      metrics:
-        tags:
-          application: ${spring.application.name}
-    
-    alert:
-      port: 50052
-      # Mark each alert of alert server if late after x milliseconds as failed.
-      # Define value is (0 = infinite), and alert server would be waiting alert result.
-      wait-timeout: 0
-      max-heartbeat-interval: 60s
-      query_alert_threshold: 100
-    
-    registry:
-      type: zookeeper
-      zookeeper:
-        namespace: dolphinscheduler
-        connect-string: localhost:2181
-        retry-policy:
-          base-sleep-time: 60ms
-          max-sleep: 300ms
-          max-retries: 5
-        session-timeout: 30s
-        connection-timeout: 9s
-        block-until-connected: 600ms
-        digest: ~
-    
-    metrics:
-      enabled: true
-{{- end }}
+{{- range $path, $config := .Values.alert.customizedConfig }}
+  {{ $path }}: |
+{{ $config | indent 4 -}}
+{{- end -}}
+{{- end -}}
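Assuming alert.enableCustomizedConfig is true and alert.customizedConfig holds a single application.yaml entry, the range loop above would render roughly the following ConfigMap (the metadata.name and the application.yaml body are placeholders, not values from the chart):

```yaml
# Rough rendering sketch of the template above; name and config body are
# illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: <release>-dolphinscheduler-alert
data:
  application.yaml: |
    spring:
      profiles:
        active: postgresql
```

The api, master, and worker configmap templates in the following hunks follow the same structure.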
diff --git a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-api.yaml b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-api.yaml
index 7570f9b010..211d3dfda9 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-api.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-api.yaml
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-{{- if and .Values.api.enabled }}
+{{- if .Values.api.enableCustomizedConfig }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -23,223 +23,8 @@ metadata:
     app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-api
     {{- include "dolphinscheduler.api.labels" . | nindent 4 }}
 data:
-  application.yaml: |
-    server:
-      port: 12345
-      servlet:
-        session:
-          timeout: 120m
-        context-path: /dolphinscheduler/
-      compression:
-        enabled: true
-        mime-types: text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json,application/xml
-      jetty:
-        max-http-form-post-size: 5000000
-        accesslog:
-          enabled: true
-          custom-format: '%{client}a - %u %t "%r" %s %O %{ms}Tms'
-    
-    spring:
-      profiles:
-        active: {{ .Values.datasource.profile }}
-      banner:
-        charset: UTF-8
-      jackson:
-        time-zone: UTC
-        date-format: "yyyy-MM-dd HH:mm:ss"
-      servlet:
-        multipart:
-          max-file-size: 1024MB
-          max-request-size: 1024MB
-      messages:
-        basename: i18n/messages
-      datasource:
-        profile: postgresql
-        config:
-          driver-class-name: org.postgresql.Driver
-          url: jdbc:postgresql://127.0.0.1:5432/dolphinscheduler
-          username: root
-          password: root
-          hikari:
-            connection-test-query: select 1
-            pool-name: DolphinScheduler
-      quartz:
-        auto-startup: false
-        job-store-type: jdbc
-        jdbc:
-          initialize-schema: never
-        properties:
-          org.quartz.jobStore.isClustered: true
-          org.quartz.jobStore.class: org.springframework.scheduling.quartz.LocalDataSourceJobStore
-          org.quartz.scheduler.instanceId: AUTO
-          org.quartz.jobStore.tablePrefix: QRTZ_
-          org.quartz.jobStore.acquireTriggersWithinLock: true
-          org.quartz.scheduler.instanceName: DolphinScheduler
-          org.quartz.threadPool.class: org.apache.dolphinscheduler.scheduler.quartz.QuartzZeroSizeThreadPool
-          org.quartz.jobStore.useProperties: false
-          org.quartz.jobStore.misfireThreshold: 60000
-          org.quartz.scheduler.makeSchedulerThreadDaemon: true
-          org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
-          org.quartz.jobStore.clusterCheckinInterval: 5000
-          org.quartz.scheduler.batchTriggerAcquisitionMaxCount: 1
-      mvc:
-        pathmatch:
-          matching-strategy: ANT_PATH_MATCHER
-        static-path-pattern: /static/**
-    springdoc:
-      swagger-ui:
-        path: /swagger-ui.html
-      packages-to-scan: org.apache.dolphinscheduler.api
-    
-    # Mybatis-plus configuration, you don't need to change it
-    mybatis-plus:
-      mapper-locations: classpath:org/apache/dolphinscheduler/dao/mapper/*Mapper.xml
-      type-aliases-package: org.apache.dolphinscheduler.dao.entity
-      configuration:
-        cache-enabled: false
-        call-setters-on-nulls: true
-        map-underscore-to-camel-case: true
-        jdbc-type-for-null: NULL
-      global-config:
-        db-config:
-          id-type: auto
-        banner: false
-    
-    management:
-      endpoints:
-        web:
-          exposure:
-            include: health,metrics,prometheus
-      endpoint:
-        health:
-          enabled: true
-          show-details: always
-      health:
-        db:
-          enabled: true
-        defaults:
-          enabled: false
-      metrics:
-        tags:
-          application: ${spring.application.name}
-    
-    registry:
-      type: zookeeper
-      zookeeper:
-        namespace: dolphinscheduler
-        connect-string: localhost:2181
-        retry-policy:
-          base-sleep-time: 60ms
-          max-sleep: 300ms
-          max-retries: 5
-        session-timeout: 60s
-        connection-timeout: 15s
-        block-until-connected: 15s
-        digest: ~
-    
-    api:
-      audit-enable: false
-      # Traffic control, if you turn on this config, the maximum number of request/s will be limited.
-      # global max request number per second
-      # default tenant-level max request number
-      traffic-control:
-        global-switch: false
-        max-global-qps-rate: 300
-        tenant-switch: false
-        default-tenant-qps-rate: 10
-          #customize-tenant-qps-rate:
-        # eg.
-        #tenant1: 11
-        #tenant2: 20
-      python-gateway:
-        # Weather enable python gateway server or not. The default value is false.
-        enabled: false
-        # Authentication token for connection from python api to python gateway server. Should be changed the default value
-        # when you deploy in public network.
-        auth-token: jwUDzpLsNKEFER4*a8gruBH_GsAurNxU7A@Xc
-        # The address of Python gateway server start. Set its value to `0.0.0.0` if your Python API run in different
-        # between Python gateway server. It could be be specific to other address like `127.0.0.1` or `localhost`
-        gateway-server-address: 0.0.0.0
-        # The port of Python gateway server start. Define which port you could connect to Python gateway server from
-        # Python API side.
-        gateway-server-port: 25333
-        # The address of Python callback client.
-        python-address: 127.0.0.1
-        # The port of Python callback client.
-        python-port: 25334
-        # Close connection of socket server if no other request accept after x milliseconds. Define value is (0 = infinite),
-        # and socket server would never close even though no requests accept
-        connect-timeout: 0
-        # Close each active connection of socket server if python program not active after x milliseconds. Define value is
-        # (0 = infinite), and socket server would never close even though no requests accept
-        read-timeout: 0
-    
-    metrics:
-      enabled: true
-    
-    security:
-      authentication:
-        # Authentication types (supported types: PASSWORD,LDAP,CASDOOR_SSO)
-        type: PASSWORD
-        # IF you set type `LDAP`, below config will be effective
-        ldap:
-          # ldap server config
-          urls: ldap://ldap.forumsys.com:389/
-          base-dn: dc=example,dc=com
-          username: cn=read-only-admin,dc=example,dc=com
-          password: password
-          user:
-            # admin userId when you use LDAP login
-            admin: read-only-admin
-            identity-attribute: uid
-            email-attribute: mail
-            # action when ldap user is not exist (supported types: CREATE,DENY)
-            not-exist-action: CREATE
-          ssl:
-            enable: false
-            # jks file absolute path && password
-            trust-store: "/ldapkeystore.jks"
-            trust-store-password: "password"
-        casdoor:
-          user:
-            admin: ""
-        oauth2:
-          enable: false
-          provider:
-            github:
-              authorizationUri: ""
-              redirectUri: ""
-              clientId: ""
-              clientSecret: ""
-              tokenUri: ""
-              userInfoUri: ""
-              callbackUrl: ""
-              iconUri: ""
-              provider: github
-            google:
-              authorizationUri: ""
-              redirectUri: ""
-              clientId: ""
-              clientSecret: ""
-              tokenUri: ""
-              userInfoUri: ""
-              callbackUrl: ""
-              iconUri: ""
-              provider: google
-    casdoor:
-      # Your Casdoor server url
-      endpoint: ""
-      client-id: ""
-      client-secret: ""
-      # The certificate may be multi-line, you can use `|-` for ease
-      certificate: ""
-      # Your organization name added in Casdoor
-      organization-name: ""
-      # Your application name added in Casdoor
-      application-name: ""
-      # Doplhinscheduler login url
-      redirect-url: ""
-{{- end }}
-
-
-
+{{- range $path, $config := .Values.api.customizedConfig }}
+  {{ $path }}: |
+{{ $config | indent 4 -}}
+{{- end -}}
+{{- end -}}
diff --git a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-master.yaml b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-master.yaml
index 07e8352f0d..9bcb7dd411 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-master.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-master.yaml
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-{{- if and .Values.master.enabled }}
+{{- if .Values.master.enableCustomizedConfig }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -23,138 +23,8 @@ metadata:
     app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-master
     {{- include "dolphinscheduler.master.labels" . | nindent 4 }}
 data:
-  application.yaml: |
-    spring:
-      profiles:
-        active: {{ .Values.datasource.profile }}
-      banner:
-        charset: UTF-8
-      jackson:
-        time-zone: UTC
-        date-format: "yyyy-MM-dd HH:mm:ss"
-      datasource:
-        profile: postgresql
-        config:
-          driver-class-name: org.postgresql.Driver
-          url: jdbc:postgresql://127.0.0.1:5432/dolphinscheduler
-          username: root
-          password: root
-          hikari:
-            connection-test-query: select 1
-            pool-name: DolphinScheduler
-      quartz:
-        job-store-type: jdbc
-        jdbc:
-          initialize-schema: never
-        properties:
-          org.quartz.threadPool.threadPriority: 5
-          org.quartz.jobStore.isClustered: true
-          org.quartz.jobStore.class: org.springframework.scheduling.quartz.LocalDataSourceJobStore
-          org.quartz.scheduler.instanceId: AUTO
-          org.quartz.jobStore.tablePrefix: QRTZ_
-          org.quartz.jobStore.acquireTriggersWithinLock: true
-          org.quartz.scheduler.instanceName: DolphinScheduler
-          org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool
-          org.quartz.jobStore.useProperties: false
-          org.quartz.threadPool.makeThreadsDaemons: true
-          org.quartz.threadPool.threadCount: 25
-          org.quartz.jobStore.misfireThreshold: 60000
-          org.quartz.scheduler.batchTriggerAcquisitionMaxCount: 1
-          org.quartz.scheduler.makeSchedulerThreadDaemon: true
-          org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
-          org.quartz.jobStore.clusterCheckinInterval: 5000
-    
-    # Mybatis-plus configuration, you don't need to change it
-    mybatis-plus:
-      mapper-locations: classpath:org/apache/dolphinscheduler/dao/mapper/*Mapper.xml
-      type-aliases-package: org.apache.dolphinscheduler.dao.entity
-      configuration:
-        cache-enabled: false
-        call-setters-on-nulls: true
-        map-underscore-to-camel-case: true
-        jdbc-type-for-null: NULL
-      global-config:
-        db-config:
-          id-type: auto
-        banner: false
-    
-    
-    registry:
-      type: zookeeper
-      zookeeper:
-        namespace: dolphinscheduler
-        connect-string: localhost:2181
-        retry-policy:
-          base-sleep-time: 60ms
-          max-sleep: 300ms
-          max-retries: 5
-        session-timeout: 30s
-        connection-timeout: 9s
-        block-until-connected: 600ms
-        digest: ~
-    
-    master:
-      listen-port: 5678
-      # master fetch command num
-      fetch-command-num: 10
-      # master prepare execute thread number to limit handle commands in parallel
-      pre-exec-threads: 10
-      # master execute thread number to limit process instances in parallel
-      exec-threads: 100
-      # master dispatch task number per batch, if all the tasks dispatch failed in a batch, will sleep 1s.
-      dispatch-task-number: 3
-      # master host selector to select a suitable worker, default value: LowerWeight. Optional values include random, round_robin, lower_weight
-      host-selector: lower_weight
-      # master heartbeat interval
-      max-heartbeat-interval: 10s
-      # master commit task retry times
-      task-commit-retry-times: 5
-      # master commit task interval
-      task-commit-interval: 1s
-      state-wheel-interval: 5s
-      server-load-protection:
-        # If set true, will open master overload protection
-        enabled: true
-        # Master max system cpu usage, when the master's system cpu usage is smaller then this value, master server can execute workflow.
-        max-system-cpu-usage-percentage-thresholds: 0.7
-        # Master max jvm cpu usage, when the master's jvm cpu usage is smaller then this value, master server can execute workflow.
-        max-jvm-cpu-usage-percentage-thresholds: 0.7
-        # Master max System memory usage , when the master's system memory usage is smaller then this value, master server can execute workflow.
-        max-system-memory-usage-percentage-thresholds: 0.7
-        # Master max disk usage , when the master's disk usage is smaller then this value, master server can execute workflow.
-        max-disk-usage-percentage-thresholds: 0.7
-      # failover interval, the unit is minute
-      failover-interval: 10m
-      # kill yarn / k8s application when failover taskInstance, default true
-      kill-application-when-task-failover: true
-      registry-disconnect-strategy:
-        # The disconnect strategy: stop, waiting
-        strategy: waiting
-        # The max waiting time to reconnect to registry if you set the strategy to waiting
-        max-waiting-time: 100s
-      worker-group-refresh-interval: 10s
-    
-    server:
-      port: 5679
-    
-    management:
-      endpoints:
-        web:
-          exposure:
-            include: health,metrics,prometheus
-      endpoint:
-        health:
-          enabled: true
-          show-details: always
-      health:
-        db:
-          enabled: true
-        defaults:
-          enabled: false
-      metrics:
-        tags:
-          application: ${spring.application.name}
-    
-    metrics:
-      enabled: true
-{{- end }}
+{{- range $path, $config := .Values.master.customizedConfig }}
+  {{ $path }}: |
+{{ $config | indent 4 -}}
+{{- end -}}
+{{- end -}}
diff --git a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-worker.yaml b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-worker.yaml
index b15cad5649..c1d81a1802 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-worker.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-worker.yaml
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-{{- if and .Values.worker.enabled }}
+{{- if .Values.worker.enableCustomizedConfig }}
 apiVersion: v1
 kind: ConfigMap
 metadata:
@@ -23,86 +23,8 @@ metadata:
     app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}-worker
     {{- include "dolphinscheduler.worker.labels" . | nindent 4 }}
 data:
-  application.yaml: |
-    spring:
-      banner:
-        charset: UTF-8
-      jackson:
-        time-zone: UTC
-        date-format: "yyyy-MM-dd HH:mm:ss"
-      autoconfigure:
-        exclude:
-          - org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
-    
-    registry:
-      type: zookeeper
-      zookeeper:
-        namespace: dolphinscheduler
-        connect-string: localhost:2181
-        retry-policy:
-          base-sleep-time: 60ms
-          max-sleep: 300ms
-          max-retries: 5
-        session-timeout: 30s
-        connection-timeout: 9s
-        block-until-connected: 600ms
-        digest: ~
-    
-    worker:
-      # worker listener port
-      listen-port: 1234
-      # worker execute thread number to limit task instances in parallel
-      exec-threads: 100
-      # worker heartbeat interval
-      max-heartbeat-interval: 10s
-      # worker host weight to dispatch tasks, default value 100
-      host-weight: 100
-      server-load-protection:
-        # If set true, will open worker overload protection
-        enabled: true
-        # Worker max system cpu usage, when the worker's system cpu usage is smaller then this value, worker server can be dispatched tasks.
-        max-system-cpu-usage-percentage-thresholds: 0.7
-        # Worker max jvm cpu usage, when the worker's jvm cpu usage is smaller then this value, worker server can be dispatched tasks.
-        max-jvm-cpu-usage-percentage-thresholds: 0.7
-        # Worker max System memory usage , when the master's system memory usage is smaller then this value, master server can execute workflow.
-        max-system-memory-usage-percentage-thresholds: 0.7
-        # Worker max disk usage , when the worker's disk usage is smaller then this value, worker server can be dispatched tasks.
-        max-disk-usage-percentage-thresholds: 0.7
-      registry-disconnect-strategy:
-        # The disconnect strategy: stop, waiting
-        strategy: waiting
-        # The max waiting time to reconnect to registry if you set the strategy to waiting
-        max-waiting-time: 100s
-      task-execute-threads-full-policy: REJECT
-      tenant-config:
-        # The tenant corresponds to a user of the system, which is used by the worker to submit the job. If the system does not have this user, it will be automatically created when auto-create-tenant-enabled is true.
-        auto-create-tenant-enabled: true
-        # For distributed users, for example users created by FreeIPA and stored in LDAP. This parameter only applies to Linux. When this parameter is true, auto-create-tenant-enabled has no effect and tenants will not be created automatically.
-        distributed-tenant-enabled: false
-        # If set true, will use the worker bootstrap user as the tenant to execute the task when the tenant is `default`.
-        default-tenant-enabled: false
-    
-    server:
-      port: 1235
-    
-    management:
-      endpoints:
-        web:
-          exposure:
-            include: health,metrics,prometheus
-      endpoint:
-        health:
-          enabled: true
-          show-details: always
-      health:
-        db:
-          enabled: true
-        defaults:
-          enabled: false
-      metrics:
-        tags:
-          application: ${spring.application.name}
-    
-    metrics:
-      enabled: true
-{{- end }}
+{{- range $path, $config := .Values.worker.customizedConfig }}
+  {{ $path }}: |
+{{ $config | indent 4 -}}
+{{- end -}}
+{{- end -}}
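For reference, the replacement `range` block renders every entry of `.Values.worker.customizedConfig` into the ConfigMap's `data` section, using the map key as the file name. A minimal sketch of the input and the rendered output (the values below are illustrative, not chart defaults):

```yaml
# values.yaml (user-supplied, illustrative)
worker:
  enableCustomizedConfig: true
  customizedConfig:
    application.yaml: |
      banner:
        charset: UTF-8

# rendered ConfigMap (abbreviated)
data:
  application.yaml: |
    banner:
      charset: UTF-8
```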
diff --git a/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-alert.yaml b/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-alert.yaml
index 84f28f484b..7a90d08025 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-alert.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-alert.yaml
@@ -115,9 +115,11 @@ spec:
             - name: config-volume
               mountPath: /opt/dolphinscheduler/conf/common.properties
               subPath: common.properties
+            {{- if .Values.alert.enableCustomizedConfig }}
             - name: alert-config-volume
               mountPath: /opt/dolphinscheduler/conf/application.yaml
               subPath: application.yaml
+            {{- end }}
       volumes:
         - name: {{ include "dolphinscheduler.fullname" . }}-alert
           {{- if .Values.alert.persistentVolumeClaim.enabled }}
@@ -129,7 +131,9 @@ spec:
         - name: config-volume
           configMap:
             name: {{ include "dolphinscheduler.fullname" . }}-configs
+        {{- if .Values.alert.enableCustomizedConfig }}
         - name: alert-config-volume
           configMap:
-            name: { { include "dolphinscheduler.fullname" . } }-alert
+            name: {{ include "dolphinscheduler.fullname" . }}-alert
+        {{- end }}
 {{- end }}
diff --git a/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-api.yaml b/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-api.yaml
index 0b359f15c8..b4fa07256c 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-api.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/deployment-dolphinscheduler-api.yaml
@@ -121,9 +121,11 @@ spec:
               mountPath: /opt/dolphinscheduler/conf/task-type-config.yaml
               subPath: task-type-config.yaml
             {{- end }}
+            {{- if .Values.api.enableCustomizedConfig }}
             - name: api-config-volume
               mountPath: /opt/dolphinscheduler/conf/application.yaml
               subPath: application.yaml
+            {{- end }}
             {{- include "dolphinscheduler.sharedStorage.volumeMount" . | nindent 12 }}
             {{- include "dolphinscheduler.fsFileResource.volumeMount" . | nindent 12 }}
             {{- include "dolphinscheduler.ldap.ssl.volumeMount" . | nindent 12 }}
@@ -139,9 +141,11 @@ spec:
         - name: config-volume
           configMap:
             name: {{ include "dolphinscheduler.fullname" . }}-configs
+        {{- if .Values.api.enableCustomizedConfig }}
         - name: api-config-volume
           configMap:
-            name: { { include "dolphinscheduler.fullname" . } }-api
+            name: {{ include "dolphinscheduler.fullname" . }}-api
+        {{- end }}
         {{- include "dolphinscheduler.sharedStorage.volume" . | nindent 8 }}
         {{- include "dolphinscheduler.fsFileResource.volume" . | nindent 8 }}
         {{- include "dolphinscheduler.ldap.ssl.volume" . | nindent 8 }}
diff --git a/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-master.yaml b/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-master.yaml
index a93621b7f2..c4174b9ca0 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-master.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-master.yaml
@@ -109,9 +109,11 @@ spec:
           volumeMounts:
             - mountPath: "/opt/dolphinscheduler/logs"
               name: {{ include "dolphinscheduler.fullname" . }}-master
+            {{- if .Values.master.enableCustomizedConfig }}
             - name: master-config-volume
               mountPath: /opt/dolphinscheduler/conf/application.yaml
               subPath: application.yaml
+            {{- end }}
             {{- include "dolphinscheduler.sharedStorage.volumeMount" . | nindent 12 }}
             - name: config-volume
               mountPath: /opt/dolphinscheduler/conf/common.properties
@@ -125,9 +127,11 @@ spec:
           {{- else }}
           emptyDir: {}
           {{- end }}
+        {{- if .Values.master.enableCustomizedConfig }}
         - name: master-config-volume
           configMap:
-            name: { { include "dolphinscheduler.fullname" . } }-master
+            name: {{ include "dolphinscheduler.fullname" . }}-master
+        {{- end }}
         {{- include "dolphinscheduler.sharedStorage.volume" . | nindent 8 }}
         - name: config-volume
           configMap:
diff --git a/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml b/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml
index 83ce8947dd..4c66ca5ffe 100644
--- a/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml
+++ b/deploy/kubernetes/dolphinscheduler/templates/statefulset-dolphinscheduler-worker.yaml
@@ -111,9 +111,11 @@ spec:
               name: {{ include "dolphinscheduler.fullname" . }}-worker-data
             - mountPath: "/opt/dolphinscheduler/logs"
               name: {{ include "dolphinscheduler.fullname" . }}-worker-logs
+            {{- if .Values.worker.enableCustomizedConfig }}
             - name: worker-config-volume
               mountPath: /opt/dolphinscheduler/conf/application.yaml
               subPath: application.yaml
+            {{- end }}
             - name: config-volume
               mountPath: /opt/dolphinscheduler/conf/common.properties
               subPath: common.properties
@@ -142,9 +144,11 @@ spec:
         - name: {{ include "dolphinscheduler.fullname" . }}-worker-logs
           emptyDir: {}
         {{- end }}
+        {{- if .Values.worker.enableCustomizedConfig }}
         - name: worker-config-volume
           configMap:
-            name: { { include "dolphinscheduler.fullname" . } }-worker
+            name: {{ include "dolphinscheduler.fullname" . }}-worker
+        {{- end }}
         - name: config-volume
           configMap:
             name: {{ include "dolphinscheduler.fullname" . }}-configs
diff --git a/deploy/kubernetes/dolphinscheduler/values.yaml b/deploy/kubernetes/dolphinscheduler/values.yaml
index 7a04ff5604..a830183e48 100644
--- a/deploy/kubernetes/dolphinscheduler/values.yaml
+++ b/deploy/kubernetes/dolphinscheduler/values.yaml
@@ -49,6 +49,10 @@ image:
   # -- tools image
   tools: dolphinscheduler-tools
 
+datasource:
+  # -- The profile of datasource
+  profile: postgresql
+
 postgresql:
   # -- If no external PostgreSQL exists, by default DolphinScheduler will use an internal PostgreSQL
   enabled: true
@@ -440,7 +444,19 @@ master:
   #   requests:
   #     memory: "2Gi"
   #     cpu: "500m"
-
+  # -- enable custom config
+  enableCustomizedConfig: false
+  # -- configuration aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-master/src/main/resources/application.yaml
+  customizedConfig: { }
+  #  customizedConfig:
+  #    application.yaml: |
+  #      profiles:
+  #        active: postgresql
+  #      banner:
+  #        charset: UTF-8
+  #      jackson:
+  #        time-zone: UTC
+  #        date-format: "yyyy-MM-dd HH:mm:ss"
   # -- Periodic probe of container liveness. Container will be restarted if the probe fails.
   # More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
   livenessProbe:
@@ -569,6 +585,17 @@ worker:
 
   # -- Periodic probe of container liveness. Container will be restarted if the probe fails.
   # More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
+  # -- enable custom config
+  enableCustomizedConfig: false
+  # -- configuration aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-worker/src/main/resources/application.yaml
+  customizedConfig: { }
+  #  customizedConfig:
+  #    application.yaml: |
+  #      banner:
+  #        charset: UTF-8
+  #      jackson:
+  #        time-zone: UTC
+  #        date-format: "yyyy-MM-dd HH:mm:ss"
   livenessProbe:
     # -- Turn on and off liveness probe
     enabled: true
@@ -733,7 +760,19 @@ alert:
   #   requests:
   #     memory: "1Gi"
   #     cpu: "500m"
-
+  # -- enable custom config
+  enableCustomizedConfig: false
+  # -- configuration aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-alert/dolphinscheduler-alert-server/src/main/resources/application.yaml
+  customizedConfig: { }
+  #  customizedConfig:
+  #    application.yaml: |
+  #      profiles:
+  #        active: postgresql
+  #      banner:
+  #        charset: UTF-8
+  #      jackson:
+  #        time-zone: UTC
+  #        date-format: "yyyy-MM-dd HH:mm:ss"
   # -- Periodic probe of container liveness. Container will be restarted if the probe fails.
   # More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
   livenessProbe:
@@ -833,7 +872,19 @@ api:
   #   requests:
   #     memory: "1Gi"
   #     cpu: "500m"
-
+  # -- enable custom config
+  enableCustomizedConfig: false
+  # -- configuration aligned with https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-api/src/main/resources/application.yaml
+  customizedConfig: { }
+  #  customizedConfig:
+  #    application.yaml: |
+  #      profiles:
+  #        active: postgresql
+  #      banner:
+  #        charset: UTF-8
+  #      jackson:
+  #        time-zone: UTC
+  #        date-format: "yyyy-MM-dd HH:mm:ss"
   # -- Periodic probe of container liveness. Container will be restarted if the probe fails.
   # More info: [container-probes](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes)
   livenessProbe:
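Putting the new values keys together, a hypothetical override file (file name and settings are illustrative; `exec-threads` is taken from the default worker configuration shown earlier in this diff) might look like:

```yaml
# my-values.yaml — hypothetical override enabling a custom worker config
worker:
  enableCustomizedConfig: true
  customizedConfig:
    application.yaml: |
      worker:
        exec-threads: 200
```

It could then be applied with something like `helm upgrade --install dolphinscheduler ./deploy/kubernetes/dolphinscheduler -f my-values.yaml` (release name and chart path are assumptions).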
diff --git a/docs/docs/en/guide/installation/cluster.md b/docs/docs/en/guide/installation/cluster.md
index 14ae58a479..ce5b60baa2 100644
--- a/docs/docs/en/guide/installation/cluster.md
+++ b/docs/docs/en/guide/installation/cluster.md
@@ -14,21 +14,7 @@ Configure all the configurations refer to [pseudo-cluster deployment](pseudo-clu
 
 ### Modify Configuration
 
-This step differs quite a lot from [pseudo-cluster deployment](pseudo-cluster.md), because the deployment script transfers the required resources for installation to each deployment machine by using `scp`. So we only need to modify the configuration of the machine that runs `install.sh` script and configurations will dispatch to cluster by `scp`. The configuration file is under the path `bin/env/install_env.sh`, here we only need to modify section **INSTALL MACHINE**, **DolphinScheduler  [...]
-
-```shell
-# ---------------------------------------------------------
-# INSTALL MACHINE
-# ---------------------------------------------------------
-# Using IP or machine hostname for the server going to deploy master, worker, API server, the IP of the server
-# If you using a hostname, make sure machines could connect each other by hostname
-# As below, the hostname of the machine deploying DolphinScheduler is ds1, ds2, ds3, ds4, ds5, where ds1, ds2 install the master server, ds3, ds4, and ds5 installs worker server, the alert server is installed in ds4, and the API server is installed in ds5
-ips="ds1,ds2,ds3,ds4,ds5"
-masters="ds1,ds2"
-workers="ds3:default,ds4:default,ds5:default"
-alertServer="ds4"
-apiServers="ds5"
-```
+This step differs quite a lot from [pseudo-cluster deployment](pseudo-cluster.md): use `scp` or another method to distribute the configuration files to each machine, then modify them.
 
 ## Start and Login DolphinScheduler
 
diff --git a/docs/docs/en/guide/installation/pseudo-cluster.md b/docs/docs/en/guide/installation/pseudo-cluster.md
index 7a3b43b00e..b6cb40007d 100644
--- a/docs/docs/en/guide/installation/pseudo-cluster.md
+++ b/docs/docs/en/guide/installation/pseudo-cluster.md
@@ -71,31 +71,7 @@ Go to the ZooKeeper installation directory, copy configure file `zoo_sample.cfg`
 ## Modify Configuration
 
 After completing the preparation of the basic environment, you need to modify the configuration file according to the
-environment you used. Change the environment configurations via `export <ENV_NAME>=<VALUE>`. The configuration files are located in directory `bin/env` as `install_env.sh` and `dolphinscheduler_env.sh`.
-
-### Modify `install_env.sh`
-
-File `install_env.sh` describes which machines will be installed DolphinScheduler and what server will be installed on
-each machine. You could find this file in the path `bin/env/install_env.sh` and the detail of the configuration as below.
-
-```shell
-# ---------------------------------------------------------
-# INSTALL MACHINE
-# ---------------------------------------------------------
-# Due to the master, worker, and API server being deployed on a single node, the IP of the server is the machine IP or localhost
-ips="localhost"
-sshPort="22"
-masters="localhost"
-workers="localhost:default"
-alertServer="localhost"
-apiServers="localhost"
-
-# DolphinScheduler installation path, it will auto-create if not exists
-installPath=~/dolphinscheduler
-
-# Deploy user, use the user you create in section **Configure machine SSH password-free login**
-deployUser="dolphinscheduler"
-```
+environment you used. Change the environment configurations via `export <ENV_NAME>=<VALUE>`. The configuration files are located in directory `bin/env` as `dolphinscheduler_env.sh`.
 
 ### Modify `dolphinscheduler_env.sh`
 
@@ -146,11 +122,7 @@ Follow the instructions in [datasource-setting](../howto/datasource-setting.md)
 
 ## Start DolphinScheduler
 
-Use **deployment user** you created above, running the following command to complete the deployment, and the server log will be stored in the logs folder.
-
-```shell
-bash ./bin/install.sh
-```
+Use the **deployment user** you created above to run the deployment command; the server log will be stored in the logs folder.
 
 > **_Note:_** For the first deployment, `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` may appear up to five times in the terminal;
 > this is unimportant information that you can ignore.
diff --git a/docs/docs/zh/guide/installation/cluster.md b/docs/docs/zh/guide/installation/cluster.md
index 97c1792787..c266a671e0 100644
--- a/docs/docs/zh/guide/installation/cluster.md
+++ b/docs/docs/zh/guide/installation/cluster.md
@@ -14,21 +14,7 @@
 
 ### 修改相关配置
 
-这个是与[伪集群部署](pseudo-cluster.md)差异较大的一步,因为部署脚本会通过 `scp` 的方式将安装需要的资源传输到各个机器上,所以这一步我们仅需要修改运行`install.sh`脚本的所在机器的配置即可。配置文件在路径`bin/env/install_env.sh`下,此处我们仅需修改**INSTALL MACHINE**,**DolphinScheduler ENV、Database、Registry Server**与伪集群部署保持一致,下面对必须修改参数进行说明
-
-```shell
-# ---------------------------------------------------------
-# INSTALL MACHINE
-# ---------------------------------------------------------
-# 需要配置master、worker、API server,所在服务器的IP均为机器IP或者localhost
-# 如果是配置hostname的话,需要保证机器间可以通过hostname相互链接
-# 如下图所示,部署 DolphinScheduler 机器的 hostname 为 ds1,ds2,ds3,ds4,ds5,其中 ds1,ds2 安装 master 服务,ds3,ds4,ds5安装 worker 服务,alert server安装在ds4中,api server 安装在ds5中
-ips="ds1,ds2,ds3,ds4,ds5"
-masters="ds1,ds2"
-workers="ds3:default,ds4:default,ds5:default"
-alertServer="ds4"
-apiServers="ds5"
-```
+这个是与[伪集群部署](pseudo-cluster.md)差异较大的一步,请使用 scp 等方式将配置文件分发到各台机器上,然后修改配置文件
 
 ## 启动 DolphinScheduler && 登录 DolphinScheduler && 启停服务
 
diff --git a/docs/docs/zh/guide/installation/pseudo-cluster.md b/docs/docs/zh/guide/installation/pseudo-cluster.md
index a199167e04..8e1c133c07 100644
--- a/docs/docs/zh/guide/installation/pseudo-cluster.md
+++ b/docs/docs/zh/guide/installation/pseudo-cluster.md
@@ -70,30 +70,7 @@ chmod 600 ~/.ssh/authorized_keys
 
 ## 修改相关配置
 
-完成基础环境的准备后,需要根据你的机器环境修改配置文件。配置文件可以在目录 `bin/env` 中找到,分别是 `install_env.sh` 和 `dolphinscheduler_env.sh`。
-
-### 修改 `install_env.sh` 文件
-
-文件 `install_env.sh` 描述了哪些机器将被安装 DolphinScheduler 以及每台机器对应安装哪些服务。您可以在路径 `bin/env/install_env.sh` 中找到此文件,可通过以下方式更改 env 变量,export <ENV_NAME>=<VALUE>,配置详情如下。
-
-```shell
-# ---------------------------------------------------------
-# INSTALL MACHINE
-# ---------------------------------------------------------
-# Due to the master, worker, and API server being deployed on a single node, the IP of the server is the machine IP or localhost
-ips="localhost"
-sshPort="22"
-masters="localhost"
-workers="localhost:default"
-alertServer="localhost"
-apiServers="localhost"
-
-# DolphinScheduler installation path, it will auto-create if not exists
-installPath=~/dolphinscheduler
-
-# Deploy user, use the user you create in section **Configure machine SSH password-free login**
-```
+完成基础环境的准备后,需要根据你的机器环境修改配置文件。配置文件为 `bin/env/dolphinscheduler_env.sh`。
 
 ### 修改 `dolphinscheduler_env.sh` 文件
 
@@ -141,11 +118,7 @@ export PATH=$HADOOP_HOME/bin:$SPARK_HOME/bin:$PYTHON_LAUNCHER:$JAVA_HOME/bin:$HI
 
 ## 启动 DolphinScheduler
 
-使用上面创建的**部署用户**运行以下命令完成部署,部署后的运行日志将存放在 logs 文件夹内
-
-```shell
-bash ./bin/install.sh
-```
+使用上面创建的**部署用户**运行命令完成部署,部署后的运行日志将存放在 logs 文件夹内
 
 > **_注意:_** 第一次部署的话,可能出现 5 次`sh: bin/dolphinscheduler-daemon.sh: No such file or directory`相关信息,此为非重要信息直接忽略即可
 
diff --git a/dolphinscheduler-task-plugin/dolphinscheduler-task-k8s/src/test/java/org/apache/dolphinscheduler/plugin/task/k8s/K8sTaskTest.java b/dolphinscheduler-task-plugin/dolphinscheduler-task-k8s/src/test/java/org/apache/dolphinscheduler/plugin/task/k8s/K8sTaskTest.java
index 241ac2ed10..444415e048 100644
--- a/dolphinscheduler-task-plugin/dolphinscheduler-task-k8s/src/test/java/org/apache/dolphinscheduler/plugin/task/k8s/K8sTaskTest.java
+++ b/dolphinscheduler-task-plugin/dolphinscheduler-task-k8s/src/test/java/org/apache/dolphinscheduler/plugin/task/k8s/K8sTaskTest.java
@@ -113,7 +113,7 @@ public class K8sTaskTest {
     @Test
     public void testGetParametersNormal() {
         String expectedStr =
-                "K8sTaskParameters(image=ds-dev, namespace=namespace, command=[\"/bin/bash\", \"-c\"], args=[\"echo hello world\"], pullSecret=ds-secret, imagePullPolicy=IfNotPresent, minCpuCores=2.0, minMemorySpace=10.0, customizedLabels=[Label(label=test, value=1234)], nodeSelectors=[NodeSelectorExpression(key=node-label, operator=In, values=1234,12345)], kubeConfig={}, datasource=0, type=K8S)";
+                "K8sTaskParameters(image=ds-dev, namespace={\"name\":\"default\",\"cluster\":\"lab\"}, command=[\"/bin/bash\", \"-c\"], args=[\"echo hello world\"], pullSecret=ds-secret, imagePullPolicy=IfNotPresent, minCpuCores=2.0, minMemorySpace=10.0, customizedLabels=[Label(label=test, value=1234)], nodeSelectors=[NodeSelectorExpression(key=node-label, operator=In, values=1234,12345)])";
         String result = k8sTask.getParameters().toString();
         Assertions.assertEquals(expectedStr, result);
     }
diff --git a/script/env/install_env.sh b/script/env/install_env.sh
deleted file mode 100644
index 8de1c78637..0000000000
--- a/script/env/install_env.sh
+++ /dev/null
@@ -1,63 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-# ---------------------------------------------------------
-# INSTALL MACHINE
-# ---------------------------------------------------------
-# A comma separated list of machine hostname or IP would be installed DolphinScheduler,
-# including master, worker, api, alert. If you want to deploy in pseudo-distributed
-# mode, just write a pseudo-distributed hostname
-# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IPs: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
-ips=${ips:-"ds1,ds2,ds3,ds4,ds5"}
-
-# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
-# modify it if you use different ssh port
-sshPort=${sshPort:-"22"}
-
-# A comma separated list of machine hostname or IP would be installed Master server, it
-# must be a subset of configuration `ips`.
-# Example for hostnames: masters="ds1,ds2", Example for IPs: masters="192.168.8.1,192.168.8.2"
-masters=${masters:-"ds1,ds2"}
-
-# A comma separated list of machine <hostname>:<workerGroup> or <IP>:<workerGroup>. All hostname or IP must be a
-# subset of configuration `ips`, And workerGroup have default value as `default`, but we recommend you declare behind the hosts
-# Example for hostnames: workers="ds1:default,ds2:default,ds3:default", Example for IPs: workers="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
-workers=${workers:-"ds1:default,ds2:default,ds3:default,ds4:default,ds5:default"}
-
-# A comma separated list of machine hostname or IP would be installed Alert server, it
-# must be a subset of configuration `ips`.
-# Example for hostname: alertServer="ds3", Example for IP: alertServer="192.168.8.3"
-alertServer=${alertServer:-"ds3"}
-
-# A comma separated list of machine hostname or IP would be installed API server, it
-# must be a subset of configuration `ips`.
-# Example for hostname: apiServers="ds1", Example for IP: apiServers="192.168.8.1"
-apiServers=${apiServers:-"ds1"}
-
-# The directory to install DolphinScheduler for all machine we config above. It will automatically be created by `install.sh` script if not exists.
-# Do not set this configuration same as the current path (pwd). Do not add quotes to it if you using related path.
-installPath=${installPath:-"/tmp/dolphinscheduler"}
-
-# The user to deploy DolphinScheduler for all machine we config above. For now user must create by yourself before running `install.sh`
-# script. The user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled than the root directory needs
-# to be created by this user
-deployUser=${deployUser:-"dolphinscheduler"}
-
-# The root of zookeeper, for now DolphinScheduler default registry server is zookeeper.
-# It will delete ${zkRoot} in the zookeeper when you run install.sh, so please keep it same as registry.zookeeper.namespace in yml files.
-# Similarly, if you want to modify the value, please modify registry.zookeeper.namespace in yml files as well.
-zkRoot=${zkRoot:-"/dolphinscheduler"}
diff --git a/script/install.sh b/script/install.sh
deleted file mode 100755
index d36f90a3f3..0000000000
--- a/script/install.sh
+++ /dev/null
@@ -1,58 +0,0 @@
-#!/bin/bash
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-workDir=`dirname $0`
-workDir=`cd ${workDir};pwd`
-baseDir=`cd ${workDir}/..;pwd`
-
-source ${workDir}/env/install_env.sh
-source ${workDir}/env/dolphinscheduler_env.sh
-
-echo "1.create directory"
-
-# If install Path equal to "/" or related path is "/" or is empty, will cause directory "/bin" be overwrite or file adding,
-# so we should check its value. Here use command `realpath` to get the related path, and it will skip if your shell env
-# without command `realpath`.
-if [ ${baseDir} = $installPath ]; then
-  echo "Fatal: The installPath can not be same as the current path: ${installPath}"
-  exit 1
-elif [ ! -d $installPath ];then
-  sudo mkdir -p $installPath
-  sudo chown -R $deployUser:$deployUser $installPath
-elif [[ -z "${installPath// }" || "${installPath// }" == "/" || ( $(command -v realpath) && $(realpath -s "${installPath}") == "/" ) ]]; then
-  echo "Parameter installPath can not be empty, use in root path or related path of root path, currently use ${installPath}"
-  exit 1
-fi
-
-echo "2.scp resources"
-bash ${workDir}/scp-hosts.sh
-if [ $? -eq 0 ];then
-       echo 'scp copy completed'
-else
-       echo 'scp copy failed to exit'
-       exit 1
-fi
-
-echo "3.stop server"
-bash ${workDir}/stop-all.sh
-
-echo "4.delete zk node"
-bash ${workDir}/remove-zk-node.sh $zkRoot
-
-echo "5.startup"
-bash ${workDir}/start-all.sh
