This is an automated email from the ASF dual-hosted git repository.

leonbao pushed a commit to branch json_split_two
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler.git


The following commit(s) were added to refs/heads/json_split_two by this push:
     new 3b495d2  [Feature][JsonSplit-api] merge code from dev to json2 (#6023)
3b495d2 is described below

commit 3b495d2fb0871db0fb2945ed79f60bb9059446b5
Author: JinyLeeChina <[email protected]>
AuthorDate: Mon Aug 23 16:23:40 2021 +0800

    [Feature][JsonSplit-api] merge code from dev to json2 (#6023)
    
    * [BUG-#5678][Registry]fix registry init node miss (#5686)
    
    * [Improvement][UI] Update the update time after the user information is successfully modified (#5684)
    
    * improve
    
    editing the user info succeeds, but the update time is not the latest.
    
    * Improved shell task execution result log information, adding process.waitFor() and process.exitValue() information to the original log (#5691)
    
    Co-authored-by: shenglm <[email protected]>
    
    * [Feature-#5565][Master Worker-Server] Global Param passed by sense dependencies (#5603)
    
    * add globalParams new plan with varPool
    
    * add unit test
    
    * add python task varPoolParams
    
    
    Co-authored-by: wangxj <wangxj31>
    
    * Issue robot translation judgment changed to Chinese (#5694)
    
    
    
    Co-authored-by: chenxingchun <[email protected]>
    
    * the update function should use post instead of get (#5703)
    
    * enhance form verify (#5696)
    
    * checkState only supports %s not {} (#5711)
    
    * [Fix-5701]When deleting a user, the accessToken associated with the user should also be deleted (#5697)
    
    * update
    
    * fix the codestyle error
    
    * fix the compile error
    
    * support rollback
    
    * [Fix-5699][UI] Fix update user error in user information (#5700)
    
    * [Improvement] the automatically generated spi service name in alert-plugin is wrong (#5676)
    
    * bug fix
    
    the auto-generated SPI service can't be recognized
    
    
    
    * include a  new method
    
    * [Improvement-5622][project management] Modify the title (#5723)
    
    * [Fix-5714] When updating the existing alarm instance, the creation time shouldn't be updated (#5715)
    
    
    
    * add a new init method.
    
    * [Fix#5758] There are some problems in the api documentation that need to be improved (#5759)
    
    * add the necessary parameters
    
    * openapi improve
    
    * fix code style error
    
    * [FIX-#5721][master-server] Global params parameter missing (#5757)
    
    
    
    Co-authored-by: wangxj <wangxj31>
    
    * [Fix-5738][UI] The cancel button in the pop-up dialog of `batch copy` and `batch move` doesn't work. (#5739)
    
    * Update relatedItems.vue
    
    * Update relatedItems.vue
    
    * [Improvement#5741][Worker] Improve task process status log  (#5776)
    
    * [Improvement-5773][server] need to support two parameters related to task (#5774)
    
    * add some new parameter for task
    
    * restore official properties
    
    * improve imports
    
    * modify a variable's name
    
    Co-authored-by: jiang hua <[email protected]>
    
    * [FIX-5786][Improvement][Server] When the Worker shuts down, the MasterServer cannot handle the Remove event correctly and throws NPE
    
    * [Improvement][Worker] Task log may be lost #5775 (#5783)
    
    * [Improvement #5725][CheckStyle] upgrade checkstyle file (#5789)
    
    * [Improvement #5725][CheckStyle] upgrade checkstyle file
      Upgrade checkstyle.xml to support checkstyle version 8.24+
    
    * change ci checkstyle version
    
    * [Fix-5795][Improvement][Server] The starttime field in the HttpTask log is not displayed as expected. (#5796)
    
    * improve timestamp format
    
    make the starttime in the HttpTask log easier to read.
    
    
    * fix bad code smell and update the note.
    
    * [Improvement #5621][job instance] start-time and end-time (#5621) (#5797)
    
    ·the list of workflow instances is sorted by start time and end time
    ·This closes #5621
    
    * fix (#5803)
    
    Co-authored-by: shuangbofu <[email protected]>
    
    * fix: Remove duplicate "registryClient.close" method calls (#5805)
    
    Co-authored-by: wen-hemin <[email protected]>
    
    * [Improvement][SPI] support load single plugin (#5794)
    
    change load operation of 'registry.plugin.dir'
    
    * [Improvement][Api Module] refactor registry client, remove spring annotation (#5814)
    
    * fix: refactor registry client, remove spring annotation
    
    * fix UT
    
    * fix UT
    
    * fix checkstyle
    
    * fix UT
    
    * fix UT
    
    * fix UT
    
    * fix: Rename RegistryCenterUtils method name
    
    Co-authored-by: wen-hemin <[email protected]>
    
    * [Fix-5699][UI] Fix update user error in user information introduced by #5700 (#5735)
    
    * [Fix-5726] When we used the UI page, we found some problems such as parameter validation, and parameter update shows success but doesn't actually work (#5727)
    
    * enhance the validation in UI
    
    * enhance form verification
    
    * simplify disable condition
    
    * fix: Remove unused class (#5833)
    
    Co-authored-by: wen-hemin <[email protected]>
    
    * [fix-5737] [Bug][Datasource] datasource other param check error (#5835)
    
    Co-authored-by: wanggang <[email protected]>
    
    * [Fix-5719][K8s] Fix Ingress tls: got map expected array On TLS enabled On Kubernetes
    
    * [Fix-5825][BUG][WEB] the resource tree in the process definition of latest dev branch can't display correctly (#5826)
    
    * resources-shows-error
    
    * fix codestyle error
    
    * add license header for new js
    
    * fix codesmell
    
    * [Improvement-5852][server] Support two parameters related to task for the rest of type of tasks. (#5867)
    
    * provide two system parameters to support the rest of type of tasks
    
    * provide two system parameters to support the rest of type of tasks
    
    * improve test conversion
    
    * [Improvement][Fix-5769][UI] When we try to delete the existing dag, the console in the web browser shows an exception (#5770)
    
    * fix bug
    
    * cache the this variable
    
    * Avoid self name
    
    * fix code style compile error
    
    * [Fix-5781][UT] Fix test coverage in sonar (#5817)
    
    * build(UT): make jacoco running in offline-instrumentation
    
    issue: #5781
    
    * build(UT): remove the jacoco agent dependency in microbench
    
    issue: #5781
    
    * [Fix-5808][Server] When we try to transfer data using datax between different types of data sources, the worker will exit with ClassCastException (#5809)
    
    * bug fix
    
    * fix bug
    
    * simplify the code format
    
    * add a new parameter to make it easier to understand.
    
    * [Fix-5830][Improvement][UI] Improve the selection style in dag edit dialog (#5829)
    
    * improve the selection style
    
    * update another file
    
    * remove unnecessary css part.
    
    * [Fix-5904][upgrade]fix dev branch upgrade mysql sql script error (#5821)
    
    * fix dev branch upgrade mysql sql script error.
    
    * Update naming convention.
    
    * [Improvement][Api Module] refactor DataSourceParam and DependentParam, remove spring annotation (#5832)
    
    * fix: refactor api utils class, remove spring annotation.
    
    * fix: Optimization comments
    
    Co-authored-by: wen-hemin <[email protected]>
    
    * correct the wrong annotation from zk queue implemented to java priority blocking queue (#5906)
    
    Co-authored-by: ywang46 <[email protected]>
    
    * Add a Gitter chat badge to README.md (#5883)
    
    * Add Gitter badge
    
    * Update README.md
    
    Co-authored-by: David <[email protected]>
    
    * ci: improve maven connection in CI builds (#5924)
    
    issue: #5921
    
    * [Improvement][Master]fix typo (#5934)
    
    ·fix typo in MasterBaseTaskExecThread
    
    * [Fix-5886][server] Enhanced scheduler delete check (#5936)
    
    * Add:Name verification remove the first and last spaces.
    
    * Update: wrong word: 'WAITTING' ->'WAITING'
    
    * Add: Strengthen verification
    
    Co-authored-by: Squid <[email protected]>
    
    * [Improvement-5880][api] Optimized data structure of pagination query API results (#5895)
    
    * [5880][refactor]Optimized data structure of pagination query API results
    - refactor PageInfo and delete returnDataListPaging in API
    - modify the related Controller and Service and the corresponding Test
    
    * Merge branch 'dev' of github.com:apache/dolphinscheduler into dev
    
     Conflicts:
       
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessDefinitionServiceImpl.java
    
    Co-authored-by: 蔡泽华 <[email protected]>
    
    * [IMPROVEMENT]fix mysql comment error (#5959)
    
    * [Improvement][Api]fix typo (#5960)
    
    * [Improvement #5621][job instance] start-time and end-time (#5621)
    ·the list of workflow instances is sorted by start time and end time
    ·This closes #5621
    
    * [FIX-5975]queryLastRunningProcess sql in ProcessInstanceMapper.xml (#5980)
    
    * [NEW FEATURE][FIX-4385] compensation task add the ability to configure parallelism (#5912)
    
    * update
    
    * web improved
    
    * improve the ui
    
    * add the ability to configure the parallelism
    
    * update variables
    
    * enhance the ut and add necessary note
    
    * fix code style
    
    * fix code style issue
    
    * ensure the compensation task in parallel mode can run the right number of tasks.
    
    * [Improvement][dao]When I search for the keyword description, the web UI shows empty (#5952)
    
    * [Bug][WorkerServer] SqlTask NullPointerException #5549
    
    * [Improvement][dao]When I search for the keyword Modify User, the web UI shows empty #5428
    
    * [Improvement][dao]When I search for the keyword Modify User, the web UI shows empty #5428
    
    * [Improvement][dao]When I search for the keyword Modify User, the web UI shows empty #5428
    
    * [Improvement][dao]When I search for the keyword Modify User, the web UI shows empty #5428
    
    * [Improvement][dao]When I search for the keyword Modify User, the web UI shows empty #5428
    
    * [Improvement][dao]When I search for the keyword Modify User, the web UI shows empty #5428
    
    * [Improvement][dao]When I search for the keyword description, the web UI shows empty #5428
    
    * fix the readme typing issue (#5998)
    
    * Fix unchecked type conversions
    
    * Use indentation level reported by checkstyle
    
    * Reorganize CI workflows to reduce wasted time and resources (#6011)
    
    * Add standalone server module to make it easier to develop (#6022)
    
    * fix ut
    
    Co-authored-by: Kirs <[email protected]>
    Co-authored-by: kyoty <[email protected]>
    Co-authored-by: ji04xiaogang <[email protected]>
    Co-authored-by: shenglm <[email protected]>
    Co-authored-by: wangxj3 <[email protected]>
    Co-authored-by: xingchun-chen <[email protected]>
    Co-authored-by: chenxingchun <[email protected]>
    Co-authored-by: Shiwen Cheng <[email protected]>
    Co-authored-by: Jianchao Wang <[email protected]>
    Co-authored-by: Tanvi Moharir <[email protected]>
    Co-authored-by: Hua Jiang <[email protected]>
    Co-authored-by: jiang hua <[email protected]>
    Co-authored-by: Wenjun Ruan <[email protected]>
    Co-authored-by: Tandoy <[email protected]>
    Co-authored-by: 傅双波 <[email protected]>
    Co-authored-by: shuangbofu <[email protected]>
    Co-authored-by: wen-hemin <[email protected]>
    Co-authored-by: wen-hemin <[email protected]>
    Co-authored-by: geosmart <[email protected]>
    Co-authored-by: wanggang <[email protected]>
    Co-authored-by: AzureCN <[email protected]>
    Co-authored-by: 深刻 <[email protected]>
    Co-authored-by: zhuangchong <[email protected]>
    Co-authored-by: Yao WANG <[email protected]>
    Co-authored-by: ywang46 <[email protected]>
    Co-authored-by: The Gitter Badger <[email protected]>
    Co-authored-by: David <[email protected]>
    Co-authored-by: Squidyu <[email protected]>
    Co-authored-by: Squid <[email protected]>
    Co-authored-by: soreak <[email protected]>
    Co-authored-by: 蔡泽华 <[email protected]>
    Co-authored-by: yimaixinchen <[email protected]>
    Co-authored-by: atai-555 <[email protected]>
    Co-authored-by: didiaode18 <[email protected]>
    Co-authored-by: Roy <[email protected]>
    Co-authored-by: lyxell <[email protected]>
    Co-authored-by: Wenjun Ruan <[email protected]>
    Co-authored-by: kezhenxu94 <[email protected]>
    Co-authored-by: JinyLeeChina <[email protected]>
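    
    The commit "checkState only supports %s not {}" above presumably refers to Guava-style
    Preconditions: the message template fills `%s` placeholders positionally, while
    SLF4J-style `{}` markers stay literal text. A minimal stdlib-only sketch of that
    assumed substitution behavior (the `checkState` helper below is a hypothetical
    re-implementation for illustration, not DolphinScheduler or Guava code):
    
    ```java
    // Hypothetical stdlib-only sketch of Guava-style Preconditions.checkState
    // message formatting: '%s' placeholders are filled positionally, while
    // SLF4J-style '{}' markers are left as literal text (assumed behavior).
    public class CheckStateDemo {
    
        static void checkState(boolean expression, String template, Object... args) {
            if (expression) {
                return;
            }
            StringBuilder message = new StringBuilder();
            int argIndex = 0;
            int start = 0;
            int placeholder;
            // Replace each '%s' in order with the next argument; anything else is literal.
            while ((placeholder = template.indexOf("%s", start)) != -1 && argIndex < args.length) {
                message.append(template, start, placeholder).append(args[argIndex++]);
                start = placeholder + 2;
            }
            message.append(template.substring(start));
            throw new IllegalStateException(message.toString());
        }
    
        // Helper: capture the message a failing checkState would produce.
        static String messageOf(String template, Object... args) {
            try {
                checkState(false, template, args);
                return null;
            } catch (IllegalStateException e) {
                return e.getMessage();
            }
        }
    
        public static void main(String[] args) {
            // '%s' is substituted ...
            if (!"queue size is 3".equals(messageOf("queue size is %s", 3))) {
                throw new AssertionError("%s placeholder was not substituted");
            }
            // ... but '{}' is not, which is why the commit replaced '{}' with '%s'.
            if (!"queue size is {}".equals(messageOf("queue size is {}", 3))) {
                throw new AssertionError("{} should stay literal");
            }
            System.out.println("ok");
        }
    }
    ```
    
    This is why log-style `{}` templates silently produce unfilled messages when passed
    to a `%s`-based precondition check.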
---
 .github/actions/reviewdog-setup                    |   1 +
 .github/actions/sanity-check/action.yml            |  53 ++
 .github/workflows/{ci_backend.yml => backend.yml}  |  38 +-
 .github/workflows/{ci_e2e.yml => e2e.yml}          |  22 +-
 .../workflows/{ci_frontend.yml => frontend.yml}    |  31 +-
 .github/workflows/{ci_ut.yml => unit-test.yml}     | 100 +--
 .gitmodules                                        |   6 +
 .licenserc.yaml                                    |   5 +
 README.md                                          |   5 +-
 docker/build/hooks/build                           |   4 +-
 docker/build/hooks/build.bat                       |   4 +-
 .../dolphinscheduler-alert-email/pom.xml           |  13 +-
 .../api/controller/ExecutorController.java         |   7 +-
 .../api/service/ExecutorService.java               |   3 +-
 .../api/service/impl/ExecutorServiceImpl.java      |  37 +-
 .../api/service/impl/TenantServiceImpl.java        |   4 +-
 .../api/service/impl/UsersServiceImpl.java         |   4 +-
 .../main/resources/i18n/messages_en_US.properties  |   1 +
 .../main/resources/i18n/messages_zh_CN.properties  |   1 +
 .../api/controller/ExecutorControllerTest.java     |  49 +-
 .../api/service/ExecutorServiceTest.java           |  14 +-
 .../dolphinscheduler/common/enums/DbType.java      |  66 +-
 .../apache/dolphinscheduler/common/graph/DAG.java  |  12 +-
 .../common/utils/PropertyUtils.java                |  18 +-
 .../dao/datasource/SpringConnectionFactory.java    |   1 +
 .../dao/mapper/ProcessDefinitionMapper.xml         |   4 +-
 .../dao/mapper/ProcessInstanceMapper.xml           |   4 +-
 .../dolphinscheduler/dao/mapper/ProjectMapper.xml  |   4 +-
 .../dao/mapper/WorkFlowLineageMapper.xml           |   3 +-
 .../utils/ResourceProcessDefinitionUtilsTest.java  |   2 +-
 dolphinscheduler-dist/pom.xml                      |   7 +-
 dolphinscheduler-dist/release-docs/LICENSE         |   3 +-
 .../dispatch/executor/NettyExecutorManager.java    |   2 +-
 .../master/runner/MasterBaseTaskExecThread.java    |   2 +-
 .../server/worker/task/sql/SqlTask.java            |  14 +-
 .../server/master/MasterExecThreadTest.java        |   6 +-
 .../server/worker/task/http/HttpTaskTest.java      |   4 +-
 .../service/quartz/ProcessScheduleJob.java         |   4 +-
 dolphinscheduler-standalone-server/pom.xml         |  52 ++
 .../dolphinscheduler/server/StandaloneServer.java  |  82 ++
 .../src/main/resources/registry.properties         |  36 +-
 .../pages/definition/pages/list/_source/start.vue  |  68 +-
 .../src/js/module/i18n/locale/en_US.js             |   4 +
 .../src/js/module/i18n/locale/zh_CN.js             |   3 +
 install.sh                                         | 103 ---
 pom.xml                                            |  18 +-
 script/dolphinscheduler-daemon.sh                  |   6 +-
 .../mysql/dolphinscheduler_ddl.sql                 |   4 +-
 sql/dolphinscheduler_h2.sql                        | 943 +++++++++++++++++++++
 style/checkstyle-suppressions.xml                  |  24 -
 style/checkstyle.xml                               |   8 +-
 tools/dependencies/known-dependencies.txt          |   1 +
 52 files changed, 1482 insertions(+), 428 deletions(-)

diff --git a/.github/actions/reviewdog-setup b/.github/actions/reviewdog-setup
new file mode 160000
index 0000000..2fc905b
--- /dev/null
+++ b/.github/actions/reviewdog-setup
@@ -0,0 +1 @@
+Subproject commit 2fc905b1875f2e6b91c4201a4dc6eaa21b86547e
diff --git a/.github/actions/sanity-check/action.yml b/.github/actions/sanity-check/action.yml
new file mode 100644
index 0000000..a1d03a3
--- /dev/null
+++ b/.github/actions/sanity-check/action.yml
@@ -0,0 +1,53 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+name: "Sanity Check"
+
+description: |
+  Action to perform some very basic lightweight checks, like code styles, license headers, etc.,
+  and fail fast to avoid wasting resources running heavyweight checks, like unit tests, e2e tests.
+
+inputs:
+  token:
+    description: 'The GitHub API token'
+    required: false
+
+runs:
+  using: "composite"
+  steps:
+    - name: Check License Header
+      uses: apache/skywalking-eyes@a63f4afcc287dfb3727ecc45a4afc55a5e69c15f
+
+    - uses: ./.github/actions/reviewdog-setup
+      with:
+        reviewdog_version: v0.10.2
+
+    - shell: bash
+      run: ./mvnw -B -q checkstyle:checkstyle-aggregate
+
+    - shell: bash
+      env:
+        REVIEWDOG_GITHUB_API_TOKEN: ${{ inputs.token }}
+      run: |
+        if [[ -n "${{ inputs.token }}" ]]; then
+          reviewdog -f=checkstyle \
+            -reporter="github-pr-check" \
+            -filter-mode="added" \
+            -fail-on-error="true" < target/checkstyle-result.xml
+        fi
diff --git a/.github/workflows/ci_backend.yml b/.github/workflows/backend.yml
similarity index 63%
rename from .github/workflows/ci_backend.yml
rename to .github/workflows/backend.yml
index e19336e..55475b2 100644
--- a/.github/workflows/ci_backend.yml
+++ b/.github/workflows/backend.yml
@@ -19,8 +19,10 @@ name: Backend
 
 on:
   push:
+    branches:
+      - dev
     paths:
-      - '.github/workflows/ci_backend.yml'
+      - '.github/workflows/backend.yml'
       - 'package.xml'
       - 'pom.xml'
       - 'dolphinscheduler-alert/**'
@@ -31,7 +33,7 @@ on:
       - 'dolphinscheduler-server/**'
   pull_request:
     paths:
-      - '.github/workflows/ci_backend.yml'
+      - '.github/workflows/backend.yml'
       - 'package.xml'
       - 'pom.xml'
       - 'dolphinscheduler-alert/**'
@@ -41,20 +43,34 @@ on:
       - 'dolphinscheduler-rpc/**'
       - 'dolphinscheduler-server/**'
 
+concurrency:
+  group: backend-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
-  Compile-check:
+  build:
+    name: Build
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
         with:
-          submodule: true
-      - name: Check License Header
-        uses: apache/skywalking-eyes@ec88b7d850018c8983f87729ea88549e100c5c82
-      - name: Set up JDK 1.8
-        uses: actions/setup-java@v1
+          submodules: true
+      - name: Sanity Check
+        uses: ./.github/actions/sanity-check
+        with:
+          token: ${{ secrets.GITHUB_TOKEN }} # We only need to pass this token in one workflow
+      - uses: actions/cache@v2
         with:
-          java-version: 1.8
-      - name: Compile
-        run: mvn -B clean compile install -Prelease -Dmaven.test.skip=true
+          path: ~/.m2/repository
+          key: ${{ runner.os }}-maven
+      - name: Build and Package
+        run: |
+          ./mvnw -B clean install \
+                 -Prelease \
+                 -Dmaven.test.skip=true \
+                 -Dcheckstyle.skip=true \
+                 -Dhttp.keepAlive=false \
+                 -Dmaven.wagon.http.pool=false \
+                 -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
       - name: Check dependency license
         run: tools/dependencies/check-LICENSE.sh
diff --git a/.github/workflows/ci_e2e.yml b/.github/workflows/e2e.yml
similarity index 89%
rename from .github/workflows/ci_e2e.yml
rename to .github/workflows/e2e.yml
index 009b3fb..2fbbffa 100644
--- a/.github/workflows/ci_e2e.yml
+++ b/.github/workflows/e2e.yml
@@ -20,26 +20,26 @@ env:
   DOCKER_DIR: ./docker
   LOG_DIR: /tmp/dolphinscheduler
 
-name: e2e Test
+name: Test
 
-jobs:
+concurrency:
+  group: e2e-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
 
-  build:
-    name: Test
+jobs:
+  test:
+    name: E2E
     runs-on: ubuntu-latest
     steps:
-
       - uses: actions/checkout@v2
         with:
-          submodule: true
-      - name: Check License Header
-        uses: apache/skywalking-eyes@ec88b7d850018c8983f87729ea88549e100c5c82
+          submodules: true
+      - name: Sanity Check
+        uses: ./.github/actions/sanity-check
       - uses: actions/cache@v1
         with:
           path: ~/.m2/repository
-          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
-          restore-keys: |
-            ${{ runner.os }}-maven-
+          key: ${{ runner.os }}-maven
       - name: Build Image
         run: |
           sh ./docker/build/hooks/build
diff --git a/.github/workflows/ci_frontend.yml b/.github/workflows/frontend.yml
similarity index 67%
copy from .github/workflows/ci_frontend.yml
copy to .github/workflows/frontend.yml
index afa0c8d..4ab1e0d 100644
--- a/.github/workflows/ci_frontend.yml
+++ b/.github/workflows/frontend.yml
@@ -19,31 +19,44 @@ name: Frontend
 
 on:
   push:
+    branches:
+      - dev
     paths:
-      - '.github/workflows/ci_frontend.yml'
+      - '.github/workflows/frontend.yml'
       - 'dolphinscheduler-ui/**'
   pull_request:
     paths:
-      - '.github/workflows/ci_frontend.yml'
+      - '.github/workflows/frontend.yml'
       - 'dolphinscheduler-ui/**'
 
+defaults:
+  run:
+    working-directory: dolphinscheduler-ui
+
+concurrency:
+  group: frontend-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
+
 jobs:
-  Compile-check:
+  build:
+    name: Build
     runs-on: ${{ matrix.os }}
     strategy:
       matrix:
-        os: [ubuntu-latest, macos-latest]
+        os: [ ubuntu-latest, macos-latest ]
     steps:
       - uses: actions/checkout@v2
         with:
-          submodule: true
+          submodules: true
+      - if: matrix.os == 'ubuntu-latest'
+        name: Sanity Check
+        uses: ./.github/actions/sanity-check
       - name: Set up Node.js
-        uses: actions/setup-node@v1
+        uses: actions/setup-node@v2
         with:
-          version: 8
-      - name: Compile
+          node-version: 8
+      - name: Compile and Build
         run: |
-          cd dolphinscheduler-ui
           npm install node-sass --unsafe-perm
           npm install
           npm run lint
diff --git a/.github/workflows/ci_ut.yml b/.github/workflows/unit-test.yml
similarity index 52%
rename from .github/workflows/ci_ut.yml
rename to .github/workflows/unit-test.yml
index 0246aaf..3087806 100644
--- a/.github/workflows/ci_ut.yml
+++ b/.github/workflows/unit-test.yml
@@ -15,103 +15,91 @@
 # limitations under the License.
 #
 
+name: Test
+
 on:
   pull_request:
+    paths-ignore:
+      - '**/*.md'
+      - 'dolphinscheduler-ui'
   push:
+    paths-ignore:
+      - '**/*.md'
+      - 'dolphinscheduler-ui'
     branches:
       - dev
+
 env:
   LOG_DIR: /tmp/dolphinscheduler
 
-name: Unit Test
+concurrency:
+  group: unit-test-${{ github.event.pull_request.number || github.ref }}
+  cancel-in-progress: true
 
 jobs:
-
-  build:
-    name: Build
+  unit-test:
+    name: Unit Test
     runs-on: ubuntu-latest
     steps:
-
       - uses: actions/checkout@v2
         with:
-          submodule: true
-      - name: Check License Header
-        uses: apache/skywalking-eyes@ec88b7d850018c8983f87729ea88549e100c5c82
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Only enable review / suggestion here
-      - uses: actions/cache@v1
+          submodules: true
+      - name: Sanity Check
+        uses: ./.github/actions/sanity-check
+      - name: Set up JDK 1.8
+        uses: actions/setup-java@v2
+        with:
+          java-version: 8
+          distribution: 'adopt'
+      - uses: actions/cache@v2
         with:
           path: ~/.m2/repository
-          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
-          restore-keys: |
-            ${{ runner.os }}-maven-
+          key: ${{ runner.os }}-maven
       - name: Bootstrap database
         run: |
          sed -i "/image: bitnami\/postgresql/a\    ports:\n    - 5432:5432" $(pwd)/docker/docker-swarm/docker-compose.yml
          sed -i "/image: bitnami\/zookeeper/a\    ports:\n    - 2181:2181" $(pwd)/docker/docker-swarm/docker-compose.yml
          docker-compose -f $(pwd)/docker/docker-swarm/docker-compose.yml up -d dolphinscheduler-zookeeper dolphinscheduler-postgresql
          until docker logs docker-swarm_dolphinscheduler-postgresql_1 2>&1 | grep 'listening on IPv4 address'; do echo "waiting for postgresql ready ..."; sleep 1; done
-          docker run --rm --network docker-swarm_dolphinscheduler -v $(pwd)/sql/dolphinscheduler_postgre.sql:/docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql bitnami/postgresql:latest bash -c "PGPASSWORD=root psql -h docker-swarm_dolphinscheduler-postgresql_1 -U root -d dolphinscheduler -v ON_ERROR_STOP=1 -f /docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql"
-      - name: Set up JDK 1.8
-        uses: actions/setup-java@v1
-        with:
-          java-version: 1.8
-      - name: Git fetch unshallow
-        run: |
-          git fetch --unshallow
-          git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
-          git fetch origin
-      - name: Compile
-        run: |
-          export MAVEN_OPTS='-Dmaven.repo.local=.m2/repository -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -Xmx5g'
-          mvn clean verify -B -Dmaven.test.skip=false
+          docker run --rm --network docker-swarm_dolphinscheduler -v $(pwd)/sql/dolphinscheduler_postgre.sql:/docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql bitnami/postgresql:11.11.0 bash -c "PGPASSWORD=root psql -h docker-swarm_dolphinscheduler-postgresql_1 -U root -d dolphinscheduler -v ON_ERROR_STOP=1 -f /docker-entrypoint-initdb.d/dolphinscheduler_postgre.sql"
+
+      - name: Run Unit tests
+        run: ./mvnw clean verify -B -Dmaven.test.skip=false
       - name: Upload coverage report to codecov
-        run: |
-          CODECOV_TOKEN="09c2663f-b091-4258-8a47-c981827eb29a" bash <(curl -s https://codecov.io/bash)
+        run: CODECOV_TOKEN="09c2663f-b091-4258-8a47-c981827eb29a" bash <(curl -s https://codecov.io/bash)
+
       # Set up JDK 11 for SonarCloud.
-      - name: Set up JDK 1.11
-        uses: actions/setup-java@v1
+      - name: Set up JDK 11
+        uses: actions/setup-java@v2
         with:
-          java-version: 1.11
+          java-version: 11
+          distribution: 'adopt'
       - name: Run SonarCloud Analysis
         run: >
-          mvn --batch-mode verify sonar:sonar
+          ./mvnw --batch-mode verify sonar:sonar
           -Dsonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
           -Dmaven.test.skip=true
+          -Dcheckstyle.skip=true
           -Dsonar.host.url=https://sonarcloud.io
           -Dsonar.organization=apache
           -Dsonar.core.codeCoveragePlugin=jacoco
           -Dsonar.projectKey=apache-dolphinscheduler
          -Dsonar.login=e4058004bc6be89decf558ac819aa1ecbee57682
          -Dsonar.exclusions=dolphinscheduler-ui/src/**/i18n/locale/*.js,dolphinscheduler-microbench/src/**/*
+          -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
+
       - name: Collect logs
+        continue-on-error: true
         run: |
           mkdir -p ${LOG_DIR}
          docker-compose -f $(pwd)/docker/docker-swarm/docker-compose.yml logs dolphinscheduler-postgresql > ${LOG_DIR}/db.txt
-        continue-on-error: true
 
-  Checkstyle:
-    name: Check code style
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v2
+      - name: Upload logs
+        uses: actions/upload-artifact@v2
+        continue-on-error: true
         with:
-          submodule: true
-      - name: check code style
-        env:
-          WORKDIR: ./
-          REVIEWDOG_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          CHECKSTYLE_CONFIG: style/checkstyle.xml
-          REVIEWDOG_VERSION: v0.10.2
-        run: |
-          wget -O - -q https://github.com/checkstyle/checkstyle/releases/download/checkstyle-8.43/checkstyle-8.43-all.jar > /opt/checkstyle.jar
-          wget -O - -q https://raw.githubusercontent.com/reviewdog/reviewdog/master/install.sh | sh -s -- -b /opt ${REVIEWDOG_VERSION}
-          java -jar /opt/checkstyle.jar "${WORKDIR}" -c "${CHECKSTYLE_CONFIG}" -f xml \
-               | /opt/reviewdog -f=checkstyle \
-                    -reporter="${INPUT_REPORTER:-github-pr-check}" \
-                    -filter-mode="${INPUT_FILTER_MODE:-added}" \
-                    -fail-on-error="${INPUT_FAIL_ON_ERROR:-false}"
+          name: unit-test-logs
+          path: ${LOG_DIR}
diff --git a/.gitmodules b/.gitmodules
index 11414db..64a562a 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -21,3 +21,9 @@
 [submodule ".github/actions/lable-on-issue"]
        path = .github/actions/lable-on-issue
        url = https://github.com/xingchun-chen/labeler
+[submodule ".github/actions/translate-on-issue"]
+       path = .github/actions/translate-on-issue
+       url = https://github.com/xingchun-chen/translation-helper.git
+[submodule ".github/actions/reviewdog-setup"]
+       path = .github/actions/reviewdog-setup
+       url = https://github.com/reviewdog/action-setup
diff --git a/.licenserc.yaml b/.licenserc.yaml
index 8f69da5..44a776e 100644
--- a/.licenserc.yaml
+++ b/.licenserc.yaml
@@ -40,5 +40,10 @@ header:
     - '**/.gitignore'
     - '**/LICENSE'
     - '**/NOTICE'
+    - '**/node_modules/**'
+    - '.github/actions/comment-on-issue/**'
+    - '.github/actions/lable-on-issue/**'
+    - '.github/actions/reviewdog-setup/**'
+    - '.github/actions/translate-on-issue/**'
 
   comment: on-failure
diff --git a/README.md b/README.md
index 9a4d7a8..5e304fd 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,7 @@ Dolphin Scheduler Official Website
 [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apache-dolphinscheduler&metric=alert_status)](https://sonarcloud.io/dashboard?id=apache-dolphinscheduler)
 [![Twitter Follow](https://img.shields.io/twitter/follow/dolphinschedule.svg?style=social&label=Follow)](https://twitter.com/dolphinschedule)
 [![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw)
+[![Join the chat at https://gitter.im/apache-dolphinscheduler/community](https://badges.gitter.im/apache-dolphinscheduler/community.svg)](https://gitter.im/apache-dolphinscheduler/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
 
 
 
@@ -91,11 +92,11 @@ We would like to express our deep gratitude to all the 
open-source projects used
 
 You are very welcome to communicate with the developers and users of Dolphin 
Scheduler. There are two ways to find them:
 1. Join the Slack channel by [this invitation 
link](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw).
-2. Follow the [Twitter account of Dolphin 
Scheduler](https://twitter.com/dolphinschedule) and get the latest news on time.
+2. Follow the [Twitter account of 
DolphinScheduler](https://twitter.com/dolphinschedule) and get the latest news 
on time.
 
 ### Contributor over time
   
-[![Contributor over 
time](https://contributor-graph-api.apiseven.com/contributors-svg?chart=contributorOverTime&repo=apache/dolphinscheduler)](https://www.apiseven.com/en/contributor-graph?chart=contributorOverTime&repo=apache/dolphinscheduler)
+[![Contributor over 
time](https://contributor-graph-api.apiseven.com/contributors-svg?chart=contributorOverTime&repo=apache/dolphinscheduler)](https://www.apiseven.com/en/contributor-graph?chart=contributorOverTime&repo=apache/dolphinscheduler)
 
 
 ## How to Contribute
 
diff --git a/docker/build/hooks/build b/docker/build/hooks/build
index 0590761..70ea260 100755
--- a/docker/build/hooks/build
+++ b/docker/build/hooks/build
@@ -39,8 +39,8 @@ echo "Repo: $DOCKER_REPO"
 echo -e "Current Directory is $(pwd)\n"
 
 # maven package(Project Directory)
-echo -e "mvn -B clean compile package -Prelease -Dmaven.test.skip=true"
-mvn -B clean compile package -Prelease -Dmaven.test.skip=true
+echo -e "./mvnw -B clean package -Prelease -Dmaven.test.skip=true 
-Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false 
-Dmaven.wagon.httpconnectionManager.ttlSeconds=120"
+./mvnw -B clean package -Prelease -Dmaven.test.skip=true 
-Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false 
-Dmaven.wagon.httpconnectionManager.ttlSeconds=120
 
 # mv dolphinscheduler-bin.tar.gz file to docker/build directory
 echo -e "mv 
$(pwd)/dolphinscheduler-dist/target/apache-dolphinscheduler-${VERSION}-bin.tar.gz
 $(pwd)/docker/build/\n"
diff --git a/docker/build/hooks/build.bat b/docker/build/hooks/build.bat
index d4d538b..6aa3726 100644
--- a/docker/build/hooks/build.bat
+++ b/docker/build/hooks/build.bat
@@ -39,8 +39,8 @@ echo "Repo: %DOCKER_REPO%"
 echo "Current Directory is %cd%"
 
 :: maven package(Project Directory)
-echo "call mvn clean compile package -Prelease"
-call mvn clean compile package -Prelease -DskipTests=true
+echo "mvn clean package -Prelease -DskipTests=true -Dhttp.keepAlive=false 
-Dmaven.wagon.http.pool=false 
-Dmaven.wagon.httpconnectionManager.ttlSeconds=120"
+call mvn clean package -Prelease -DskipTests=true -Dhttp.keepAlive=false 
-Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120
 if "%errorlevel%"=="1" goto :mvnFailed
 
 :: move dolphinscheduler-bin.tar.gz file to docker/build directory
diff --git a/dolphinscheduler-alert-plugin/dolphinscheduler-alert-email/pom.xml 
b/dolphinscheduler-alert-plugin/dolphinscheduler-alert-email/pom.xml
index 74dedf4..079185c 100644
--- a/dolphinscheduler-alert-plugin/dolphinscheduler-alert-email/pom.xml
+++ b/dolphinscheduler-alert-plugin/dolphinscheduler-alert-email/pom.xml
@@ -31,17 +31,6 @@
     <packaging>dolphinscheduler-plugin</packaging>
 
     <dependencies>
-
-        <dependency>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>jackson-annotations</artifactId>
-            <scope>provided</scope>
-        </dependency>
-        <dependency>
-            <groupId>com.fasterxml.jackson.core</groupId>
-            <artifactId>jackson-databind</artifactId>
-            <scope>provided</scope>
-        </dependency>
         <dependency>
             <groupId>org.apache.commons</groupId>
             <artifactId>commons-collections4</artifactId>
@@ -131,4 +120,4 @@
         <finalName>dolphinscheduler-alert-email-${project.version}</finalName>
     </build>
 
-</project>
\ No newline at end of file
+</project>
diff --git 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/ExecutorController.java
 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/ExecutorController.java
index 605a960..762c525 100644
--- 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/ExecutorController.java
+++ 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/ExecutorController.java
@@ -99,6 +99,7 @@ public class ExecutorController extends BaseController {
         @ApiImplicitParam(name = "processInstancePriority", value = 
"PROCESS_INSTANCE_PRIORITY", required = true, dataType = "Priority"),
         @ApiImplicitParam(name = "workerGroup", value = "WORKER_GROUP", 
dataType = "String", example = "default"),
         @ApiImplicitParam(name = "timeout", value = "TIMEOUT", dataType = 
"Int", example = "100"),
+        @ApiImplicitParam(name = "expectedParallelismNumber", value = 
"EXPECTED_PARALLELISM_NUMBER", dataType = "Int", example = "8")
     })
     @PostMapping(value = "start-process-instance")
     @ResponseStatus(HttpStatus.OK)
@@ -118,7 +119,8 @@ public class ExecutorController extends BaseController {
                                        @RequestParam(value = 
"processInstancePriority", required = false) Priority processInstancePriority,
                                        @RequestParam(value = "workerGroup", 
required = false, defaultValue = "default") String workerGroup,
                                        @RequestParam(value = "timeout", 
required = false) Integer timeout,
-                                       @RequestParam(value = "startParams", 
required = false) String startParams) {
+                                       @RequestParam(value = "startParams", 
required = false) String startParams,
+                                       @RequestParam(value = "timeout", 
required = false) Integer expectedParallelismNumber) {
 
         if (timeout == null) {
             timeout = Constants.MAX_TASK_TIMEOUT;
@@ -128,8 +130,7 @@ public class ExecutorController extends BaseController {
             startParamMap = JSONUtils.toMap(startParams);
         }
         Map<String, Object> result = 
execService.execProcessInstance(loginUser, projectCode, processDefinitionCode, 
scheduleTime, execType, failureStrategy,
-            startNodeList, taskDependType, warningType,
-            warningGroupId, runMode, processInstancePriority, workerGroup, 
timeout, startParamMap);
+            startNodeList, taskDependType, warningType, warningGroupId, 
runMode, processInstancePriority, workerGroup, timeout, startParamMap, 
expectedParallelismNumber);
         return returnDataList(result);
     }
 
diff --git 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/ExecutorService.java
 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/ExecutorService.java
index ac850ff..88acff3 100644
--- 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/ExecutorService.java
+++ 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/ExecutorService.java
@@ -52,6 +52,7 @@ public interface ExecutorService {
      * @param runMode run mode
      * @param timeout timeout
      * @param startParams the global param values which pass to new process 
instance
+     * @param expectedParallelismNumber the expected parallelism number when executing complement in parallel mode
      * @return execute process instance code
      */
     Map<String, Object> execProcessInstance(User loginUser, long projectCode,
@@ -60,7 +61,7 @@ public interface ExecutorService {
                                             TaskDependType taskDependType, 
WarningType warningType, int warningGroupId,
                                             RunMode runMode,
                                             Priority processInstancePriority, 
String workerGroup, Integer timeout,
-                                            Map<String, String> startParams);
+                                            Map<String, String> startParams, 
Integer expectedParallelismNumber);
 
     /**
      * check whether the process definition can be executed
diff --git 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ExecutorServiceImpl.java
 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ExecutorServiceImpl.java
index 28c8c67..35f9330 100644
--- 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ExecutorServiceImpl.java
+++ 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ExecutorServiceImpl.java
@@ -51,6 +51,7 @@ import org.apache.dolphinscheduler.dao.entity.Schedule;
 import org.apache.dolphinscheduler.dao.entity.Tenant;
 import org.apache.dolphinscheduler.dao.entity.User;
 import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
+import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
 import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
 import org.apache.dolphinscheduler.service.process.ProcessService;
 import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
@@ -89,6 +90,11 @@ public class ExecutorServiceImpl extends BaseServiceImpl 
implements ExecutorServ
     @Autowired
     private MonitorService monitorService;
 
+
+    @Autowired
+    private ProcessInstanceMapper processInstanceMapper;
+
+
     @Autowired
     private ProcessService processService;
 
@@ -100,7 +106,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl 
implements ExecutorServ
      * @param processDefinitionCode process definition code
      * @param cronTime cron time
      * @param commandType command type
-     * @param failureStrategy failuer strategy
+     * @param failureStrategy failure strategy
      * @param startNodeList start nodelist
      * @param taskDependType node dependency type
      * @param warningType warning type
@@ -110,6 +116,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl 
implements ExecutorServ
      * @param runMode run mode
      * @param timeout timeout
      * @param startParams the global param values which pass to new process 
instance
+     * @param expectedParallelismNumber the expected parallelism number when executing complement in parallel mode
      * @return execute process instance code
      */
     @Override
@@ -119,7 +126,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl 
implements ExecutorServ
                                                    TaskDependType 
taskDependType, WarningType warningType, int warningGroupId,
                                                    RunMode runMode,
                                                    Priority 
processInstancePriority, String workerGroup, Integer timeout,
-                                                   Map<String, String> 
startParams) {
+                                                   Map<String, String> 
startParams, Integer expectedParallelismNumber) {
         Project project = projectMapper.queryByCode(projectCode);
         //check user access for project
         Map<String, Object> result = 
projectService.checkProjectAndAuth(loginUser, project, projectCode);
@@ -156,7 +163,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl 
implements ExecutorServ
          */
         int create = this.createCommand(commandType, 
processDefinition.getCode(),
                 taskDependType, failureStrategy, startNodeList, cronTime, 
warningType, loginUser.getId(),
-                warningGroupId, runMode, processInstancePriority, workerGroup, 
startParams);
+                warningGroupId, runMode, processInstancePriority, workerGroup, 
startParams, expectedParallelismNumber);
 
         if (create > 0) {
             processDefinition.setWarningGroupId(warningGroupId);
@@ -485,7 +492,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl 
implements ExecutorServ
                               String startNodeList, String schedule, 
WarningType warningType,
                               int executorId, int warningGroupId,
                               RunMode runMode, Priority 
processInstancePriority, String workerGroup,
-                              Map<String, String> startParams) {
+                              Map<String, String> startParams, Integer 
expectedParallelismNumber) {
 
         /**
          * instantiate command schedule instance
@@ -542,21 +549,31 @@ public class ExecutorServiceImpl extends BaseServiceImpl 
implements ExecutorServ
                     return processService.createCommand(command);
                 } else if (runMode == RunMode.RUN_MODE_PARALLEL) {
                     List<Schedule> schedules = 
processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefineCode);
-                    List<Date> listDate = new LinkedList<>();
+                    LinkedList<Date> listDate = new LinkedList<>();
                     if (!CollectionUtils.isEmpty(schedules)) {
                         for (Schedule item : schedules) {
                             
listDate.addAll(CronUtils.getSelfFireDateList(start, end, item.getCrontab()));
                         }
                     }
                     if (!CollectionUtils.isEmpty(listDate)) {
-                        // loop by schedule date
-                        for (Date date : listDate) {
-                            cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, 
DateUtils.dateToString(date));
-                            cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, 
DateUtils.dateToString(date));
+                        int effectThreadsCount = expectedParallelismNumber == 
null ? listDate.size() : Math.min(listDate.size(), expectedParallelismNumber);
+                        logger.info("In parallel mode, current 
expectedParallelismNumber:{}", effectThreadsCount);
+
+                        int chunkSize = listDate.size() / effectThreadsCount;
+                        listDate.addFirst(start);
+                        listDate.addLast(end);
+
+                        for (int i = 0; i < effectThreadsCount; i++) {
+                            int rangeStart = i == 0 ? i : (i * chunkSize);
+                            int rangeEnd = i == effectThreadsCount - 1 ? 
listDate.size() - 1
+                                    : rangeStart + chunkSize + 1;
+                            cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, 
DateUtils.dateToString(listDate.get(rangeStart)));
+                            cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, 
DateUtils.dateToString(listDate.get(rangeEnd)));
                             
command.setCommandParam(JSONUtils.toJsonString(cmdParam));
                             processService.createCommand(command);
                         }
-                        return listDate.size();
+
+                        return effectThreadsCount;
                     } else {
                         // loop by day
                         int runCunt = 0;
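The range arithmetic in the hunk above can be sketched as a standalone snippet, using integer indices in place of schedule dates. The class and method names (`ComplementChunking`, `chunkRanges`) are illustrative, not part of the patch; the index math mirrors `ExecutorServiceImpl.createCommand` after `start` and `end` are prepended/appended to the fire-date list:

```java
import java.util.ArrayList;
import java.util.List;

public class ComplementChunking {
    // Illustrative mirror of the patched loop: split dateCount fire dates
    // across effectThreadsCount complement commands, returning the
    // [rangeStart, rangeEnd] index pair each command would cover.
    static List<int[]> chunkRanges(int dateCount, Integer expectedParallelismNumber) {
        int effectThreadsCount = expectedParallelismNumber == null
                ? dateCount
                : Math.min(dateCount, expectedParallelismNumber);
        int chunkSize = dateCount / effectThreadsCount;
        // In the patch, listDate.addFirst(start)/addLast(end) grow the list by 2.
        int sizeWithBounds = dateCount + 2;
        List<int[]> ranges = new ArrayList<>();
        for (int i = 0; i < effectThreadsCount; i++) {
            int rangeStart = i == 0 ? i : (i * chunkSize);
            int rangeEnd = i == effectThreadsCount - 1
                    ? sizeWithBounds - 1
                    : rangeStart + chunkSize + 1;
            ranges.add(new int[]{rangeStart, rangeEnd});
        }
        return ranges;
    }

    public static void main(String[] args) {
        // 10 fire dates capped at parallelism 3 -> 3 commands
        for (int[] r : chunkRanges(10, 3)) {
            System.out.println(r[0] + ".." + r[1]);
        }
    }
}
```

Note that with 10 dates and parallelism 3 the produced index ranges are 0..4, 3..7 and 6..11, i.e. adjacent chunks overlap at their boundaries; the snippet reproduces the patch's arithmetic as written rather than asserting it is the intended partitioning.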
diff --git 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TenantServiceImpl.java
 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TenantServiceImpl.java
index a362131..c690ed3 100644
--- 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TenantServiceImpl.java
+++ 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TenantServiceImpl.java
@@ -155,8 +155,8 @@ public class TenantServiceImpl extends BaseServiceImpl 
implements TenantService
      * updateProcessInstance tenant
      *
      * @param loginUser  login user
-     * @param id         tennat id
-     * @param tenantCode tennat code
+     * @param id         tenant id
+     * @param tenantCode tenant code
      * @param queueId    queue id
      * @param desc       description
      * @return update result code
diff --git 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/UsersServiceImpl.java
 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/UsersServiceImpl.java
index fd19570..3b4d78f 100644
--- 
a/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/UsersServiceImpl.java
+++ 
b/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/UsersServiceImpl.java
@@ -313,7 +313,7 @@ public class UsersServiceImpl extends BaseServiceImpl 
implements UsersService {
      *
      * @param loginUser login user
      * @param pageNo page number
-     * @param searchVal search avlue
+     * @param searchVal search value
      * @param pageSize page size
      * @return user list page
      */
@@ -347,7 +347,7 @@ public class UsersServiceImpl extends BaseServiceImpl 
implements UsersService {
      * @param userName user name
      * @param userPassword user password
      * @param email email
-     * @param tenantId tennat id
+     * @param tenantId tenant id
      * @param phone phone
      * @param queue queue
      * @return update result code
diff --git 
a/dolphinscheduler-api/src/main/resources/i18n/messages_en_US.properties 
b/dolphinscheduler-api/src/main/resources/i18n/messages_en_US.properties
index a7c8b84..db705be 100644
--- a/dolphinscheduler-api/src/main/resources/i18n/messages_en_US.properties
+++ b/dolphinscheduler-api/src/main/resources/i18n/messages_en_US.properties
@@ -171,6 +171,7 @@ PROCESS_INSTANCE_START_TIME=process instance start time
 PROCESS_INSTANCE_END_TIME=process instance end time
 PROCESS_INSTANCE_SIZE=process instance size
 PROCESS_INSTANCE_PRIORITY=process instance priority
+EXPECTED_PARALLELISM_NUMBER=custom parallelism to set the number of complement task threads
 UPDATE_SCHEDULE_NOTES=update schedule 
 SCHEDULE_ID=schedule id
 ONLINE_SCHEDULE_NOTES=online schedule
diff --git 
a/dolphinscheduler-api/src/main/resources/i18n/messages_zh_CN.properties 
b/dolphinscheduler-api/src/main/resources/i18n/messages_zh_CN.properties
index acc5be8..ec88f74 100644
--- a/dolphinscheduler-api/src/main/resources/i18n/messages_zh_CN.properties
+++ b/dolphinscheduler-api/src/main/resources/i18n/messages_zh_CN.properties
@@ -157,6 +157,7 @@ RECEIVERS=收件人
 RECEIVERS_CC=收件人(抄送)
 WORKER_GROUP_ID=Worker Server分组ID
 PROCESS_INSTANCE_PRIORITY=流程实例优先级
+EXPECTED_PARALLELISM_NUMBER=补数任务自定义并行度
 UPDATE_SCHEDULE_NOTES=更新定时
 SCHEDULE_ID=定时ID
 ONLINE_SCHEDULE_NOTES=定时上线
diff --git 
a/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/ExecutorControllerTest.java
 
b/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/ExecutorControllerTest.java
index 1bf10ae..5751e01 100644
--- 
a/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/ExecutorControllerTest.java
+++ 
b/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/ExecutorControllerTest.java
@@ -22,23 +22,16 @@ import static 
org.springframework.test.web.servlet.result.MockMvcResultMatchers.
 import static 
org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
 
 import org.apache.dolphinscheduler.api.enums.ExecuteType;
-import org.apache.dolphinscheduler.api.enums.Status;
-import org.apache.dolphinscheduler.api.service.ExecutorService;
 import org.apache.dolphinscheduler.api.utils.Result;
-import org.apache.dolphinscheduler.common.Constants;
 import org.apache.dolphinscheduler.common.enums.FailureStrategy;
 import org.apache.dolphinscheduler.common.enums.WarningType;
 import org.apache.dolphinscheduler.common.utils.JSONUtils;
 
-import java.util.HashMap;
-import java.util.Map;
-
 import org.junit.Assert;
+import org.junit.Ignore;
 import org.junit.Test;
-import org.mockito.Mockito;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-import org.springframework.boot.test.mock.mockito.MockBean;
 import org.springframework.http.MediaType;
 import org.springframework.test.web.servlet.MvcResult;
 import org.springframework.util.LinkedMultiValueMap;
@@ -51,79 +44,65 @@ public class ExecutorControllerTest extends 
AbstractControllerTest {
 
     private static Logger logger = 
LoggerFactory.getLogger(ExecutorControllerTest.class);
 
-    @MockBean
-    private ExecutorService executorService;
-
+    @Ignore
     @Test
     public void testStartProcessInstance() throws Exception {
-        Map<String, Object> resultData = new HashMap<>();
-        resultData.put(Constants.STATUS, Status.SUCCESS);
-        Mockito.when(executorService.execProcessInstance(Mockito.any(), 
Mockito.anyLong(), Mockito.anyLong(), Mockito.any(),
-                Mockito.any(), Mockito.any(), Mockito.any(), Mockito.any(), 
Mockito.any(), Mockito.anyInt(),
-                Mockito.any(), Mockito.any(), Mockito.any(), Mockito.anyInt(), 
Mockito.any())).thenReturn(resultData);
-
         MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
-        paramsMap.add("processDefinitionCode", "1");
+        paramsMap.add("processDefinitionId", "40");
         paramsMap.add("scheduleTime", "");
         paramsMap.add("failureStrategy", 
String.valueOf(FailureStrategy.CONTINUE));
         paramsMap.add("startNodeList", "");
         paramsMap.add("taskDependType", "");
         paramsMap.add("execType", "");
         paramsMap.add("warningType", String.valueOf(WarningType.NONE));
-        paramsMap.add("warningGroupId", "1");
+        paramsMap.add("warningGroupId", "");
         paramsMap.add("receivers", "");
         paramsMap.add("receiversCc", "");
         paramsMap.add("runMode", "");
         paramsMap.add("processInstancePriority", "");
-        paramsMap.add("workerGroupId", "1");
+        paramsMap.add("workerGroupId", "");
         paramsMap.add("timeout", "");
 
-        MvcResult mvcResult = 
mockMvc.perform(post("/projects/{projectCode}/executors/start-process-instance",
 1L)
+        MvcResult mvcResult = 
mockMvc.perform(post("/projects/{projectName}/executors/start-process-instance",
 "cxc_1113")
             .header("sessionId", sessionId)
             .params(paramsMap))
             .andExpect(status().isOk())
             .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
             .andReturn();
         Result result = 
JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), 
Result.class);
-        Assert.assertEquals(Status.SUCCESS.getCode(), 
result.getCode().intValue());
+        Assert.assertTrue(result != null && result.isSuccess());
         logger.info(mvcResult.getResponse().getContentAsString());
     }
 
+    @Ignore
     @Test
     public void testExecute() throws Exception {
-        Map<String, Object> resultData = new HashMap<>();
-        resultData.put(Constants.STATUS, Status.SUCCESS);
-        Mockito.when(executorService.execute(Mockito.any(), Mockito.anyLong(), 
Mockito.anyInt(), Mockito.any())).thenReturn(resultData);
-
         MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
         paramsMap.add("processInstanceId", "40");
         paramsMap.add("executeType", String.valueOf(ExecuteType.NONE));
 
-        MvcResult mvcResult = 
mockMvc.perform(post("/projects/{projectCode}/executors/execute", 1L)
+        MvcResult mvcResult = 
mockMvc.perform(post("/projects/{projectName}/executors/execute", "cxc_1113")
             .header("sessionId", sessionId)
             .params(paramsMap))
             .andExpect(status().isOk())
             .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
             .andReturn();
         Result result = 
JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), 
Result.class);
-        Assert.assertEquals(Status.SUCCESS.getCode(), 
result.getCode().intValue());
+        Assert.assertTrue(result != null && result.isSuccess());
         logger.info(mvcResult.getResponse().getContentAsString());
     }
 
     @Test
-    public void testStartCheck() throws Exception {
-        Map<String, Object> resultData = new HashMap<>();
-        resultData.put(Constants.STATUS, Status.SUCCESS);
-        
Mockito.when(executorService.startCheckByProcessDefinedCode(Mockito.anyLong())).thenReturn(resultData);
+    public void testStartCheckProcessDefinition() throws Exception {
 
-        MvcResult mvcResult = 
mockMvc.perform(post("/projects/{projectCode}/executors/start-check", 1L)
+        MvcResult mvcResult = 
mockMvc.perform(post("/projects/{projectName}/executors/start-check", 
"cxc_1113")
             .header(SESSION_ID, sessionId)
-            .param("processDefinitionCode", "1"))
+            .param("processDefinitionId", "40"))
             .andExpect(status().isOk())
             .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
             .andReturn();
         Result result = 
JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), 
Result.class);
-        Assert.assertEquals(Status.SUCCESS.getCode(), 
result.getCode().intValue());
+        Assert.assertTrue(result != null && result.isSuccess());
         logger.info(mvcResult.getResponse().getContentAsString());
     }
 
diff --git 
a/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/ExecutorServiceTest.java
 
b/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/ExecutorServiceTest.java
index 4c1b3e4..e962be5 100644
--- 
a/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/ExecutorServiceTest.java
+++ 
b/dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/ExecutorServiceTest.java
@@ -158,7 +158,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(1)).createCommand(any(Command.class));
 
@@ -176,7 +176,7 @@ public class ExecutorServiceTest {
                 null, "n1,n2",
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(1)).createCommand(any(Command.class));
 
@@ -194,7 +194,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.START_PROCESS_INSTANCE_ERROR, 
result.get(Constants.STATUS));
         verify(processService, times(0)).createCommand(any(Command.class));
     }
@@ -211,7 +211,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_SERIAL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(1)).createCommand(any(Command.class));
     }
@@ -228,7 +228,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_PARALLEL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(31)).createCommand(any(Command.class));
 
@@ -246,7 +246,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_PARALLEL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 15);
         Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
         verify(processService, times(15)).createCommand(any(Command.class));
 
@@ -261,7 +261,7 @@ public class ExecutorServiceTest {
                 null, null,
                 null, null, 0,
                 RunMode.RUN_MODE_PARALLEL,
-                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null);
+                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 110, null, 0);
         Assert.assertEquals(result.get(Constants.STATUS), 
Status.MASTER_NOT_EXISTS);
 
     }
diff --git 
a/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/DbType.java
 
b/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/DbType.java
index 46d59d1..b994afb 100644
--- 
a/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/DbType.java
+++ 
b/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/DbType.java
@@ -14,65 +14,45 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+
 package org.apache.dolphinscheduler.common.enums;
 
-import com.baomidou.mybatisplus.annotation.EnumValue;
+import static java.util.stream.Collectors.toMap;
 
-import java.util.HashMap;
+import java.util.Arrays;
+import java.util.Map;
 
-/**
- * data base types
- */
-public enum DbType {
-    /**
-     * 0 mysql
-     * 1 postgresql
-     * 2 hive
-     * 3 spark
-     * 4 clickhouse
-     * 5 oracle
-     * 6 sqlserver
-     * 7 db2
-     * 8 presto
-     */
-    MYSQL(0, "mysql"),
-    POSTGRESQL(1, "postgresql"),
-    HIVE(2, "hive"),
-    SPARK(3, "spark"),
-    CLICKHOUSE(4, "clickhouse"),
-    ORACLE(5, "oracle"),
-    SQLSERVER(6, "sqlserver"),
-    DB2(7, "db2"),
-    PRESTO(8, "presto");
+import com.baomidou.mybatisplus.annotation.EnumValue;
+import com.google.common.base.Functions;
 
-    DbType(int code, String descp) {
+public enum DbType {
+    MYSQL(0),
+    POSTGRESQL(1),
+    HIVE(2),
+    SPARK(3),
+    CLICKHOUSE(4),
+    ORACLE(5),
+    SQLSERVER(6),
+    DB2(7),
+    PRESTO(8),
+    H2(9);
+
+    DbType(int code) {
         this.code = code;
-        this.descp = descp;
     }
 
     @EnumValue
     private final int code;
-    private final String descp;
 
     public int getCode() {
         return code;
     }
 
-    public String getDescp() {
-        return descp;
-    }
-
-
-    private static HashMap<Integer, DbType> DB_TYPE_MAP =new HashMap<>();
-
-    static {
-        for (DbType dbType:DbType.values()){
-            DB_TYPE_MAP.put(dbType.getCode(),dbType);
-        }
-    }
+    private static final Map<Integer, DbType> DB_TYPE_MAP =
+            Arrays.stream(DbType.values()).collect(toMap(DbType::getCode, 
Functions.identity()));
 
-    public static DbType of(int type){
-        if(DB_TYPE_MAP.containsKey(type)){
+    public static DbType of(int type) {
+        if (DB_TYPE_MAP.containsKey(type)) {
             return DB_TYPE_MAP.get(type);
         }
         return null;
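The `DbType` refactor above replaces a static-initializer-populated `HashMap` with a stream collected into an immutable lookup map. A minimal self-contained sketch of the same pattern, assuming a trimmed enum and substituting `java.util.function.Function.identity()` for the Guava `Functions.identity()` used in the patch:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import static java.util.stream.Collectors.toMap;

public class EnumLookup {
    enum DbType {
        MYSQL(0),
        POSTGRESQL(1),
        HIVE(2);

        private final int code;

        DbType(int code) {
            this.code = code;
        }

        public int getCode() {
            return code;
        }

        // Built once during enum class initialization, like DB_TYPE_MAP in the patch;
        // static fields in an enum are initialized after the constants, so values()
        // is safe to use here.
        private static final Map<Integer, DbType> BY_CODE =
                Arrays.stream(values()).collect(toMap(DbType::getCode, Function.identity()));

        public static DbType of(int code) {
            return BY_CODE.get(code); // null when the code is unknown
        }
    }

    public static void main(String[] args) {
        System.out.println(DbType.of(1)); // POSTGRESQL
    }
}
```

The stream version removes the mutable `HashMap` and the separate `static` block, and `toMap` will throw at class-initialization time if two constants ever share a code, which the hand-rolled loop silently tolerated.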
diff --git a/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/graph/DAG.java b/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/graph/DAG.java
index deaf80f..397f32e 100644
--- a/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/graph/DAG.java
+++ b/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/graph/DAG.java
@@ -432,13 +432,11 @@ public class DAG<Node, NodeInfo, EdgeInfo> {
    * @return all neighbor nodes of the node
    */
  private Set<Node> getNeighborNodes(Node node, final Map<Node, Map<Node, EdgeInfo>> edges) {
-    final Map<Node, EdgeInfo> neighborEdges = edges.get(node);
-
-    if (neighborEdges == null) {
-      return Collections.EMPTY_MAP.keySet();
-    }
-
-    return neighborEdges.keySet();
+        final Map<Node, EdgeInfo> neighborEdges = edges.get(node);
+        if (neighborEdges == null) {
+            return Collections.emptySet();
+        }
+        return neighborEdges.keySet();
   }
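The `Collections.emptySet()` swap above is more than cosmetic: the raw `Collections.EMPTY_MAP.keySet()` yields an untyped `Set` and an unchecked conversion at the call site, while `emptySet()` infers `Set<Node>` and still allocates nothing. A small self-contained sketch (the names are illustrative, not from the patch):

```java
import java.util.Collections;
import java.util.Set;

// Demonstrates the null-guard shape used in DAG.getNeighborNodes: return a
// typed, shared, immutable empty set instead of the raw EMPTY_MAP.keySet().
class EmptySetDemo {
    static <Node> Set<Node> neighbors(Set<Node> maybeNull) {
        if (maybeNull == null) {
            return Collections.emptySet(); // typed via inference, no allocation
        }
        return maybeNull;
    }
}
```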
 
 
diff --git a/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java b/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java
index 065d7bc..53a97d9 100644
--- a/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java
+++ b/dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java
@@ -34,15 +34,7 @@ import java.util.Set;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-/**
- * property utils
- * single instance
- */
 public class PropertyUtils {
-
-    /**
-     * logger
-     */
     private static final Logger logger = LoggerFactory.getLogger(PropertyUtils.class);
 
     private static final Properties properties = new Properties();
@@ -55,9 +47,6 @@ public class PropertyUtils {
         loadPropertyFile(COMMON_PROPERTIES_PATH);
     }
 
-    /**
-     * init properties
-     */
     public static synchronized void loadPropertyFile(String... propertyFiles) {
         for (String fileName : propertyFiles) {
             try (InputStream fis = PropertyUtils.class.getResourceAsStream(fileName);) {
@@ -68,6 +57,13 @@ public class PropertyUtils {
                 System.exit(1);
             }
         }
+
+        // Override from system properties
+        System.getProperties().forEach((k, v) -> {
+            final String key = String.valueOf(k);
+            logger.info("Overriding property from system property: {}", key);
+            PropertyUtils.setValue(key, String.valueOf(v));
+        });
     }
 
     /**
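The loop added to `loadPropertyFile` makes any `-Dkey=value` JVM flag override a value read from the property files, which is what lets the standalone server (later in this patch) configure the datasource via `System.setProperty`. A minimal sketch of that merge order, using a hypothetical `withSystemOverrides` helper rather than the real `PropertyUtils` API:

```java
import java.util.Properties;

// Sketch of the override logic added to PropertyUtils.loadPropertyFile:
// after the property files are read, every JVM system property replaces
// the file-based value of the same key (system properties win).
public class PropertyOverride {
    public static Properties withSystemOverrides(Properties fromFiles) {
        Properties merged = new Properties();
        merged.putAll(fromFiles);
        // Same iteration the patch performs over System.getProperties()
        System.getProperties().forEach((k, v) ->
                merged.setProperty(String.valueOf(k), String.valueOf(v)));
        return merged;
    }
}
```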
diff --git a/dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/datasource/SpringConnectionFactory.java b/dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/datasource/SpringConnectionFactory.java
index a58955d..ca4a7e2 100644
--- a/dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/datasource/SpringConnectionFactory.java
+++ b/dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/datasource/SpringConnectionFactory.java
@@ -166,6 +166,7 @@ public class SpringConnectionFactory {
         Properties properties = new Properties();
         properties.setProperty("MySQL", "mysql");
         properties.setProperty("PostgreSQL", "pg");
+        properties.setProperty("h2", "h2");
         databaseIdProvider.setProperties(properties);
         return databaseIdProvider;
     }
diff --git a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessDefinitionMapper.xml b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessDefinitionMapper.xml
index 7817cba..9f76dd1 100644
--- a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessDefinitionMapper.xml
+++ b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessDefinitionMapper.xml
@@ -77,7 +77,9 @@
         left join t_ds_user tu on td.user_id = tu.id
         where td.project_code = #{projectCode}
         <if test=" searchVal != null and searchVal != ''">
-            and td.name like concat('%', #{searchVal}, '%')
+            AND (td.name like concat('%', #{searchVal}, '%')
+            OR td.description like concat('%', #{searchVal}, '%')
+            )
         </if>
         <if test=" userId != 0">
             and td.user_id = #{userId}
diff --git a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessInstanceMapper.xml b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessInstanceMapper.xml
index 5880434..db56301 100644
--- a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessInstanceMapper.xml
+++ b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProcessInstanceMapper.xml
@@ -191,8 +191,8 @@
             </foreach>
         </if>
         <if test="startTime!=null and endTime != null ">
-            and (schedule_time <![CDATA[ >= ]]> #{startTime} and schedule_time <![CDATA[ <= ]]> #{endTime}
-            or start_time <![CDATA[ >= ]]> #{startTime} and start_time <![CDATA[ <= ]]> #{endTime})
+            and ((schedule_time <![CDATA[ >= ]]> #{startTime} and schedule_time <![CDATA[ <= ]]> #{endTime})
+            or (start_time <![CDATA[ >= ]]> #{startTime} and start_time <![CDATA[ <= ]]> #{endTime}))
         </if>
         order by start_time desc limit 1
     </select>
diff --git a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProjectMapper.xml b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProjectMapper.xml
index 2fc077c..d1cc5f7 100644
--- a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProjectMapper.xml
+++ b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/ProjectMapper.xml
@@ -88,7 +88,9 @@
             )
         </if>
         <if test="searchName!=null and searchName != ''">
-            and p.name like concat('%', #{searchName}, '%')
+            AND (p.name LIKE concat('%', #{searchName}, '%')
+            OR p.description LIKE concat('%', #{searchName}, '%')
+            )
         </if>
         order by p.create_time desc
     </select>
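Note the parentheses wrapping the new `OR` branch in the two mapper changes above: `AND` binds tighter than `OR` in SQL, so without them a description match alone could bypass the preceding filters. Java's `&&`/`||` follow the same precedence, which this illustrative sketch (names are made up for the example) demonstrates:

```java
// Why the patch parenthesizes the new OR clause: without parens,
// "filter AND nameHit OR descHit" groups as "(filter AND nameHit) OR descHit",
// letting a description match leak past the filter. The parenthesized form
// keeps the filter applied to both match branches.
public class Precedence {
    public static boolean unparenthesized(boolean filter, boolean nameHit, boolean descHit) {
        return filter && nameHit || descHit;   // descHit alone is enough - wrong
    }

    public static boolean parenthesized(boolean filter, boolean nameHit, boolean descHit) {
        return filter && (nameHit || descHit); // filter always applies - intended
    }
}
```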
diff --git a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/WorkFlowLineageMapper.xml b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/WorkFlowLineageMapper.xml
index eeddaf5..7d1dbfb 100644
--- a/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/WorkFlowLineageMapper.xml
+++ b/dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/WorkFlowLineageMapper.xml
@@ -64,8 +64,7 @@
                 and project_code = #{projectCode}
     </select>
 
-    <select id="queryWorkFlowLineageByCode" resultType="org.apache.dolphinscheduler.dao.entity.WorkFlowLineage"
-            databaseId="mysql">
+    <select id="queryWorkFlowLineageByCode" resultType="org.apache.dolphinscheduler.dao.entity.WorkFlowLineage">
         select tepd.id as work_flow_id,tepd.name as work_flow_name,
                "" as source_work_flow_id,
                 tepd.release_state as work_flow_publish_status,
diff --git a/dolphinscheduler-dao/src/test/java/org/apache/dolphinscheduler/dao/utils/ResourceProcessDefinitionUtilsTest.java b/dolphinscheduler-dao/src/test/java/org/apache/dolphinscheduler/dao/utils/ResourceProcessDefinitionUtilsTest.java
index 482aa6e..67828d7 100644
--- a/dolphinscheduler-dao/src/test/java/org/apache/dolphinscheduler/dao/utils/ResourceProcessDefinitionUtilsTest.java
+++ b/dolphinscheduler-dao/src/test/java/org/apache/dolphinscheduler/dao/utils/ResourceProcessDefinitionUtilsTest.java
@@ -31,7 +31,7 @@ public class ResourceProcessDefinitionUtilsTest {
     @Test
     public void getResourceProcessDefinitionMapTest(){
         List<Map<String,Object>> mapList = new ArrayList<>();
-        Map<String,Object> map = new HashMap();
+        Map<String,Object> map = new HashMap<>();
         map.put("code",1L);
         map.put("resource_ids","1,2,3");
         mapList.add(map);
diff --git a/dolphinscheduler-dist/pom.xml b/dolphinscheduler-dist/pom.xml
index 33a711c..e999a49 100644
--- a/dolphinscheduler-dist/pom.xml
+++ b/dolphinscheduler-dist/pom.xml
@@ -39,6 +39,11 @@
 
         <dependency>
             <groupId>org.apache.dolphinscheduler</groupId>
+            <artifactId>dolphinscheduler-standalone-server</artifactId>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.dolphinscheduler</groupId>
             <artifactId>dolphinscheduler-api</artifactId>
         </dependency>
 
@@ -377,4 +382,4 @@
 
     </profiles>
 
-</project>
\ No newline at end of file
+</project>
diff --git a/dolphinscheduler-dist/release-docs/LICENSE b/dolphinscheduler-dist/release-docs/LICENSE
index 2308359..19da58b 100644
--- a/dolphinscheduler-dist/release-docs/LICENSE
+++ b/dolphinscheduler-dist/release-docs/LICENSE
@@ -249,6 +249,7 @@ The text of each license is also included at licenses/LICENSE-[project].txt.
    curator-client 4.3.0: https://mvnrepository.com/artifact/org.apache.curator/curator-client/4.3.0, Apache 2.0
    curator-framework 4.3.0: https://mvnrepository.com/artifact/org.apache.curator/curator-framework/4.3.0, Apache 2.0
    curator-recipes 4.3.0: https://mvnrepository.com/artifact/org.apache.curator/curator-recipes/4.3.0, Apache 2.0
+    curator-test 2.12.0: https://mvnrepository.com/artifact/org.apache.curator/curator-test/2.12.0, Apache 2.0
    datanucleus-api-jdo 4.2.1: https://mvnrepository.com/artifact/org.datanucleus/datanucleus-api-jdo/4.2.1, Apache 2.0
    datanucleus-core 4.1.6: https://mvnrepository.com/artifact/org.datanucleus/datanucleus-core/4.1.6, Apache 2.0
    datanucleus-rdbms 4.1.7: https://mvnrepository.com/artifact/org.datanucleus/datanucleus-rdbms/4.1.7, Apache 2.0
@@ -557,4 +558,4 @@ Apache 2.0 licenses
 ========================================
 BSD licenses
 ========================================
-    d3 3.5.17: https://github.com/d3/d3 BSD-3-Clause
\ No newline at end of file
+    d3 3.5.17: https://github.com/d3/d3 BSD-3-Clause
diff --git a/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/dispatch/executor/NettyExecutorManager.java b/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/dispatch/executor/NettyExecutorManager.java
index bb1e314..91c954a 100644
--- a/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/dispatch/executor/NettyExecutorManager.java
+++ b/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/dispatch/executor/NettyExecutorManager.java
@@ -178,7 +178,7 @@ public class NettyExecutorManager extends AbstractExecutorManager<Boolean>{
      * @return nodes
      */
     private Set<String> getAllNodes(ExecutionContext context){
-        Set<String> nodes = Collections.EMPTY_SET;
+        Set<String> nodes = Collections.emptySet();
         /**
          * executor type
          */
diff --git a/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterBaseTaskExecThread.java b/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterBaseTaskExecThread.java
index a784e12..cfd8a9a 100644
--- a/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterBaseTaskExecThread.java
+++ b/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterBaseTaskExecThread.java
@@ -191,7 +191,7 @@ public class MasterBaseTaskExecThread implements Callable<Boolean> {
     }
 
     /**
-     * dispatcht task
+     * dispatch task
      *
      * @param taskInstance taskInstance
      * @return whether submit task success
diff --git a/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/sql/SqlTask.java b/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/sql/SqlTask.java
index 9dd8b51..3c4b3ab 100644
--- a/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/sql/SqlTask.java
+++ b/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/sql/SqlTask.java
@@ -42,6 +42,8 @@ import org.apache.dolphinscheduler.server.worker.task.AbstractTask;
 import org.apache.dolphinscheduler.service.alert.AlertClientService;
 import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
 
+import org.apache.commons.collections.MapUtils;
+
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
@@ -271,11 +273,11 @@ public class SqlTask extends AbstractTask {
 
     public String setNonQuerySqlReturn(String updateResult, List<Property> properties) {
         String result = null;
-        for (Property info :properties) {
+        for (Property info : properties) {
             if (Direct.OUT == info.getDirect()) {
-                List<Map<String,String>> updateRL = new ArrayList<>();
-                Map<String,String> updateRM = new HashMap<>();
-                updateRM.put(info.getProp(),updateResult);
+                List<Map<String, String>> updateRL = new ArrayList<>();
+                Map<String, String> updateRM = new HashMap<>();
+                updateRM.put(info.getProp(), updateResult);
                 updateRL.add(updateRM);
                 result = JSONUtils.toJsonString(updateRL);
                 break;
@@ -490,6 +492,10 @@ public class SqlTask extends AbstractTask {
     public void printReplacedSql(String content, String formatSql, String rgex, Map<Integer, Property> sqlParamsMap) {
         //parameter print style
         logger.info("after replace sql , preparing : {}", formatSql);
+        if (MapUtils.isEmpty(sqlParamsMap)) {
+            logger.info("sqlParamsMap should not be Empty");
+            return;
+        }
         StringBuilder logPrint = new StringBuilder("replaced sql , parameters:");
         if (sqlParamsMap == null) {
             logger.info("printReplacedSql: sqlParamsMap is null.");
diff --git a/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/MasterExecThreadTest.java b/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/MasterExecThreadTest.java
index 196fb54..b8eb7ff 100644
--- a/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/MasterExecThreadTest.java
+++ b/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/MasterExecThreadTest.java
@@ -101,8 +101,8 @@ public class MasterExecThreadTest {
         cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, "2020-01-20 23:00:00");
         Mockito.when(processInstance.getCommandParam()).thenReturn(JSONUtils.toJsonString(cmdParam));
         ProcessDefinition processDefinition = new ProcessDefinition();
-        processDefinition.setGlobalParamMap(Collections.EMPTY_MAP);
-        processDefinition.setGlobalParamList(Collections.EMPTY_LIST);
+        processDefinition.setGlobalParamMap(Collections.emptyMap());
+        processDefinition.setGlobalParamList(Collections.emptyList());
         Mockito.when(processInstance.getProcessDefinition()).thenReturn(processDefinition);
         Mockito.when(processInstance.getProcessDefinitionCode()).thenReturn(processDefinitionCode);
 
@@ -257,7 +257,7 @@ public class MasterExecThreadTest {
     }
 
     private List<Schedule> zeroSchedulerList() {
-        return Collections.EMPTY_LIST;
+        return Collections.emptyList();
     }
 
     private List<Schedule> oneSchedulerList() {
diff --git a/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/http/HttpTaskTest.java b/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/http/HttpTaskTest.java
index f0d5d79..04b2a0d 100644
--- a/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/http/HttpTaskTest.java
+++ b/dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/http/HttpTaskTest.java
@@ -55,8 +55,6 @@ import org.springframework.context.ApplicationContext;
 public class HttpTaskTest {
     private static final Logger logger = LoggerFactory.getLogger(HttpTaskTest.class);
 
-
-
     private HttpTask httpTask;
 
     private ProcessService processService;
@@ -168,7 +166,7 @@ public class HttpTaskTest {
 
         } catch (IOException e) {
             e.printStackTrace();
-        };
+        }
     }
 
     @Test
diff --git a/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/quartz/ProcessScheduleJob.java b/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/quartz/ProcessScheduleJob.java
index eacd8bc..1de5c56 100644
--- a/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/quartz/ProcessScheduleJob.java
+++ b/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/quartz/ProcessScheduleJob.java
@@ -75,8 +75,8 @@ public class ProcessScheduleJob implements Job {
 
         // query schedule
         Schedule schedule = getProcessService().querySchedule(scheduleId);
-        if (schedule == null) {
-            logger.warn("process schedule does not exist in db,delete schedule job in quartz, projectId:{}, scheduleId:{}", projectId, scheduleId);
+        if (schedule == null || ReleaseState.OFFLINE == schedule.getReleaseState()) {
+            logger.warn("process schedule does not exist in db or process schedule offline,delete schedule job in quartz, projectId:{}, scheduleId:{}", projectId, scheduleId);
             deleteJob(projectId, scheduleId);
             return;
         }
diff --git a/dolphinscheduler-standalone-server/pom.xml b/dolphinscheduler-standalone-server/pom.xml
new file mode 100644
index 0000000..505a3b5
--- /dev/null
+++ b/dolphinscheduler-standalone-server/pom.xml
@@ -0,0 +1,52 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one or more
+  ~ contributor license agreements.  See the NOTICE file distributed with
+  ~ this work for additional information regarding copyright ownership.
+  ~ The ASF licenses this file to You under the Apache License, Version 2.0
+  ~ (the "License"); you may not use this file except in compliance with
+  ~ the License.  You may obtain a copy of the License at
+  ~
+  ~     http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing, software
+  ~ distributed under the License is distributed on an "AS IS" BASIS,
+  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~ See the License for the specific language governing permissions and
+  ~ limitations under the License.
+  -->
+<project xmlns="http://maven.apache.org/POM/4.0.0";
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>dolphinscheduler</artifactId>
+        <groupId>org.apache.dolphinscheduler</groupId>
+        <version>1.3.6-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>dolphinscheduler-standalone-server</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.dolphinscheduler</groupId>
+            <artifactId>dolphinscheduler-server</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.dolphinscheduler</groupId>
+            <artifactId>dolphinscheduler-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.curator</groupId>
+            <artifactId>curator-test</artifactId>
+            <version>${curator.test}</version>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.javassist</groupId>
+                    <artifactId>javassist</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+    </dependencies>
+
+</project>
diff --git a/dolphinscheduler-standalone-server/src/main/java/org/apache/dolphinscheduler/server/StandaloneServer.java b/dolphinscheduler-standalone-server/src/main/java/org/apache/dolphinscheduler/server/StandaloneServer.java
new file mode 100644
index 0000000..3b92b7f
--- /dev/null
+++ b/dolphinscheduler-standalone-server/src/main/java/org/apache/dolphinscheduler/server/StandaloneServer.java
@@ -0,0 +1,82 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.dolphinscheduler.server;
+
+import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_DRIVER_CLASS_NAME;
+import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_PASSWORD;
+import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_URL;
+import static org.apache.dolphinscheduler.common.Constants.SPRING_DATASOURCE_USERNAME;
+
+import org.apache.dolphinscheduler.api.ApiApplicationServer;
+import org.apache.dolphinscheduler.common.utils.ScriptRunner;
+import org.apache.dolphinscheduler.dao.datasource.ConnectionFactory;
+import org.apache.dolphinscheduler.server.master.MasterServer;
+import org.apache.dolphinscheduler.server.worker.WorkerServer;
+
+import org.apache.curator.test.TestingServer;
+
+import java.io.FileReader;
+import java.nio.file.Files;
+import java.nio.file.Path;
+
+import javax.sql.DataSource;
+
+import org.h2.tools.Server;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+import org.springframework.boot.builder.SpringApplicationBuilder;
+
+@SpringBootApplication
+public class StandaloneServer {
+    private static final Logger LOGGER = LoggerFactory.getLogger(StandaloneServer.class);
+
+    public static void main(String[] args) throws Exception {
+        System.setProperty("spring.profiles.active", "api");
+
+        final Path temp = Files.createTempDirectory("dolphinscheduler_");
+        LOGGER.info("H2 database directory: {}", temp);
+        System.setProperty(
+                SPRING_DATASOURCE_DRIVER_CLASS_NAME,
+                org.h2.Driver.class.getName()
+        );
+        System.setProperty(
+                SPRING_DATASOURCE_URL,
+                String.format("jdbc:h2:tcp://localhost/%s", temp.toAbsolutePath())
+        );
+        System.setProperty(SPRING_DATASOURCE_USERNAME, "sa");
+        System.setProperty(SPRING_DATASOURCE_PASSWORD, "");
+
+        Server.createTcpServer("-ifNotExists").start();
+
+        final DataSource ds = ConnectionFactory.getInstance().getDataSource();
+        final ScriptRunner runner = new ScriptRunner(ds.getConnection(), true, true);
+        runner.runScript(new FileReader("sql/dolphinscheduler_h2.sql"));
+
+        final TestingServer server = new TestingServer(true);
+        System.setProperty("registry.servers", server.getConnectString());
+
+        Thread.currentThread().setName("Standalone-Server");
+
+        new SpringApplicationBuilder(
+                ApiApplicationServer.class,
+                MasterServer.class,
+                WorkerServer.class
+        ).run(args);
+    }
+}
diff --git a/.github/workflows/ci_frontend.yml b/dolphinscheduler-standalone-server/src/main/resources/registry.properties
similarity index 52%
rename from .github/workflows/ci_frontend.yml
rename to dolphinscheduler-standalone-server/src/main/resources/registry.properties
index afa0c8d..3f557ce 100644
--- a/.github/workflows/ci_frontend.yml
+++ b/dolphinscheduler-standalone-server/src/main/resources/registry.properties
@@ -15,36 +15,8 @@
 # limitations under the License.
 #
 
-name: Frontend
+# This file is only to override the production configurations in standalone server.
 
-on:
-  push:
-    paths:
-      - '.github/workflows/ci_frontend.yml'
-      - 'dolphinscheduler-ui/**'
-  pull_request:
-    paths:
-      - '.github/workflows/ci_frontend.yml'
-      - 'dolphinscheduler-ui/**'
-
-jobs:
-  Compile-check:
-    runs-on: ${{ matrix.os }}
-    strategy:
-      matrix:
-        os: [ubuntu-latest, macos-latest]
-    steps:
-      - uses: actions/checkout@v2
-        with:
-          submodule: true
-      - name: Set up Node.js
-        uses: actions/setup-node@v1
-        with:
-          version: 8
-      - name: Compile
-        run: |
-          cd dolphinscheduler-ui
-          npm install node-sass --unsafe-perm
-          npm install
-          npm run lint
-          npm run build
+registry.plugin.dir=./dolphinscheduler-dist/target/dolphinscheduler-dist-1.3.6-SNAPSHOT/lib/plugin/registry/zookeeper
+registry.plugin.name=zookeeper
+registry.servers=127.0.0.1:2181
diff --git a/dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/start.vue b/dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/start.vue
index 2b478be..982a156 100644
--- a/dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/start.vue
+++ b/dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/start.vue
@@ -116,12 +116,31 @@
           {{$t('Mode of execution')}}
         </div>
         <div class="cont">
-          <el-radio-group v-model="runMode" style="margin-top: 7px;">
+          <el-radio-group @change="_updateParallelStatus" style="margin-top: 7px;"
+                          v-model="runMode">
            <el-radio :label="'RUN_MODE_SERIAL'">{{$t('Serial execution')}}</el-radio>
            <el-radio :label="'RUN_MODE_PARALLEL'">{{$t('Parallel execution')}}</el-radio>
           </el-radio-group>
         </div>
       </div>
+      <div class="clearfix list" style="margin:-6px 0 16px 0" v-if="runMode === 'RUN_MODE_PARALLEL'">
+        <div class="text" style="padding-top: 6px;">
+          <em @click="_showParallelismInfo" class="ans el-icon-warning"></em>
+          {{$t('Parallelism')}}
+        </div>
+        <div class="cont" style="padding-top: 8px;">
+          <el-checkbox @change="_updateEnableCustomParallel" size="small"
+                       v-model="enableCustomParallelism">{{$t('Custom Parallelism')}}
+            <el-input :disabled="!enableCustomParallelism"
+                      :placeholder="$t('Please enter Parallelism')"
+                      ref="parallelismInput"
+                      size="mini"
+                      type="input"
+                      v-model="parallismNumber">
+            </el-input>
+          </el-checkbox>
+        </div>
+      </div>
       <div class="clearfix list">
         <div class="text">
           {{$t('Schedule date')}}
@@ -164,6 +183,7 @@
 </template>
 <script>
   import _ from 'lodash'
+  import i18n from '@/module/i18n'
   import dayjs from 'dayjs'
   import store from '@/conf/home/store'
   import { warningTypeList } from './util'
@@ -188,6 +208,8 @@
         scheduleTime: '',
         spinnerLoading: false,
         execType: false,
+        enableCustomParallelism: false,
+        parallismNumber: null,
         taskDependType: 'TASK_POST',
         runMode: 'RUN_MODE_SERIAL',
         processInstancePriority: 'MEDIUM',
@@ -208,13 +230,33 @@
     },
     methods: {
       ...mapMutations('dag', ['setIsDetails', 'resetParams']),
+      _showParallelismInfo () {
+        this.$message.warning({
+          dangerouslyUseHTMLString: true,
+          message: `<p style='font-size: 14px;'>${i18n.$t('Parallelism tip')}</p>`
+        })
+      },
       _onLocalParams (a) {
         this.udpList = a
       },
       _datepicker (val) {
         this.scheduleTime = val
       },
+      _verification () {
+        if (this.enableCustomParallelism && !this.parallismNumber) {
+          this.$message.warning(`${i18n.$t('Parallelism number should be positive integer')}`)
+          return false
+        }
+        if (this.parallismNumber && !(/(^[1-9]\d*$)/.test(this.parallismNumber))) {
+          this.$message.warning(`${i18n.$t('Parallelism number should be positive integer')}`)
+          return false
+        }
+        return true
+      },
       _start () {
+        if (!this._verification()) {
+          return
+        }
         this.spinnerLoading = true
         let startParams = {}
         for (const item of this.udpList) {
@@ -234,7 +276,8 @@
           runMode: this.runMode,
           processInstancePriority: this.processInstancePriority,
           workerGroup: this.workerGroup,
-          startParams: !_.isEmpty(startParams) ? JSON.stringify(startParams) : ''
+          startParams: !_.isEmpty(startParams) ? JSON.stringify(startParams) : '',
+          expectedParallelismNumber: this.parallismNumber
         }
         // Executed from the specified node
         if (this.sourceType === 'contextmenu') {
@@ -262,6 +305,19 @@
           })
         })
       },
+      _updateParallelStatus () {
+        this.enableCustomParallelism = false
+        this.parallismNumber = null
+      },
+      _updateEnableCustomParallel () {
+        if (!this.enableCustomParallelism) {
+          this.parallismNumber = null
+        } else {
+          this.$nextTick(() => {
+            this.$refs.parallelismInput.focus()
+          })
+        }
+      },
       _getGlobalParams () {
        this.store.dispatch('dag/getProcessDetails', this.startData.id).then(res => {
          this.definitionGlobalParams = _.cloneDeep(this.store.state.dag.globalParams)
@@ -325,6 +381,14 @@
         display: block;
       }
     }
+
+    .ans {
+      color: #0097e0;
+      font-size: 14px;
+      vertical-align: middle;
+      cursor: pointer;
+    }
+
     .list {
       margin-bottom: 14px;
       .text {
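The `_verification` method added in this start.vue change accepts a custom parallelism only when it matches `/^[1-9]\d*$/`, i.e. a positive integer with no leading zero. The same check rendered in Java for clarity (class and method names are illustrative, not part of the patch):

```java
import java.util.regex.Pattern;

// Java rendering of start.vue's parallelism validation: positive integers
// only, rejecting zero, negatives, leading zeros, and non-numeric input.
class ParallelismCheck {
    private static final Pattern POSITIVE_INT = Pattern.compile("^[1-9]\\d*$");

    static boolean isValid(String input) {
        return input != null && POSITIVE_INT.matcher(input).matches();
    }
}
```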
diff --git a/dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js b/dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js
index b404564..ac5be37 100755
--- a/dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js
+++ b/dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js
@@ -125,7 +125,11 @@ export default {
   'Slot Number': 'Slot Number',
   'Please enter Slot number': 'Please enter Slot number',
   Parallelism: 'Parallelism',
+  'Custom Parallelism': 'Configure parallelism',
   'Please enter Parallelism': 'Please enter Parallelism',
+  'Parallelism tip': 'If there are a large number of tasks requiring complement, you can use the custom parallelism to ' +
+  'set the complement task thread to a reasonable value to avoid too large impact on the server.',
+  'Parallelism number should be positive integer': 'Parallelism number should be positive integer',
   'TaskManager Number': 'TaskManager Number',
   'Please enter TaskManager number': 'Please enter TaskManager number',
   'App Name': 'App Name',
diff --git a/dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js b/dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js
index 8016258..6cdae3f 100755
--- a/dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js
+++ b/dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js
@@ -125,7 +125,10 @@ export default {
   'Slot Number': 'Slot数量',
   'Please enter Slot number': '请输入Slot数量',
   Parallelism: '并行度',
+  'Custom Parallelism': '自定义并行度',
   'Please enter Parallelism': '请输入并行度',
+  'Parallelism number should be positive integer': '并行度必须为正整数',
+  'Parallelism tip': '如果存在大量任务需要补数时,可以利用自定义并行度将补数的任务线程设置成合理的数值,避免对服务器造成过大的影响',
   'TaskManager Number': 'TaskManager数量',
   'Please enter TaskManager number': '请输入TaskManager数量',
   'App Name': '任务名称',
diff --git a/install.sh b/install.sh
deleted file mode 100755
index 3bf4940..0000000
--- a/install.sh
+++ /dev/null
@@ -1,103 +0,0 @@
-#!/bin/sh
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-workDir=`dirname $0`
-workDir=`cd ${workDir};pwd`
-
-source ${workDir}/conf/config/install_config.conf
-
-# 1.replace file
-echo "1.replace file"
-
-txt=""
-if [[ "$OSTYPE" == "darwin"* ]]; then
-    # Mac OSX
-    txt="''"
-fi
-
-datasourceDriverClassname="com.mysql.jdbc.Driver"
-if [ $dbtype == "postgresql" ];then
-  datasourceDriverClassname="org.postgresql.Driver"
-fi
-sed -i ${txt} "s@^spring.datasource.driver-class-name=.*@spring.datasource.driver-class-name=${datasourceDriverClassname}@g" conf/datasource.properties
-sed -i ${txt} "s@^spring.datasource.url=.*@spring.datasource.url=jdbc:${dbtype}://${dbhost}/${dbname}?characterEncoding=UTF-8\&allowMultiQueries=true@g" conf/datasource.properties
-sed -i ${txt} "s@^spring.datasource.username=.*@spring.datasource.username=${username}@g" conf/datasource.properties
-sed -i ${txt} "s@^spring.datasource.password=.*@spring.datasource.password=${password}@g" conf/datasource.properties
-
-sed -i ${txt} "s@^#\?zookeeper.quorum=.*@zookeeper.quorum=${zkQuorum}@g" conf/zookeeper.properties
-sed -i ${txt} "s@^#\?zookeeper.dolphinscheduler.root=.*@zookeeper.dolphinscheduler.root=${zkRoot}@g" conf/zookeeper.properties
-
-sed -i ${txt} "s@^data.basedir.path=.*@data.basedir.path=${dataBasedirPath}@g" conf/common.properties
-sed -i ${txt} "s@^resource.storage.type=.*@resource.storage.type=${resourceStorageType}@g" conf/common.properties
-sed -i ${txt} "s@^resource.upload.path=.*@resource.upload.path=${resourceUploadPath}@g" conf/common.properties
-sed -i ${txt} "s@^hadoop.security.authentication.startup.state=.*@hadoop.security.authentication.startup.state=${kerberosStartUp}@g" conf/common.properties
-sed -i ${txt} "s@^java.security.krb5.conf.path=.*@java.security.krb5.conf.path=${krb5ConfPath}@g" conf/common.properties
-sed -i ${txt} "s@^login.user.keytab.username=.*@login.user.keytab.username=${keytabUserName}@g" conf/common.properties
-sed -i ${txt} "s@^login.user.keytab.path=.*@login.user.keytab.path=${keytabPath}@g" conf/common.properties
-sed -i ${txt} "s@^kerberos.expire.time=.*@kerberos.expire.time=${kerberosExpireTime}@g" conf/common.properties
-sed -i ${txt} "s@^hdfs.root.user=.*@hdfs.root.user=${hdfsRootUser}@g" conf/common.properties
-sed -i ${txt} "s@^fs.defaultFS=.*@fs.defaultFS=${defaultFS}@g" conf/common.properties
-sed -i ${txt} "s@^fs.s3a.endpoint=.*@fs.s3a.endpoint=${s3Endpoint}@g" conf/common.properties
-sed -i ${txt} "s@^fs.s3a.access.key=.*@fs.s3a.access.key=${s3AccessKey}@g" conf/common.properties
-sed -i ${txt} "s@^fs.s3a.secret.key=.*@fs.s3a.secret.key=${s3SecretKey}@g" conf/common.properties
-sed -i ${txt} "s@^resource.manager.httpaddress.port=.*@resource.manager.httpaddress.port=${resourceManagerHttpAddressPort}@g" conf/common.properties
-sed -i ${txt} "s@^yarn.resourcemanager.ha.rm.ids=.*@yarn.resourcemanager.ha.rm.ids=${yarnHaIps}@g" conf/common.properties
-sed -i ${txt} "s@^yarn.application.status.address=.*@yarn.application.status.address=http://${singleYarnIp}:%s/ws/v1/cluster/apps/%s@g" conf/common.properties
-sed -i ${txt} "s@^yarn.job.history.status.address=.*@yarn.job.history.status.address=http://${singleYarnIp}:19888/ws/v1/history/mapreduce/jobs/%s@g" conf/common.properties
-sed -i ${txt} "s@^sudo.enable=.*@sudo.enable=${sudoEnable}@g" conf/common.properties
-
-# the following configurations may be commented, so add #\? to ensure successful sed
-sed -i ${txt} "s@^#\?worker.tenant.auto.create=.*@worker.tenant.auto.create=${workerTenantAutoCreate}@g" conf/worker.properties
-sed -i ${txt} "s@^#\?alert.listen.host=.*@alert.listen.host=${alertServer}@g" conf/worker.properties
-sed -i ${txt} "s@^#\?alert.plugin.dir=.*@alert.plugin.dir=${alertPluginDir}@g" conf/alert.properties
-sed -i ${txt} "s@^#\?server.port=.*@server.port=${apiServerPort}@g" conf/application-api.properties
-
-# 2.create directory
-echo "2.create directory"
-
-if [ ! -d $installPath ];then
-  sudo mkdir -p $installPath
-  sudo chown -R $deployUser:$deployUser $installPath
-fi
-
-# 3.scp resources
-echo "3.scp resources"
-sh ${workDir}/script/scp-hosts.sh
-if [ $? -eq 0 ]
-then
-       echo 'scp copy completed'
-else
-       echo 'scp copy failed to exit'
-       exit 1
-fi
-
-
-# 4.stop server
-echo "4.stop server"
-sh ${workDir}/script/stop-all.sh
-
-
-# 5.delete zk node
-echo "5.delete zk node"
-
-sh ${workDir}/script/remove-zk-node.sh $zkRoot
-
-
-# 6.startup
-echo "6.startup"
-sh ${workDir}/script/start-all.sh
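
[Editor's note: the deleted install.sh above leans on an unquoted ${txt} variable to bridge the BSD/macOS vs GNU `sed -i` incompatibility. A minimal standalone sketch of that pattern follows; the file path /tmp/demo.properties is an illustrative stand-in for conf/datasource.properties, not part of the commit.]

```shell
# Sketch of the portable in-place `sed` pattern the deleted install.sh used.
# BSD/macOS sed requires an explicit backup-suffix argument after -i, while
# GNU sed does not; the script adds an extra argument only on macOS.
txt=""
if [[ "$OSTYPE" == "darwin"* ]]; then
    # Mac OSX: supply a (quoted-empty) backup suffix for BSD sed's -i flag
    txt="''"
fi

echo "spring.datasource.username=old" > /tmp/demo.properties
# ${txt} is deliberately left unquoted so it expands to nothing on Linux
sed -i ${txt} "s@^spring.datasource.username=.*@spring.datasource.username=root@g" /tmp/demo.properties
cat /tmp/demo.properties   # with GNU sed this prints: spring.datasource.username=root
```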
diff --git a/pom.xml b/pom.xml
index ec7867a..4b47e8f 100644
--- a/pom.xml
+++ b/pom.xml
@@ -97,7 +97,7 @@
         <mssql.jdbc.version>6.1.0.jre8</mssql.jdbc.version>
         <presto.jdbc.version>0.238.1</presto.jdbc.version>
         <spotbugs.version>3.1.12</spotbugs.version>
-        <checkstyle.version>3.0.0</checkstyle.version>
+        <checkstyle.version>3.1.2</checkstyle.version>
         <zookeeper.version>3.4.14</zookeeper.version>
         <curator.test>2.12.0</curator.test>
         <frontend-maven-plugin.version>1.6</frontend-maven-plugin.version>
@@ -208,6 +208,11 @@
             </dependency>
             <dependency>
                 <groupId>org.apache.dolphinscheduler</groupId>
+                <artifactId>dolphinscheduler-standalone-server</artifactId>
+                <version>${project.version}</version>
+            </dependency>
+            <dependency>
+                <groupId>org.apache.dolphinscheduler</groupId>
                 <artifactId>dolphinscheduler-common</artifactId>
                 <version>${project.version}</version>
             </dependency>
@@ -310,7 +315,6 @@
                 <groupId>org.apache.curator</groupId>
                 <artifactId>curator-test</artifactId>
                 <version>${curator.test}</version>
-                <scope>test</scope>
             </dependency>
             <dependency>
                 <groupId>commons-codec</groupId>
@@ -661,7 +665,6 @@
                 <artifactId>javax.mail</artifactId>
                 <version>1.6.2</version>
             </dependency>
-
         </dependencies>
     </dependencyManagement>
 
@@ -900,7 +903,6 @@
                         <include>**/api/utils/ResultTest.java</include>
                         <include>**/common/graph/DAGTest.java</include>
                         <include>**/common/os/OshiTest.java</include>
-                        <include>**/common/os/OSUtilsTest.java</include>
                        <include>**/common/shell/ShellExecutorTest.java</include>
                        <include>**/common/task/DataxParametersTest.java</include>
                        <include>**/common/task/EntityTestUtils.java</include>
@@ -920,7 +922,6 @@
                         <include>**/common/utils/JSONUtilsTest.java</include>
                         <include>**/common/utils/LoggerUtilsTest.java</include>
                         <include>**/common/utils/NetUtilsTest.java</include>
-                        <include>**/common/utils/OSUtilsTest.java</include>
                        <include>**/common/utils/ParameterUtilsTest.java</include>
                        <include>**/common/utils/TimePlaceholderUtilsTest.java</include>
                        <include>**/common/utils/PreconditionsTest.java</include>
@@ -1066,7 +1067,6 @@
                        <include>**/plugin/alert/email/EmailAlertChannelFactoryTest.java</include>
                        <include>**/plugin/alert/email/EmailAlertChannelTest.java</include>
                        <include>**/plugin/alert/email/ExcelUtilsTest.java</include>
-                        <include>**/plugin/alert/email/MailUtilsTest.java</include>
                        <include>**/plugin/alert/email/template/DefaultHTMLTemplateTest.java</include>
                        <include>**/plugin/alert/dingtalk/DingTalkSenderTest.java</include>
                        <include>**/plugin/alert/dingtalk/DingTalkAlertChannelFactoryTest.java</include>
@@ -1154,15 +1154,13 @@
                     <dependency>
                         <groupId>com.puppycrawl.tools</groupId>
                         <artifactId>checkstyle</artifactId>
-                        <version>8.18</version>
+                        <version>8.45</version>
                     </dependency>
                 </dependencies>
                 <configuration>
                     <consoleOutput>true</consoleOutput>
                     <encoding>UTF-8</encoding>
                     <configLocation>style/checkstyle.xml</configLocation>
-                    <suppressionsLocation>style/checkstyle-suppressions.xml</suppressionsLocation>
-                    <suppressionsFileExpression>checkstyle.suppressions.file</suppressionsFileExpression>
                     <failOnViolation>true</failOnViolation>
                    <violationSeverity>warning</violationSeverity>
                    <includeTestSourceDirectory>true</includeTestSourceDirectory>
@@ -1170,7 +1168,6 @@
                        <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory>
                     </sourceDirectories>
                     <excludes>**\/generated-sources\/</excludes>
-                    <skip>true</skip>
                 </configuration>
                 <executions>
                     <execution>
@@ -1216,5 +1213,6 @@
         <module>dolphinscheduler-remote</module>
         <module>dolphinscheduler-service</module>
         <module>dolphinscheduler-microbench</module>
+        <module>dolphinscheduler-standalone-server</module>
     </modules>
 </project>
diff --git a/script/dolphinscheduler-daemon.sh b/script/dolphinscheduler-daemon.sh
index cf3aeeb..81af5fd 100755
--- a/script/dolphinscheduler-daemon.sh
+++ b/script/dolphinscheduler-daemon.sh
@@ -16,7 +16,7 @@
 # limitations under the License.
 #
 
-usage="Usage: dolphinscheduler-daemon.sh (start|stop|status) <api-server|master-server|worker-server|alert-server> "
+usage="Usage: dolphinscheduler-daemon.sh (start|stop|status) <api-server|master-server|worker-server|alert-server|standalone-server> "
 
 # if no args specified, show usage
 if [ $# -le 1 ]; then
@@ -87,6 +87,8 @@ elif [ "$command" = "zookeeper-server" ]; then
  #note: this command just for getting a quick experience,not recommended for production. this operation will start a standalone zookeeper server
   LOG_FILE="-Dlogback.configurationFile=classpath:logback-zookeeper.xml"
   CLASS=org.apache.dolphinscheduler.service.zk.ZKServer
+elif [ "$command" = "standalone-server" ]; then
+  CLASS=org.apache.dolphinscheduler.server.StandaloneServer
 else
   echo "Error: No command named '$command' was found."
   exit 1
@@ -159,4 +161,4 @@ case $startStop in
 
 esac
 
-echo "End $startStop $command."
\ No newline at end of file
+echo "End $startStop $command."
diff --git a/sql/create/release-1.0.0_schema/mysql/dolphinscheduler_ddl.sql b/sql/create/release-1.0.0_schema/mysql/dolphinscheduler_ddl.sql
index ac0851a..f82bb75 100644
--- a/sql/create/release-1.0.0_schema/mysql/dolphinscheduler_ddl.sql
+++ b/sql/create/release-1.0.0_schema/mysql/dolphinscheduler_ddl.sql
@@ -113,9 +113,9 @@ CREATE TABLE `t_escheduler_master_server` (
   `host` varchar(45) DEFAULT NULL COMMENT 'ip',
   `port` int(11) DEFAULT NULL COMMENT 'port',
  `zk_directory` varchar(64) DEFAULT NULL COMMENT 'the server path in zk directory',
-  `res_info` varchar(256) DEFAULT NULL COMMENT 'json resource information:{"cpu":xxx,"memroy":xxx}',
+  `res_info` varchar(255) DEFAULT NULL COMMENT 'json resource information:{"cpu":xxx,"memory":xxx}',
   `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `last_heartbeat_time` datetime DEFAULT NULL COMMENT 'last head beat time',
+  `last_heartbeat_time` datetime DEFAULT NULL COMMENT 'last heart beat time',
   PRIMARY KEY (`id`)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
 
diff --git a/sql/dolphinscheduler_h2.sql b/sql/dolphinscheduler_h2.sql
new file mode 100644
index 0000000..a550416
--- /dev/null
+++ b/sql/dolphinscheduler_h2.sql
@@ -0,0 +1,943 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+*/
+
+SET FOREIGN_KEY_CHECKS=0;
+
+-- ----------------------------
+-- Table structure for QRTZ_JOB_DETAILS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_JOB_DETAILS;
+CREATE TABLE QRTZ_JOB_DETAILS (
+                                  SCHED_NAME varchar(120) NOT NULL,
+                                  JOB_NAME varchar(200) NOT NULL,
+                                  JOB_GROUP varchar(200) NOT NULL,
+                                  DESCRIPTION varchar(250) DEFAULT NULL,
+                                  JOB_CLASS_NAME varchar(250) NOT NULL,
+                                  IS_DURABLE varchar(1) NOT NULL,
+                                  IS_NONCONCURRENT varchar(1) NOT NULL,
+                                  IS_UPDATE_DATA varchar(1) NOT NULL,
+                                  REQUESTS_RECOVERY varchar(1) NOT NULL,
+                                  JOB_DATA blob,
+                                  PRIMARY KEY (SCHED_NAME,JOB_NAME,JOB_GROUP)
+);
+
+-- ----------------------------
+-- Table structure for QRTZ_TRIGGERS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_TRIGGERS;
+CREATE TABLE QRTZ_TRIGGERS (
+                               SCHED_NAME varchar(120) NOT NULL,
+                               TRIGGER_NAME varchar(200) NOT NULL,
+                               TRIGGER_GROUP varchar(200) NOT NULL,
+                               JOB_NAME varchar(200) NOT NULL,
+                               JOB_GROUP varchar(200) NOT NULL,
+                               DESCRIPTION varchar(250) DEFAULT NULL,
+                               NEXT_FIRE_TIME bigint(13) DEFAULT NULL,
+                               PREV_FIRE_TIME bigint(13) DEFAULT NULL,
+                               PRIORITY int(11) DEFAULT NULL,
+                               TRIGGER_STATE varchar(16) NOT NULL,
+                               TRIGGER_TYPE varchar(8) NOT NULL,
+                               START_TIME bigint(13) NOT NULL,
+                               END_TIME bigint(13) DEFAULT NULL,
+                               CALENDAR_NAME varchar(200) DEFAULT NULL,
+                               MISFIRE_INSTR smallint(2) DEFAULT NULL,
+                               JOB_DATA blob,
+                               PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
+                               CONSTRAINT QRTZ_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, JOB_NAME, JOB_GROUP) REFERENCES QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP)
+);
+
+-- ----------------------------
+-- Table structure for QRTZ_BLOB_TRIGGERS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS;
+CREATE TABLE QRTZ_BLOB_TRIGGERS (
+  SCHED_NAME varchar(120) NOT NULL,
+  TRIGGER_NAME varchar(200) NOT NULL,
+  TRIGGER_GROUP varchar(200) NOT NULL,
+  BLOB_DATA blob,
+  PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
+  FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
+);
+
+-- ----------------------------
+-- Records of QRTZ_BLOB_TRIGGERS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_CALENDARS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_CALENDARS;
+CREATE TABLE QRTZ_CALENDARS (
+  SCHED_NAME varchar(120) NOT NULL,
+  CALENDAR_NAME varchar(200) NOT NULL,
+  CALENDAR blob NOT NULL,
+  PRIMARY KEY (SCHED_NAME,CALENDAR_NAME)
+);
+
+-- ----------------------------
+-- Records of QRTZ_CALENDARS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_CRON_TRIGGERS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS;
+CREATE TABLE QRTZ_CRON_TRIGGERS (
+  SCHED_NAME varchar(120) NOT NULL,
+  TRIGGER_NAME varchar(200) NOT NULL,
+  TRIGGER_GROUP varchar(200) NOT NULL,
+  CRON_EXPRESSION varchar(120) NOT NULL,
+  TIME_ZONE_ID varchar(80) DEFAULT NULL,
+  PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
+  CONSTRAINT QRTZ_CRON_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
+);
+
+-- ----------------------------
+-- Records of QRTZ_CRON_TRIGGERS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_FIRED_TRIGGERS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS;
+CREATE TABLE QRTZ_FIRED_TRIGGERS (
+  SCHED_NAME varchar(120) NOT NULL,
+  ENTRY_ID varchar(200) NOT NULL,
+  TRIGGER_NAME varchar(200) NOT NULL,
+  TRIGGER_GROUP varchar(200) NOT NULL,
+  INSTANCE_NAME varchar(200) NOT NULL,
+  FIRED_TIME bigint(13) NOT NULL,
+  SCHED_TIME bigint(13) NOT NULL,
+  PRIORITY int(11) NOT NULL,
+  STATE varchar(16) NOT NULL,
+  JOB_NAME varchar(200) DEFAULT NULL,
+  JOB_GROUP varchar(200) DEFAULT NULL,
+  IS_NONCONCURRENT varchar(1) DEFAULT NULL,
+  REQUESTS_RECOVERY varchar(1) DEFAULT NULL,
+  PRIMARY KEY (SCHED_NAME,ENTRY_ID)
+);
+
+-- ----------------------------
+-- Records of QRTZ_FIRED_TRIGGERS
+-- ----------------------------
+
+-- ----------------------------
+-- Records of QRTZ_JOB_DETAILS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_LOCKS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_LOCKS;
+CREATE TABLE QRTZ_LOCKS (
+  SCHED_NAME varchar(120) NOT NULL,
+  LOCK_NAME varchar(40) NOT NULL,
+  PRIMARY KEY (SCHED_NAME,LOCK_NAME)
+);
+
+-- ----------------------------
+-- Records of QRTZ_LOCKS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_PAUSED_TRIGGER_GRPS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS;
+CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS (
+  SCHED_NAME varchar(120) NOT NULL,
+  TRIGGER_GROUP varchar(200) NOT NULL,
+  PRIMARY KEY (SCHED_NAME,TRIGGER_GROUP)
+);
+
+-- ----------------------------
+-- Records of QRTZ_PAUSED_TRIGGER_GRPS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_SCHEDULER_STATE
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE;
+CREATE TABLE QRTZ_SCHEDULER_STATE (
+  SCHED_NAME varchar(120) NOT NULL,
+  INSTANCE_NAME varchar(200) NOT NULL,
+  LAST_CHECKIN_TIME bigint(13) NOT NULL,
+  CHECKIN_INTERVAL bigint(13) NOT NULL,
+  PRIMARY KEY (SCHED_NAME,INSTANCE_NAME)
+);
+
+-- ----------------------------
+-- Records of QRTZ_SCHEDULER_STATE
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_SIMPLE_TRIGGERS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS;
+CREATE TABLE QRTZ_SIMPLE_TRIGGERS (
+  SCHED_NAME varchar(120) NOT NULL,
+  TRIGGER_NAME varchar(200) NOT NULL,
+  TRIGGER_GROUP varchar(200) NOT NULL,
+  REPEAT_COUNT bigint(7) NOT NULL,
+  REPEAT_INTERVAL bigint(12) NOT NULL,
+  TIMES_TRIGGERED bigint(10) NOT NULL,
+  PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
+  CONSTRAINT QRTZ_SIMPLE_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
+);
+
+-- ----------------------------
+-- Records of QRTZ_SIMPLE_TRIGGERS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for QRTZ_SIMPROP_TRIGGERS
+-- ----------------------------
+DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS;
+CREATE TABLE QRTZ_SIMPROP_TRIGGERS (
+  SCHED_NAME varchar(120) NOT NULL,
+  TRIGGER_NAME varchar(200) NOT NULL,
+  TRIGGER_GROUP varchar(200) NOT NULL,
+  STR_PROP_1 varchar(512) DEFAULT NULL,
+  STR_PROP_2 varchar(512) DEFAULT NULL,
+  STR_PROP_3 varchar(512) DEFAULT NULL,
+  INT_PROP_1 int(11) DEFAULT NULL,
+  INT_PROP_2 int(11) DEFAULT NULL,
+  LONG_PROP_1 bigint(20) DEFAULT NULL,
+  LONG_PROP_2 bigint(20) DEFAULT NULL,
+  DEC_PROP_1 decimal(13,4) DEFAULT NULL,
+  DEC_PROP_2 decimal(13,4) DEFAULT NULL,
+  BOOL_PROP_1 varchar(1) DEFAULT NULL,
+  BOOL_PROP_2 varchar(1) DEFAULT NULL,
+  PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
+  CONSTRAINT QRTZ_SIMPROP_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP)
+);
+
+-- ----------------------------
+-- Records of QRTZ_SIMPROP_TRIGGERS
+-- ----------------------------
+
+-- ----------------------------
+-- Records of QRTZ_TRIGGERS
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_access_token
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_access_token;
+CREATE TABLE t_ds_access_token (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  user_id int(11) DEFAULT NULL,
+  token varchar(64) DEFAULT NULL,
+  expire_time datetime DEFAULT NULL,
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+);
+
+-- ----------------------------
+-- Records of t_ds_access_token
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_alert
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_alert;
+CREATE TABLE t_ds_alert (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  title varchar(64) DEFAULT NULL,
+  content text,
+  alert_status tinyint(4) DEFAULT '0',
+  log text,
+  alertgroup_id int(11) DEFAULT NULL,
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_alert
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_alertgroup
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_alertgroup;
+CREATE TABLE t_ds_alertgroup(
+  id             int(11) NOT NULL AUTO_INCREMENT,
+  alert_instance_ids varchar (255) DEFAULT NULL,
+  create_user_id int(11) DEFAULT NULL,
+  group_name     varchar(255) DEFAULT NULL,
+  description    varchar(255) DEFAULT NULL,
+  create_time    datetime     DEFAULT NULL,
+  update_time    datetime     DEFAULT NULL,
+  PRIMARY KEY (id),
+  UNIQUE KEY t_ds_alertgroup_name_un (group_name)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_alertgroup
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_command
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_command;
+CREATE TABLE t_ds_command (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  command_type tinyint(4) DEFAULT NULL,
+  process_definition_id int(11) DEFAULT NULL,
+  command_param text,
+  task_depend_type tinyint(4) DEFAULT NULL,
+  failure_strategy tinyint(4) DEFAULT '0',
+  warning_type tinyint(4) DEFAULT '0',
+  warning_group_id int(11) DEFAULT NULL,
+  schedule_time datetime DEFAULT NULL,
+  start_time datetime DEFAULT NULL,
+  executor_id int(11) DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  process_instance_priority int(11) DEFAULT NULL,
+  worker_group varchar(64) ,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_command
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_datasource
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_datasource;
+CREATE TABLE t_ds_datasource (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  name varchar(64) NOT NULL,
+  note varchar(255) DEFAULT NULL,
+  type tinyint(4) NOT NULL,
+  user_id int(11) NOT NULL,
+  connection_params text NOT NULL,
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id),
+  UNIQUE KEY t_ds_datasource_name_un (name, type)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_datasource
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_error_command
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_error_command;
+CREATE TABLE t_ds_error_command (
+  id int(11) NOT NULL,
+  command_type tinyint(4) DEFAULT NULL,
+  executor_id int(11) DEFAULT NULL,
+  process_definition_id int(11) DEFAULT NULL,
+  command_param text,
+  task_depend_type tinyint(4) DEFAULT NULL,
+  failure_strategy tinyint(4) DEFAULT '0',
+  warning_type tinyint(4) DEFAULT '0',
+  warning_group_id int(11) DEFAULT NULL,
+  schedule_time datetime DEFAULT NULL,
+  start_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  process_instance_priority int(11) DEFAULT NULL,
+  worker_group varchar(64) ,
+  message text,
+  PRIMARY KEY (id)
+);
+
+-- ----------------------------
+-- Records of t_ds_error_command
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_process_definition
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_process_definition;
+CREATE TABLE t_ds_process_definition (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  code bigint(20) NOT NULL,
+  name varchar(255) DEFAULT NULL,
+  version int(11) DEFAULT NULL,
+  description text,
+  project_code bigint(20) NOT NULL,
+  release_state tinyint(4) DEFAULT NULL,
+  user_id int(11) DEFAULT NULL,
+  global_params text,
+  flag tinyint(4) DEFAULT NULL,
+  locations text,
+  connects text,
+  warning_group_id int(11) DEFAULT NULL,
+  timeout int(11) DEFAULT '0',
+  tenant_id int(11) NOT NULL DEFAULT '-1',
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id),
+  UNIQUE KEY process_unique (name,project_code) USING BTREE,
+  UNIQUE KEY code_unique (code)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_process_definition
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_process_definition_log
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_process_definition_log;
+CREATE TABLE t_ds_process_definition_log (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  code bigint(20) NOT NULL,
+  name varchar(200) DEFAULT NULL,
+  version int(11) DEFAULT NULL,
+  description text,
+  project_code bigint(20) NOT NULL,
+  release_state tinyint(4) DEFAULT NULL,
+  user_id int(11) DEFAULT NULL,
+  global_params text,
+  flag tinyint(4) DEFAULT NULL,
+  locations text,
+  connects text,
+  warning_group_id int(11) DEFAULT NULL,
+  timeout int(11) DEFAULT '0',
+  tenant_id int(11) NOT NULL DEFAULT '-1',
+  operator int(11) DEFAULT NULL,
+  operate_time datetime DEFAULT NULL,
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Table structure for t_ds_task_definition
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_task_definition;
+CREATE TABLE t_ds_task_definition (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  code bigint(20) NOT NULL,
+  name varchar(200) DEFAULT NULL,
+  version int(11) DEFAULT NULL,
+  description text,
+  project_code bigint(20) NOT NULL,
+  user_id int(11) DEFAULT NULL,
+  task_type varchar(50) NOT NULL,
+  task_params longtext,
+  flag tinyint(2) DEFAULT NULL,
+  task_priority tinyint(4) DEFAULT NULL,
+  worker_group varchar(200) DEFAULT NULL,
+  fail_retry_times int(11) DEFAULT NULL,
+  fail_retry_interval int(11) DEFAULT NULL,
+  timeout_flag tinyint(2) DEFAULT '0',
+  timeout_notify_strategy tinyint(4) DEFAULT NULL,
+  timeout int(11) DEFAULT '0',
+  delay_time int(11) DEFAULT '0',
+  resource_ids varchar(255) DEFAULT NULL,
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id,code),
+  UNIQUE KEY task_unique (name,project_code) USING BTREE
+) ;
+
+-- ----------------------------
+-- Table structure for t_ds_task_definition_log
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_task_definition_log;
+CREATE TABLE t_ds_task_definition_log (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  code bigint(20) NOT NULL,
+  name varchar(200) DEFAULT NULL,
+  version int(11) DEFAULT NULL,
+  description text,
+  project_code bigint(20) NOT NULL,
+  user_id int(11) DEFAULT NULL,
+  task_type varchar(50) NOT NULL,
+  task_params text,
+  flag tinyint(2) DEFAULT NULL,
+  task_priority tinyint(4) DEFAULT NULL,
+  worker_group varchar(200) DEFAULT NULL,
+  fail_retry_times int(11) DEFAULT NULL,
+  fail_retry_interval int(11) DEFAULT NULL,
+  timeout_flag tinyint(2) DEFAULT '0',
+  timeout_notify_strategy tinyint(4) DEFAULT NULL,
+  timeout int(11) DEFAULT '0',
+  delay_time int(11) DEFAULT '0',
+  resource_ids varchar(255) DEFAULT NULL,
+  operator int(11) DEFAULT NULL,
+  operate_time datetime DEFAULT NULL,
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Table structure for t_ds_process_task_relation
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_process_task_relation;
+CREATE TABLE t_ds_process_task_relation (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  name varchar(200) DEFAULT NULL,
+  process_definition_version int(11) DEFAULT NULL,
+  project_code bigint(20) NOT NULL,
+  process_definition_code bigint(20) NOT NULL,
+  pre_task_code bigint(20) NOT NULL,
+  pre_task_version int(11) NOT NULL,
+  post_task_code bigint(20) NOT NULL,
+  post_task_version int(11) NOT NULL,
+  condition_type tinyint(2) DEFAULT NULL,
+  condition_params text,
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Table structure for t_ds_process_task_relation_log
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_process_task_relation_log;
+CREATE TABLE t_ds_process_task_relation_log (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  name varchar(200) DEFAULT NULL,
+  process_definition_version int(11) DEFAULT NULL,
+  project_code bigint(20) NOT NULL,
+  process_definition_code bigint(20) NOT NULL,
+  pre_task_code bigint(20) NOT NULL,
+  pre_task_version int(11) NOT NULL,
+  post_task_code bigint(20) NOT NULL,
+  post_task_version int(11) NOT NULL,
+  condition_type tinyint(2) DEFAULT NULL,
+  condition_params text,
+  operator int(11) DEFAULT NULL,
+  operate_time datetime DEFAULT NULL,
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Table structure for t_ds_process_instance
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_process_instance;
+CREATE TABLE t_ds_process_instance (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  name varchar(255) DEFAULT NULL,
+  process_definition_version int(11) DEFAULT NULL,
+  process_definition_code bigint(20) NOT NULL,
+  state tinyint(4) DEFAULT NULL,
+  recovery tinyint(4) DEFAULT NULL,
+  start_time datetime DEFAULT NULL,
+  end_time datetime DEFAULT NULL,
+  run_times int(11) DEFAULT NULL,
+  host varchar(135) DEFAULT NULL,
+  command_type tinyint(4) DEFAULT NULL,
+  command_param text,
+  task_depend_type tinyint(4) DEFAULT NULL,
+  max_try_times tinyint(4) DEFAULT '0',
+  failure_strategy tinyint(4) DEFAULT '0',
+  warning_type tinyint(4) DEFAULT '0',
+  warning_group_id int(11) DEFAULT NULL,
+  schedule_time datetime DEFAULT NULL,
+  command_start_time datetime DEFAULT NULL,
+  global_params text,
+  flag tinyint(4) DEFAULT '1',
+  update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+  is_sub_process int(11) DEFAULT '0',
+  executor_id int(11) NOT NULL,
+  history_cmd text,
+  process_instance_priority int(11) DEFAULT NULL,
+  worker_group varchar(64) DEFAULT NULL,
+  timeout int(11) DEFAULT '0',
+  tenant_id int(11) NOT NULL DEFAULT '-1',
+  var_pool longtext,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_process_instance
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_project
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_project;
+CREATE TABLE t_ds_project (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  name varchar(100) DEFAULT NULL,
+  code bigint(20) NOT NULL,
+  description varchar(200) DEFAULT NULL,
+  user_id int(11) DEFAULT NULL,
+  flag tinyint(4) DEFAULT '1',
+  create_time datetime NOT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_project
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_queue
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_queue;
+CREATE TABLE t_ds_queue (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  queue_name varchar(64) DEFAULT NULL,
+  queue varchar(64) DEFAULT NULL,
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_queue
+-- ----------------------------
+INSERT INTO t_ds_queue VALUES ('1', 'default', 'default', null, null);
+
+-- ----------------------------
+-- Table structure for t_ds_relation_datasource_user
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_relation_datasource_user;
+CREATE TABLE t_ds_relation_datasource_user (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  user_id int(11) NOT NULL,
+  datasource_id int(11) DEFAULT NULL,
+  perm int(11) DEFAULT '1',
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_relation_datasource_user
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_relation_process_instance
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_relation_process_instance;
+CREATE TABLE t_ds_relation_process_instance (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  parent_process_instance_id int(11) DEFAULT NULL,
+  parent_task_instance_id int(11) DEFAULT NULL,
+  process_instance_id int(11) DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_relation_process_instance
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_relation_project_user
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_relation_project_user;
+CREATE TABLE t_ds_relation_project_user (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  user_id int(11) NOT NULL,
+  project_id int(11) DEFAULT NULL,
+  perm int(11) DEFAULT '1',
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_relation_project_user
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_relation_resources_user
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_relation_resources_user;
+CREATE TABLE t_ds_relation_resources_user (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  user_id int(11) NOT NULL,
+  resources_id int(11) DEFAULT NULL,
+  perm int(11) DEFAULT '1',
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_relation_resources_user
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_relation_udfs_user
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_relation_udfs_user;
+CREATE TABLE t_ds_relation_udfs_user (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  user_id int(11) NOT NULL,
+  udf_id int(11) DEFAULT NULL,
+  perm int(11) DEFAULT '1',
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Table structure for t_ds_resources
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_resources;
+CREATE TABLE t_ds_resources (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  alias varchar(64) DEFAULT NULL,
+  file_name varchar(64) DEFAULT NULL,
+  description varchar(255) DEFAULT NULL,
+  user_id int(11) DEFAULT NULL,
+  type tinyint(4) DEFAULT NULL,
+  size bigint(20) DEFAULT NULL,
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  pid int(11) DEFAULT NULL,
+  full_name varchar(64) DEFAULT NULL,
+  is_directory tinyint(4) DEFAULT NULL,
+  PRIMARY KEY (id),
+  UNIQUE KEY t_ds_resources_un (full_name,type)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_resources
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_schedules
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_schedules;
+CREATE TABLE t_ds_schedules (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  process_definition_id int(11) NOT NULL,
+  start_time datetime NOT NULL,
+  end_time datetime NOT NULL,
+  timezone_id varchar(40) DEFAULT NULL,
+  crontab varchar(255) NOT NULL,
+  failure_strategy tinyint(4) NOT NULL,
+  user_id int(11) NOT NULL,
+  release_state tinyint(4) NOT NULL,
+  warning_type tinyint(4) NOT NULL,
+  warning_group_id int(11) DEFAULT NULL,
+  process_instance_priority int(11) DEFAULT NULL,
+  worker_group varchar(64) DEFAULT '',
+  create_time datetime NOT NULL,
+  update_time datetime NOT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_schedules
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_session
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_session;
+CREATE TABLE t_ds_session (
+  id varchar(64) NOT NULL,
+  user_id int(11) DEFAULT NULL,
+  ip varchar(45) DEFAULT NULL,
+  last_login_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+);
+
+-- ----------------------------
+-- Records of t_ds_session
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_task_instance
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_task_instance;
+CREATE TABLE t_ds_task_instance (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  name varchar(255) DEFAULT NULL,
+  task_type varchar(50) NOT NULL,
+  task_code bigint(20) NOT NULL,
+  task_definition_version int(11) DEFAULT NULL,
+  process_instance_id int(11) DEFAULT NULL,
+  state tinyint(4) DEFAULT NULL,
+  submit_time datetime DEFAULT NULL,
+  start_time datetime DEFAULT NULL,
+  end_time datetime DEFAULT NULL,
+  host varchar(135) DEFAULT NULL,
+  execute_path varchar(200) DEFAULT NULL,
+  log_path varchar(200) DEFAULT NULL,
+  alert_flag tinyint(4) DEFAULT NULL,
+  retry_times int(4) DEFAULT '0',
+  pid int(4) DEFAULT NULL,
+  app_link text,
+  task_params text,
+  flag tinyint(4) DEFAULT '1',
+  retry_interval int(4) DEFAULT NULL,
+  max_retry_times int(2) DEFAULT NULL,
+  task_instance_priority int(11) DEFAULT NULL,
+  worker_group varchar(64) DEFAULT NULL,
+  executor_id int(11) DEFAULT NULL,
+  first_submit_time datetime DEFAULT NULL,
+  delay_time int(4) DEFAULT '0',
+  var_pool longtext,
+  PRIMARY KEY (id),
+  FOREIGN KEY (process_instance_id) REFERENCES t_ds_process_instance (id) ON DELETE CASCADE
+) ;
+
+-- ----------------------------
+-- Records of t_ds_task_instance
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_tenant
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_tenant;
+CREATE TABLE t_ds_tenant (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  tenant_code varchar(64) DEFAULT NULL,
+  description varchar(255) DEFAULT NULL,
+  queue_id int(11) DEFAULT NULL,
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_tenant
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_udfs
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_udfs;
+CREATE TABLE t_ds_udfs (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  user_id int(11) NOT NULL,
+  func_name varchar(100) NOT NULL,
+  class_name varchar(255) NOT NULL,
+  type tinyint(4) NOT NULL,
+  arg_types varchar(255) DEFAULT NULL,
+  database varchar(255) DEFAULT NULL,
+  description varchar(255) DEFAULT NULL,
+  resource_id int(11) NOT NULL,
+  resource_name varchar(255) NOT NULL,
+  create_time datetime NOT NULL,
+  update_time datetime NOT NULL,
+  PRIMARY KEY (id)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_udfs
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_user
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_user;
+CREATE TABLE t_ds_user (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  user_name varchar(64) DEFAULT NULL,
+  user_password varchar(64) DEFAULT NULL,
+  user_type tinyint(4) DEFAULT NULL,
+  email varchar(64) DEFAULT NULL,
+  phone varchar(11) DEFAULT NULL,
+  tenant_id int(11) DEFAULT NULL,
+  create_time datetime DEFAULT NULL,
+  update_time datetime DEFAULT NULL,
+  queue varchar(64) DEFAULT NULL,
+  state int(1) DEFAULT 1,
+  PRIMARY KEY (id),
+  UNIQUE KEY user_name_unique (user_name)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_user
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_worker_group
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_worker_group;
+CREATE TABLE t_ds_worker_group (
+  id bigint(11) NOT NULL AUTO_INCREMENT,
+  name varchar(255) NOT NULL,
+  addr_list text NULL DEFAULT NULL,
+  create_time datetime NULL DEFAULT NULL,
+  update_time datetime NULL DEFAULT NULL,
+  PRIMARY KEY (id),
+  UNIQUE KEY name_unique (name)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_worker_group
+-- ----------------------------
+
+-- ----------------------------
+-- Table structure for t_ds_version
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_version;
+CREATE TABLE t_ds_version (
+  id int(11) NOT NULL AUTO_INCREMENT,
+  version varchar(200) NOT NULL,
+  PRIMARY KEY (id),
+  UNIQUE KEY version_UNIQUE (version)
+) ;
+
+-- ----------------------------
+-- Records of t_ds_version
+-- ----------------------------
+INSERT INTO t_ds_version VALUES ('1', '1.4.0');
+
+
+-- ----------------------------
+-- Records of t_ds_alertgroup
+-- ----------------------------
+INSERT INTO t_ds_alertgroup(alert_instance_ids, create_user_id, group_name, description, create_time, update_time)
+VALUES ('1,2', 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39');
+
+-- ----------------------------
+-- Records of t_ds_user
+-- ----------------------------
+INSERT INTO t_ds_user
+VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', '[email protected]', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1);
+
+-- ----------------------------
+-- Table structure for t_ds_plugin_define
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_plugin_define;
+CREATE TABLE t_ds_plugin_define (
+  id int NOT NULL AUTO_INCREMENT,
+  plugin_name varchar(100) NOT NULL,
+  plugin_type varchar(100) NOT NULL,
+  plugin_params text,
+  create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
+  update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+  PRIMARY KEY (id),
+  UNIQUE KEY t_ds_plugin_define_UN (plugin_name,plugin_type)
+);
+
+-- ----------------------------
+-- Table structure for t_ds_alert_plugin_instance
+-- ----------------------------
+DROP TABLE IF EXISTS t_ds_alert_plugin_instance;
+CREATE TABLE t_ds_alert_plugin_instance (
+  id int NOT NULL AUTO_INCREMENT,
+  plugin_define_id int NOT NULL,
+  plugin_instance_params text,
+  create_time timestamp NULL DEFAULT CURRENT_TIMESTAMP,
+  update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
+  instance_name varchar(200) DEFAULT NULL,
+  PRIMARY KEY (id)
+);
diff --git a/style/checkstyle-suppressions.xml b/style/checkstyle-suppressions.xml
deleted file mode 100644
index 50cf910..0000000
--- a/style/checkstyle-suppressions.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-<?xml version="1.0"?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<!DOCTYPE suppressions PUBLIC
-     "-//Puppy Crawl//DTD Suppressions 1.0//EN"
-     "http://www.puppycrawl.com/dtds/suppressions_1_0.dtd">
- 
-<suppressions>
-
-</suppressions>
diff --git a/style/checkstyle.xml b/style/checkstyle.xml
index 2dba3b9..4d95262 100644
--- a/style/checkstyle.xml
+++ b/style/checkstyle.xml
@@ -29,9 +29,9 @@
         <property name="eachLine" value="true"/>
     </module>
 
-    <module name="SuppressionFilter">
-        <property name="file" value="${checkstyle.suppressions.file}" default="checkstyle-suppressions.xml"/>
-        <property name="optional" value="true"/>
+    <module name="LineLength">
+        <property name="max" value="200"/>
+        <property name="ignorePattern" value="^ *\* *[^ ]+$"/>
     </module>
 
     <module name="LineLength">
@@ -282,4 +282,4 @@
         <module name="AvoidStarImport"/>
 
     </module>
-</module>
\ No newline at end of file
+</module>
diff --git a/tools/dependencies/known-dependencies.txt b/tools/dependencies/known-dependencies.txt
index 8ff0c29..a48d40f 100755
--- a/tools/dependencies/known-dependencies.txt
+++ b/tools/dependencies/known-dependencies.txt
@@ -49,6 +49,7 @@ cron-utils-5.0.5.jar
 curator-client-4.3.0.jar
 curator-framework-4.3.0.jar
 curator-recipes-4.3.0.jar
+curator-test-2.12.0.jar
 curvesapi-1.06.jar
 datanucleus-api-jdo-4.2.1.jar
 datanucleus-core-4.1.6.jar

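The JsonSplit schema above replaces the old embedded task JSON with code/version links between tables: each row of t_ds_process_task_relation is one DAG edge from pre_task_code to post_task_code within a process definition. A minimal sketch of how such edge rows can be queried, using an in-memory SQLite stand-in with simplified column types (the table and column names come from the DDL above; the specific task codes 100/200/300 and definition code 10 are invented for illustration):

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL table above (simplified types).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t_ds_process_task_relation (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  project_code INTEGER NOT NULL,
  process_definition_code INTEGER NOT NULL,
  pre_task_code INTEGER NOT NULL,
  post_task_code INTEGER NOT NULL
);
""")

# Two hypothetical DAG edges: task 100 -> task 200, and task 200 -> task 300.
conn.executemany(
    "INSERT INTO t_ds_process_task_relation "
    "(project_code, process_definition_code, pre_task_code, post_task_code) "
    "VALUES (?, ?, ?, ?)",
    [(1, 10, 100, 200), (1, 10, 200, 300)],
)

# Tasks immediately downstream of task code 100 in definition 10.
rows = conn.execute(
    "SELECT post_task_code FROM t_ds_process_task_relation "
    "WHERE process_definition_code = 10 AND pre_task_code = 100"
).fetchall()
print(rows)  # [(200,)]
```

Because edges reference task codes rather than inlined JSON, the t_ds_process_task_relation_log table can keep the same row shape plus operator/operate_time for auditing each edit.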