This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 2ccf1f135d [feat:3945] add release note (#615)
2ccf1f135d is described below

commit 2ccf1f135da6a09f7fdee9945493c1863ef25774
Author: aiceflower <[email protected]>
AuthorDate: Sat Dec 3 12:34:03 2022 +0800

    [feat:3945] add release note (#615)
    
    * add release note, version overlook, trino engineplugin
---
 blog/2022-12-02-material-manage/img/bml.jpg        | Bin 0 -> 73552 bytes
 blog/2022-12-02-material-manage/index.md           |  53 +++++
 docs/development/debug.md                          |   2 +-
 docs/engine-usage/images/check-seatunnel.png       | Bin 0 -> 107950 bytes
 docs/engine-usage/images/trino-config.png          | Bin 0 -> 113587 bytes
 docs/engine-usage/seatunnel.md                     | 254 +++++++++++++++++++++
 docs/engine-usage/trino.md                         | 243 ++++++++++++++++++++
 docs/introduction.md                               |   2 +-
 docs/release.md                                    |  79 +++----
 download/release-notes-1.3.1.md                    |  77 +++++++
 .../2022-12-02-material-manage/img/bml.jpg         | Bin 0 -> 73552 bytes
 .../2022-12-02-material-manage/index.md            |  53 +++++
 .../current/release-notes-1.3.1.md                 |  77 +++++++
 .../current/development/debug.md                   |   2 +-
 .../engine-usage/images/check-seatunnel.png        | Bin 0 -> 107950 bytes
 .../current/engine-usage/images/trino-config.png   | Bin 0 -> 90625 bytes
 .../current/engine-usage/seatunnel.md              | 254 +++++++++++++++++++++
 .../current/engine-usage/trino.md                  | 243 ++++++++++++++++++++
 .../current/release.md                             |  69 ++----
 19 files changed, 1312 insertions(+), 96 deletions(-)

diff --git a/blog/2022-12-02-material-manage/img/bml.jpg 
b/blog/2022-12-02-material-manage/img/bml.jpg
new file mode 100644
index 0000000000..d81b3bb5c9
Binary files /dev/null and b/blog/2022-12-02-material-manage/img/bml.jpg differ
diff --git a/blog/2022-12-02-material-manage/index.md 
b/blog/2022-12-02-material-manage/index.md
new file mode 100644
index 0000000000..f966efb0ca
--- /dev/null
+++ b/blog/2022-12-02-material-manage/index.md
@@ -0,0 +1,53 @@
+---
+title: Engine Material Management
+authors: [aiceflower]
+tags: [bml,linkis1.3.1]
+---
+# Overview
+
+## Background
+
+Engine material management is the Linkis subsystem that manages engine material files. It stores users' various engine files, together with information such as engine type and engine version. The overall flow is as follows: a compressed engine file is uploaded through the front-end browser to the material library (BML), where it is decompressed and verified; when a task needs to run and the engine does not exist locally, the engine is looked up in the material library, downloaded, installed, and registered for execution.
+
+It has the following features:
+
+1) Support for uploading packaged engine files. The size of an uploaded file is limited by the nginx configuration, and the file must be a zip archive. Packaging zip archives manually under Windows is not supported.
+
+2) Support for updating existing engine materials. Each update adds a new storage version of the engine material in BML, and the current version can be rolled back or deleted.
+
+3) An engine involves two engine materials, lib and conf, which can be managed separately.
+
+## Architecture Diagram
+
+![](./img/bml.jpg)
+
+## Architecture Description
+
+1. Engine material management requires administrator privileges in the Linkis web management console; during development and debugging, the administrator field of the test environment needs to be set.
+
+2. Engine material management covers adding, updating, and deleting engine material files. Material files are divided into lib and conf and stored separately. Two version concepts are involved: the version of the engine itself and the material version. During an update, if the material has changed, a new material version is created and stored in BML; material versions can be deleted and rolled back.
+
+3. The BML service stores the engine material files: files are stored through RPC calls to BML, and the returned resource id and version are saved.
+
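+As a hedged illustration, assuming the default `linkis` MySQL database and example connection details, the saved resource ids and versions can be inspected like this:
+
+```shell
+# Show the most recently updated engine materials and their BML coordinates
+mysql -u linkis -p linkis -e \
+  "SELECT engine_conn_type, version, bml_resource_id, bml_resource_version, last_update_time
+     FROM linkis_cg_engine_conn_plugin_bml_resources
+     ORDER BY last_update_time DESC LIMIT 5;"
+```
+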
+### Core process
+
+1. Upload an engine plugin file of zip type. It is first stored in the engine plugin home directory and decompressed, and then the refresh program is started.
+2. Compress the conf and lib directories of the decompressed engine file, upload them to BML (the material management system), obtain the corresponding BML resource id and resource version, and read the corresponding engine name and version information (a sketch of this step follows the list).
+3. Add a new record to the engine material resource table; each upload produces one lib and one conf record. Besides the engine's name and type, the record most importantly stores the engine's information in the material management system, including its resource id and version, which link to the resource table in BML.
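+
+A minimal sketch of the re-compression in step 2, assuming a Seatunnel 2.1.2 plugin and illustrative paths (`${ENGINE_PLUGIN_HOME}` is a placeholder, not a documented variable):
+
+```shell
+# Re-pack the unpacked engine's conf and lib directories before uploading to BML
+cd ${ENGINE_PLUGIN_HOME}/seatunnel/dist/v2.1.2
+zip -r lib.zip lib/
+zip -r conf.zip conf/
+```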
+
+## Database Design
+
+Engine Material Resource Information Table 
(linkis_cg_engine_conn_plugin_bml_resources)
+
+| Field name | Function | Remarks |
+| --- | --- | --- |
+| id | Engine material package id | Primary key |
+| engine_conn_type | Engine type | such as Spark |
+| version | Engine version | such as Spark's v2.4.3 |
+| file_name | Engine file name | such as lib.zip |
+| file_size | Engine file size | |
+| last_modified | Last modification time of the file | |
+| bml_resource_id | Id of the resource recorded in BML (the material management system) | Identifies the engine file in BML |
+| bml_resource_version | Version of the resource recorded in BML | such as v000001 |
+| create_time | Resource creation time | |
+| last_update_time | Last update time of the resource | |
\ No newline at end of file
diff --git a/docs/development/debug.md b/docs/development/debug.md
index 22dbbfc12b..94f5a566a1 100644
--- a/docs/development/debug.md
+++ b/docs/development/debug.md
@@ -512,4 +512,4 @@ Open the window as shown below and configure the remote debugging port, service,
 ### 4.5 Start debugging
 
 Click the debug button, and the following information appears, indicating that 
you can start debugging
-![debug](https://user-images.githubusercontent.com/29391030/163559920-05aba3c3-b146-4f62-8e20-93f94a65158d.png)
\ No newline at end of file
+![debug](https://user-images.githubusercontent.com/29391030/163559920-05aba3c3-b146-4f62-8e20-93f94a65158d.png)
diff --git a/docs/engine-usage/images/check-seatunnel.png 
b/docs/engine-usage/images/check-seatunnel.png
new file mode 100644
index 0000000000..982c227195
Binary files /dev/null and b/docs/engine-usage/images/check-seatunnel.png differ
diff --git a/docs/engine-usage/images/trino-config.png 
b/docs/engine-usage/images/trino-config.png
new file mode 100644
index 0000000000..b6dc459a2f
Binary files /dev/null and b/docs/engine-usage/images/trino-config.png differ
diff --git a/docs/engine-usage/seatunnel.md b/docs/engine-usage/seatunnel.md
new file mode 100644
index 0000000000..68917ac4d2
--- /dev/null
+++ b/docs/engine-usage/seatunnel.md
@@ -0,0 +1,254 @@
+---
+title: Seatunnel Engine
+sidebar_position: 14
+---
+
+This article mainly introduces the installation, usage and configuration of 
the `Seatunnel` engine plugin in `Linkis`.
+
+## 1. Pre-work
+
+### 1.1 Engine installation
+
+If you want to use `Seatunnel` engine on your `Linkis` service, you need to 
install `Seatunnel` engine. Moreover, `Seatunnel` depends on the `Spark` or 
`Flink` environment. Before using the `linkis-seatunnel` engine, it is strongly 
recommended to run through the `Seatunnel` environment locally.
+
+`Seatunnel 2.1.2` download address: 
https://dlcdn.apache.org/incubator/seatunnel/2.1.2/apache-seatunnel-incubating-2.1.2-bin.tar.gz
+
+| Environment variable name | Environment variable content | Required |
+|---------------------------|------------------------------|----------|
+| JAVA_HOME | JDK installation path | Required |
+| SEATUNNEL_HOME | Seatunnel installation path | Required |
+| SPARK_HOME | Spark installation path | Required when Seatunnel runs on Spark |
+| FLINK_HOME | Flink installation path | Required when Seatunnel runs on Flink |
+
+Table 1-1 Environment configuration list
+
+| Linkis variable name | Variable content | Required |
+| -------------------- | ---------------- | -------- |
+| wds.linkis.engine.seatunnel.plugin.home | Seatunnel installation path | Yes |
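+
+A minimal environment sketch, assuming example installation paths (the values below are illustrative; adjust them to your own layout):
+
+```shell
+export JAVA_HOME=/usr/lib/jvm/java-8-openjdk            # JDK installation path
+export SEATUNNEL_HOME=/opt/apache-seatunnel-incubating-2.1.2
+export SPARK_HOME=/opt/spark    # only when Seatunnel runs on Spark
+export FLINK_HOME=/opt/flink    # only when Seatunnel runs on Flink
+```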
+
+### 1.2 Engine Environment Verification
+
+Take the execution of `Spark` tasks as an example
+
+```shell
+cd $SEATUNNEL_HOME
+./bin/start-seatunnel-spark.sh --master local[4] --deploy-mode client --config ./config/spark.batch.conf.template
+```
+The output is as follows:
+
+![](./images/check-seatunnel.png)
+
+## 2. Engine plugin deployment
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plugin package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plugin separately (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/seatunnel/
+mvn clean install
+# the compiled engine plugin package is in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/seatunnel/target/out/
+```
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine package in 2.1 to the engine directory of the server
+```bash
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── seatunnel
+│   ├── dist
+│   │   └── v2.1.2
+│   │       ├── conf
+│   │       └── lib
+│   └── plugin
+│       └── 2.1.2
+```
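+
+A hedged sketch of this step, assuming the plugin package from 2.1 is archived as `seatunnel.zip` and `${LINKIS_HOME}` is set on the server (host and user names are examples):
+
+```shell
+# Copy the packaged plugin to the server and unpack it into the engine directory;
+# single quotes let ${LINKIS_HOME} expand on the remote shell
+scp seatunnel.zip hadoop@linkis-server:/tmp/
+ssh hadoop@linkis-server 'unzip /tmp/seatunnel.zip -d ${LINKIS_HOME}/lib/linkis-engineplugins/'
+```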
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check whether the engine was refreshed successfully
+You can check whether the `last_update_time` of the `linkis_engine_conn_plugin_bml_resources` table in the database is the time the refresh was triggered.
+
+```sql
+-- log in to the `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+
+```shell
+sh ./bin/linkis-cli --mode once -code 'test' -engineType seatunnel-2.1.2 -codeType sspark -labelMap userCreator=hadoop-seatunnel -labelMap engineConnMode=once -jobContentMap code='env {
+   spark.app.name = "SeaTunnel"
+   spark.executor.instances = 2
+   spark.executor.cores = 1
+   spark.executor.memory = "1g"
+   }
+   source {
+     Fake {
+       result_table_name = "my_dataset"
+     }
+   }
+   transform {}
+   sink {Console {}}' -jobContentMap master=local[4] -jobContentMap deploy-mode=client -sourceMap jobName=OnceJobTest -submitUser hadoop -proxyUser hadoop
+```
+
+### 3.2 Submit tasks through OnceEngineConn
+
+OnceEngineConn calls the createEngineConn interface of LinkisManager through LinkisManagerClient, sends the code to the created Seatunnel engine, and the Seatunnel engine then executes it. Using the client is simple: first create a new maven project, or add the following dependency to your project
+
+```xml
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-computation-client</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+
+**Example Code**
+```java
+package org.apache.linkis.computation.client;
+import org.apache.linkis.common.conf.Configuration;
+import 
org.apache.linkis.computation.client.once.simple.SubmittableSimpleOnceJob;
+import org.apache.linkis.computation.client.utils.LabelKeyUtils;
+public class SeatunnelOnceJobTest {
+    public static void main(String[] args) {
+        LinkisJobClient.config().setDefaultServerUrl("http://ip:9001");
+        String code =
+                "\n"
+                        + "env {\n"
+                        + " spark.app.name = \"SeaTunnel\"\n"
+                        + "spark.executor.instances = 2\n"
+                        + "spark.executor.cores = 1\n"
+                        + " spark.executor.memory = \"1g\"\n"
+                        + "}\n"
+                        + "\n"
+                        + "source {\n"
+                        + "Fake {\n"
+                        + " result_table_name = \"my_dataset\"\n"
+                        + " }\n"
+                        + "\n"
+                        + "}\n"
+                        + "\n"
+                        + "transform {\n"
+                        + "}\n"
+                        + "\n"
+                        + "sink {\n"
+                        + " Console {}\n"
+                        + "}";
+        SubmittableSimpleOnceJob onceJob =
+                LinkisJobClient.once()
+                        .simple()
+                        .builder()
+                        .setCreateService("seatunnel-Test")
+                        .setMaxSubmitTime(300000)
+                        .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY(), 
"seatunnel-2.1.2")
+                        .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY(), 
"hadoop-seatunnel")
+                        .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY(), 
"once")
+                        .addStartupParam(Configuration.IS_TEST_MODE().key(), 
true)
+                        .addExecuteUser("hadoop")
+                        .addJobContent("runType", "sspark")
+                        .addJobContent("code", code)
+                        .addJobContent("master", "local[4]")
+                        .addJobContent("deploy-mode", "client")
+                        .addSource("jobName", "OnceJobTest")
+                        .build();
+        onceJob.submit();
+        System.out.println(onceJob.getId());
+        onceJob.waitForCompleted();
+        System.out.println(onceJob.getStatus());
+        LinkisJobMetrics jobMetrics = onceJob.getJobMetrics();
+        System.out.println(jobMetrics.getMetrics());
+    }
+}
+```
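+
+Assuming the class above was added to a maven project with the dependency shown earlier, one hypothetical way to run it is via the exec plugin (plugin resolution and versions depend on your setup):
+
+```shell
+# Compile and run the once-job example; the main class name matches the sketch above
+mvn -q compile exec:java \
+  -Dexec.mainClass=org.apache.linkis.computation.client.SeatunnelOnceJobTest
+```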
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Description | Required |
+| ------------- | ------- | ----------- | -------- |
+| wds.linkis.engine.seatunnel.plugin.home | /opt/linkis/seatunnel | Seatunnel installation path | true |
+### 4.2 Configuration modification
+
+If the default parameters are not sufficient, some basic parameters can be configured in the following ways
+
+#### 4.2.1 Client Configuration Parameters
+
+```shell
+sh ./bin/linkis-cli --mode once -code 'test' \
+-engineType seatunnel-2.1.2 -codeType sspark \
+-labelMap userCreator=hadoop-seatunnel -labelMap engineConnMode=once \
+-jobContentMap code='env {
+   spark.app.name = "SeaTunnel"
+   spark.executor.instances = 2
+   spark.executor.cores = 1
+   spark.executor.memory = "1g"
+   }
+   source {
+     Fake {
+       result_table_name = "my_dataset"
+     }
+   }
+   transform {}
+   sink {Console {}}' -jobContentMap master=local[4] \
+   -jobContentMap deploy-mode=client \
+   -sourceMap jobName=OnceJobTest \
+   -runtimeMap wds.linkis.engine.seatunnel.plugin.home=/opt/linkis/seatunnel \
+   -submitUser hadoop -proxyUser hadoop
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+    "executionContent": {"code": 'env {
+    spark.app.name = "SeaTunnel"
+    spark.executor.instances = 2
+    spark.executor.cores = 1
+    spark.executor.memory = "1g"
+    }
+    source {
+        Fake {
+            result_table_name = "my_dataset"
+        }
+    }
+    transform {}
+    sink {Console {}}',
+    "runType": "sql"},
+    "params": {
+        "variable": {},
+        "configuration": {
+                "runtime": {
+                    
"wds.linkis.engine.seatunnel.plugin.home":"/opt/linkis/seatunnel"
+                    }
+                }
+        },
+    "labels": {
+        "engineType": "seatunnel-2.1.2",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
\ No newline at end of file
diff --git a/docs/engine-usage/trino.md b/docs/engine-usage/trino.md
new file mode 100644
index 0000000000..0c4c7dfff9
--- /dev/null
+++ b/docs/engine-usage/trino.md
@@ -0,0 +1,243 @@
+---
+title: Trino Engine
+sidebar_position: 13
+---
+
+This article mainly introduces the installation, use and configuration of the 
`Trino` engine plugin in `Linkis`.
+
+
+## 1. Pre-work
+
+### 1.1 Engine installation
+
+If you want to use the `Trino` engine on your `Linkis` service, you need to install the `Trino` service and make sure it is available.
+
+### 1.2 Service Verification
+
+```shell
+# prepare trino-cli
+wget https://repo1.maven.org/maven2/io/trino/trino-cli/374/trino-cli-374-executable.jar
+mv trino-cli-374-executable.jar trino-cli
+chmod +x trino-cli
+
+# Execute the task
+./trino-cli --server localhost:8080 --execute 'show tables from system.jdbc'
+
+# The following output indicates that the service is available
+"attributes"
+"catalogs"
+"columns"
+"procedure_columns"
+"procedures"
+"pseudo_columns"
+"schemas"
+"super_tables"
+"super_types"
+"table_types"
+"tables"
+"types"
+"udts"
+```
+
+## 2. Engine plugin deployment
+
+### 2.1 Engine plugin preparation (choose one) [non-default engine](./overview.md)
+
+Method 1: Download the engine plugin package directly
+
+[Linkis Engine Plugin Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plugin separately (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/trino/
+mvn clean install
+# the compiled engine plugin package is in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/trino/target/out/
+```
+[EngineConnPlugin Engine Plugin Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine package in 2.1 to the engine directory of the server
+```bash 
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── trino
+│   ├── dist
+│   │   └── v371
+│   │       ├── conf
+│   │       └── lib
+│   └── plugin
+│       └── 371
+```
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check whether the engine was refreshed successfully
+You can check whether the `last_update_time` of the `linkis_engine_conn_plugin_bml_resources` table in the database is the time the refresh was triggered.
+
+```sql
+-- log in to the `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+```shell
+ sh ./bin/linkis-cli -submitUser hadoop \
+ -engineType trino-371 -codeType sql \
+ -code 'select * from system.jdbc.schemas limit 10' \
+ -runtimeMap linkis.trino.url=http://127.0.0.1:8080
+```
+
+If the management console, task interface, and configuration file have not been configured (see 4.2 for how), the parameters can be passed through the `-runtimeMap` option of the `Linkis-cli` client
+
+```shell
+sh ./bin/linkis-cli -engineType trino-371 \
+-codeType  sql -code 'select * from system.jdbc.schemas limit 10;'  \
+-runtimeMap linkis.trino.url=http://127.0.0.1:8080 \
+-runtimeMap linkis.trino.catalog=hive \
+-runtimeMap linkis.trino.schema=default \
+-submitUser hadoop -proxyUser hadoop
+```
+
+More `Linkis-Cli` command parameter reference: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Description | Required |
+| ------------- | ------- | ----------- | -------- |
+| linkis.trino.url | http://127.0.0.1:8080 | Trino cluster connection URL | true |
+| linkis.trino.default.limit | 5000 | Limit on the number of result sets | false |
+| linkis.trino.http.connectTimeout | 60 | Connection timeout (seconds) | false |
+| linkis.trino.http.readTimeout | 60 | Transmission timeout (seconds) | false |
+| linkis.trino.resultSet.cache.max | 512k | Result set buffer size | false |
+| linkis.trino.user | null | Username | false |
+| linkis.trino.password | null | Password | false |
+| linkis.trino.passwordCmd | null | Password callback command | false |
+| linkis.trino.catalog | system | Catalog | false |
+| linkis.trino.schema | null | Schema | false |
+| linkis.trino.ssl.insecured | false | Whether to verify the SSL certificate | false |
+| linkis.engineconn.concurrent.limit | 100 | Maximum number of concurrent engines | false |
+| linkis.trino.ssl.keystore | null | keystore path | false |
+| linkis.trino.ssl.keystore.password | null | keystore password | false |
+| linkis.trino.ssl.keystore.type | null | keystore type | false |
+| linkis.trino.ssl.truststore | null | truststore path | false |
+| linkis.trino.ssl.truststore.type | null | truststore type | false |
+| linkis.trino.ssl.truststore.password | null | truststore password | false |
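+
+As a hedged illustration of the optional SSL-related settings above, they can be passed per task via `-runtimeMap` (the URL and paths below are examples, not defaults):
+
+```shell
+sh ./bin/linkis-cli -engineType trino-371 -codeType sql \
+ -code 'select 1' \
+ -runtimeMap linkis.trino.url=https://trino-host:8443 \
+ -runtimeMap linkis.trino.ssl.truststore=/etc/trino/truststore.jks \
+ -runtimeMap linkis.trino.ssl.truststore.password=changeit \
+ -submitUser hadoop -proxyUser hadoop
+```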
+
+### 4.2 Configuration modification
+
+If the default parameters are not sufficient, some basic parameters can be configured in the following ways
+
+#### 4.2.1 Management console configuration
+
+![](./images/trino-config.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` for it to take effect (other tags are similar), for example:
+
+```shell
+sh ./bin/linkis-cli -creator IDE -submitUser hadoop \
+ -engineType trino-371 -codeType sql \
+ -code 'select * from system.jdbc.schemas limit 10' \
+ -runtimeMap linkis.trino.url=http://127.0.0.1:8080
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+    "executionContent": {"code": "select * from system.jdbc.schemas limit 
10;", "runType":  "sql"},
+    "params": {
+                    "variable": {},
+                    "configuration": {
+                            "runtime": {
+                                "linkis.trino.url": "http://127.0.0.1:8080",
+                                "linkis.trino.catalog": "hive",
+                                "linkis.trino.schema": "default"
+                                }
+                            }
+                    },
+    "labels": {
+        "engineType": "trino-371",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
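+
+A hedged sketch of submitting this request body, assuming a default gateway at port 9001 and token authentication (the token values and file name are examples):
+
+```shell
+# POST the JSON above (saved as task.json) to the entrance submit API
+curl -X POST "http://127.0.0.1:9001/api/rest_j/v1/entrance/submit" \
+  -H "Content-Type: application/json" \
+  -H "Token-Code: YOUR-TOKEN" -H "Token-User: hadoop" \
+  -d @task.json
+```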
+
+### 4.3 Engine related data table
+
+`Linkis` manages engines through tags; the data tables involved are listed below.
+
+```
+linkis_ps_configuration_config_key: keys and default values of the engine's configuration parameters
+linkis_cg_manager_label: engine labels, such as trino-375
+linkis_ps_configuration_category: directory associations of the engine
+linkis_ps_configuration_config_value: configurations displayed by the engine
+linkis_ps_configuration_key_engine_relation: relationships between configuration items and engines
+```
+
+The initial data related to the engine in the table is as follows
+
+
+```sql
+-- set variable
+SET @TRINO_LABEL="trino-371";
+SET @TRINO_IDE=CONCAT('*-IDE,',@TRINO_LABEL);
+SET @TRINO_ALL=CONCAT('*-*,',@TRINO_LABEL);
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, 
`label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES 
('combined_userCreator_engineType', @TRINO_IDE, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, 
`label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES 
('combined_userCreator_engineType', @TRINO_ALL, 'OPTIONAL', 2, now(), now());
+select @label_id := id from `linkis_cg_manager_label` where label_value = 
@TRINO_IDE;
+insert into `linkis_ps_configuration_category` (`label_id`, `level`) VALUES 
(@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.default.limit', 'The limit on the number of query result sets 
returned', 'The limit on the number of result sets', '5000', 'None', '', 
'trino', 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.http.connectTimeout', 'Timeout for connecting to Trino server', 
'Connection timeout (seconds)', '60', 'None', '', 'trino', 0, 0, 1 , 'Data 
Source Configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.http.readTimeout', 'Timeout waiting for Trino server to return 
data', 'Transmission timeout (seconds)', '60', 'None', '', 'trino', 0, 0 , 1, 
'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.resultSet.cache.max', 'Trino result set buffer size', 'Result 
set buffer', '512k', 'None', '', 'trino', 0, 0, 1 , 'Data Source 
Configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.url', 
'Trino server URL', 'Trino server URL', 'http://127.0.0.1:9401', 'None', '', 
'trino', 0, 0, 1 , 'Data Source Configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.user', 
'username used to connect to Trino query service', 'username', 'null', 'None', 
'', 'trino', 0, 0, 1, 'data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.password', 'Password for connecting Trino query service', 
'password', 'null', 'None', '', 'trino', 0, 0, 1, 'data source configuration ');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.passwordCmd', 'Password callback command for connecting to Trino 
query service', 'Password callback command', 'null', 'None', '', 'trino', 0, 0, 
1, 'Datasource Configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.catalog', 'catalog', 'Catalog', 'system', 'None', '', 'trino', 
0, 0, 1, 'data source configuration' );
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.schema', 
'The default schema for connecting Trino query service', 'Schema', '', 'None', 
'', 'trino', 0, 0, 1, 'Data source configuration') ;
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.insecured', 'Whether to ignore the server''s SSL certificate', 'Verify SSL certificate', 'false', 'None', '', 'trino', 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.engineconn.concurrent.limit', 'Engine maximum concurrency', 'Engine 
maximum concurrency', '100', 'None', '', 'trino', 0, 0, 1, 'Data source 
configuration' );
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.ssl.keystore', 'Trino server SSL keystore path', 'keystore 
path', 'null', 'None', '', 'trino', 0, 0, 1, 'data source configuration ');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.ssl.keystore.type', 'Trino server SSL keystore type', 'keystore 
type', 'null', 'None', '', 'trino', 0, 0, 1, 'data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.ssl.keystore.password', 'Trino server SSL keystore password', 
'keystore password', 'null', 'None', '', 'trino', 0, 0, 1, 'data source 
configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.ssl.truststore', 'Trino server SSL truststore path', 'truststore 
path', 'null', 'None', '', 'trino', 0, 0, 1, 'data source configuration ');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.ssl.truststore.type', 'Trino server SSL truststore type', 
'truststore type', 'null', 'None', '', 'trino', 0, 0, 1, 'data source 
configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, 
`name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, 
`is_hidden`, `is_advanced`, `level`, `treeName`) VALUES 
('linkis.trino.ssl.truststore.password', 'Trino server SSL truststore 
password', 'truststore password', 'null', 'None', '', 'trino', 0, 0, 1, 'data 
source configuration');
+
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, 
`engine_type_label_id`)
+(select config.id as config_key_id, label.id AS engine_type_label_id FROM 
`linkis_ps_configuration_config_key` config
+INNER JOIN `linkis_cg_manager_label` label ON config.engine_conn_type = 
'trino' and label_value = @TRINO_ALL);
+
+-- engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, 
`config_value`, `config_label_id`)
+(select relation.config_key_id AS config_key_id, '' AS config_value, 
relation.engine_type_label_id AS config_label_id FROM 
`linkis_ps_configuration_key_engine_relation` relation
+INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = 
label.id AND label.label_value = @TRINO_ALL);
+```
\ No newline at end of file
diff --git a/docs/introduction.md b/docs/introduction.md
index 70d345eafe..c7015644fd 100644
--- a/docs/introduction.md
+++ b/docs/introduction.md
@@ -1,6 +1,6 @@
 ---
 title: Introduction
-sidebar_position: 0
+sidebar_position: 0.1
 ---
 
  Linkis builds a layer of computation middleware between upper applications 
and underlying engines. By using standard interfaces such as REST/WS/JDBC 
provided by Linkis, the upper applications can easily access the underlying 
engines such as MySQL/Spark/Hive/Presto/Flink, etc., and achieve the 
intercommunication of user resources like unified variables, scripts, UDFs, 
functions and resource files,and provides data source and metadata management 
services through REST standard interface. a [...]
diff --git a/docs/release.md b/docs/release.md
index 3ef5ebdae1..aa4807ced6 100644
--- a/docs/release.md
+++ b/docs/release.md
@@ -1,57 +1,38 @@
 ---
 title: Version Overview
-sidebar_position: 0.1
+sidebar_position: 0
 ---
 
-- [Build Linkis Docker Image](/development/build-docker.md)
-- [Linkis Docker LDH Quick Deployment](/deployment/deploy-to-kubernetes.md)
-- [Development & Debugging with 
Kubernetes](development/debug-with-helm-charts.md)
-- [PES Public Service Group Service Merge 
Details](/blog/2022/10/09/linkis-service-merge)
-- [Session supports Redis shared storage](/user-guide/sso-with-redis.md)
+- [Trino engine usage instructions](/engine-usage/trino.md)
+- [Seatunnel Engine Usage Instructions](/engine-usage/seatunnel.md)
+- [Linkis console multi-datasource management]
+- [Multiple data sources use document]
+- [Version Release-Notes](/download/release-notes-1.3.1)
 
 
-## Configuration Item
+## Parameter changes
 
-| Module Name (Service Name) | Type | Parameter Name | Default Value | 
Description |
-| --------------- | ----- | 
-------------------------------------------------------- | ---------------- | 
------------------------------------------------------- |
-| common | ADD |linkis.session.redis.host| 127.0.0.1 | redis connection IP |
-| common | ADD |linkis.session.redis.port| 6379 | redis connection port |
-| common | ADD |linkis.session.redis.password| test123 | redis connection 
password |
-| common | ADD |linkis.session.redis.cache.enabled| false | redis sso switch |
-| ps-cs | ADD | wds.linkis.server.restful.scan.packages | 
org.apache.linkis.cs.server.restful | restful packages scan path |
-| ps-cs | ADD | wds.linkis.server.mybatis.mapperLocations | 
classpath*:org/apache/linkis/cs/persistence/dao/impl/*.xml | mapper scan path |
-| ps-cs | ADD | wds.linkis.server.mybatis.typeAliasesPackage | 
org.apache.linkis.cs.persistence.entity |  table map entity class package path |
-| ps-cs | ADD | wds.linkis.server.mybatis.BasePackage | 
org.apache.linkis.cs.persistence.dao | Mybatis package scan path |
-| ps-cs | ADD | spring.server.port | 9108 | server port |
-| ps-cs | ADD | spring.eureka.instance.metadata-map.route | cs_1_dev | ps-cs 
route prefix(must be start with cs_) |
-| ps-cs | ADD | wds.linkis.cs.deserialize.replace_package_header.enable |  
false | Whether to replace the packet header during deserialization |
-| ps-data-source-manager | ADD | wds.linkis.server.restful.scan.packages | 
org.apache.linkis.datasourcemanager.core.restful | restfu packages Scan path |
-| ps-data-source-manager | ADD | wds.linkis.server.mybatis.mapperLocations | 
classpath:org/apache/linkis/datasourcemanager/core/dao/mapper/*.xml | Mapper 
Scan path |
-| ps-data-source-manager | ADD | wds.linkis.server.mybatis.typeAliasesPackage 
| 
org.apache.linkis.datasourcemanager.common.domain,org.apache.linkis.datasourcemanager.core.vo
 |  table map entity class package path |
-| ps-data-source-manager | ADD | wds.linkis.server.mybatis.BasePackage | 
org.apache.linkis.datasourcemanager.core.dao | Mybatis package scan path |
-| ps-data-source-manager | ADD | hive.meta.url | None | hive connection ip |
-| ps-data-source-manager | ADD | hive.meta.user | None | hive connection user |
-| ps-data-source-manager | ADD | hive.meta.password | None | hive connection 
password |
-| ps-data-source-manager | ADD | wds.linkis.metadata.hive.encode.enabled | 
false | Whether to enable BASE64 codec |
-| ps-data-source-manager | ADD | spring.server.port | 9109 | server port |
-| ps-data-source-manager | ADD | 
spring.spring.main.allow-bean-definition-overriding | true | Whether beans are 
allowed to define overrides |
-| ps-data-source-manager | ADD | 
spring.jackson.serialization.FAIL_ON_EMPTY_BEANS | false | Whether empty beans 
are allowed |
-| ps-data-source-manager | ADD | 
wds.linkis.server.mdm.service.instance.expire-in-seconds | 1800 | server 
instance expire time|
-| ps-data-source-manager | ADD | wds.linkis.server.restful.scan.packages | 
org.apache.linkis.metadata.query.server.restful | restfu packages Scan path |
-| ps-data-source-manager | ADD | wds.linkis.server.dsm.app.name | 
linkis-ps-data-source-manager | server name |
-| ps-data-source-manager | ADD | spring.server.port | 9110 | server port |
-| ps-publicservice | UPDATE | wds.linkis.server.restful.scan.packages | 
org.apache.linkis.cs.server.restful,org.apache.linkis.datasourcemanager.core.restful,org.apache.linkis.metadata.query.server.restful,org.apache.linkis.jobhistory.restful,org.apache.linkis.variable.restful,org.apache.linkis.configuration.restful,org.apache.linkis.udf.api,org.apache.linkis.filesystem.restful,org.apache.linkis.filesystem.restful,org.apache.linkis.instance.label.restful,org.apache.linkis.metadata.restful
 [...]
-|ps-publicservice|UPDATE|wds.linkis.server.mybatis.mapperLocations|classpath*:org/apache/linkis/cs/persistence/dao/impl/*.xml,classpath:org/apache/linkis/datasourcemanager/core/dao/mapper/*.xml,classpath:org/apache/linkis/jobhistory/dao/impl/*.xml,classpath:org/apache/linkis/variable/dao/impl/*.xml,classpath:org/apache/linkis/configuration/dao/impl/*.xml,classpath:org/apache/linkis/udf/dao/impl/*.xml,classpath:org/apache/linkis/instance/label/dao/impl/*.xml,classpath:org/apache/linkis/me
 [...]
-|ps-publicservice|UPDATE|wds.linkis.server.mybatis.typeAliasesPackage|org.apache.linkis.cs.persistence.entity,org.apache.linkis.datasourcemanager.common.domain,org.apache.linkis.datasourcemanager.core.vo,org.apache.linkis.configuration.entity,org.apache.linkis.jobhistory.entity,org.apache.linkis.udf.entity,org.apache.linkis.variable.entity,org.apache.linkis.instance.label.entity,org.apache.linkis.manager.entity,org.apache.linkis.metadata.domain,org.apache.linkis.bml.entity|
  table map en [...]
-|ps-publicservice|UPDATE|wds.linkis.server.mybatis.BasePackage|org.apache.linkis.cs.persistence.dao,org.apache.linkis.datasourcemanager.core.dao,org.apache.linkis.jobhistory.dao,org.apache.linkis.variable.dao,org.apache.linkis.configuration.dao,org.apache.linkis.udf.dao,org.apache.linkis.instance.label.dao,org.apache.linkis.metadata.hive.dao,org.apache.linkis.metadata.dao,org.apache.linkis.bml.dao,org.apache.linkis.errorcode.server.dao,org.apache.linkis.publicservice.common.lock.dao|
  My [...]
-| ps-publicservice | ADD | 
wds.linkis.cs.deserialize.replace_package_header.enable | false | Whether to 
replace the packet header during deserialization |
-| ps-publicservice | ADD | wds.linkis.rpc.conf.enable.local.message | true | 
enable local message |
-| ps-publicservice | ADD | wds.linkis.rpc.conf.local.app.list | 
linkis-ps-publicservice | local app list |
-| ps-publicservice | ADD | spring.server.port | 9105 | server port |
-| ps-publicservice | ADD | spring.spring.main.allow-bean-definition-overriding 
| true | Whether beans are allowed to define overrides |
-| ps-publicservice | ADD | 
spring.spring.jackson.serialization.FAIL_ON_EMPTY_BEANS | false | Whether empty 
beans are allowed |
-| ps-publicservice | ADD | spring.eureka.instance.metadata-map.route | 
cs_1_dev | route prefix(must be start with cs_ |
+| Module name (service name) | Type | Parameter name | Default value | Description |
+| -------------------------- | ---- | -------------- | ------------- | ----------- |
+| ec-trino | new | linkis.trino.url | http://127.0.0.1:8080 | Trino cluster connection URL |
+| ec-trino | new | linkis.trino.default.limit | 5000 | Limit on the number of result sets |
+| ec-trino | new | linkis.trino.http.connectTimeout | 60 | Connection timeout (seconds) |
+| ec-trino | new | linkis.trino.http.readTimeout | 60 | Transmission timeout (seconds) |
+| ec-trino | new | linkis.trino.resultSet.cache.max | 512k | Result set buffer size |
+| ec-trino | new | linkis.trino.user | null | Username |
+| ec-trino | new | linkis.trino.password | null | Password |
+| ec-trino | new | linkis.trino.passwordCmd | null | Password callback command |
+| ec-trino | new | linkis.trino.catalog | system | Catalog |
+| ec-trino | new | linkis.trino.schema | null | Schema |
+| ec-trino | new | linkis.trino.ssl.insecured | false | Whether to verify the SSL certificate |
+| ec-trino | new | linkis.engineconn.concurrent.limit | 100 | Maximum number of concurrent engines |
+| ec-trino | new | linkis.trino.ssl.keystore | null | keystore path |
+| ec-trino | new | linkis.trino.ssl.keystore.password | null | keystore password |
+| ec-trino | new | linkis.trino.ssl.keystore.type | null | keystore type |
+| ec-trino | new | linkis.trino.ssl.truststore | null | truststore path |
+| ec-trino | new | linkis.trino.ssl.truststore.type | null | truststore type |
+| ec-trino | new | linkis.trino.ssl.truststore.password | null | truststore password |
+| ec-seatunnel | new | wds.linkis.engine.seatunnel.plugin.home | /opt/linkis/seatunnel | Seatunnel installation path |
 
-## DB Table Changes
-For details, see the upgrade schema`db/upgrade/1.3.0_schema` file in the 
corresponding branch of the 
-code repository (https://github.com/apache/incubator-linkis).
\ No newline at end of file
+## Database table changes
+For details, see the upgrade schema `db/upgrade/1.3.1_schema` file in the corresponding branch of the code repository (https://github.com/apache/incubator-linkis).
\ No newline at end of file
diff --git a/download/release-notes-1.3.1.md b/download/release-notes-1.3.1.md
new file mode 100644
index 0000000000..b904d14557
--- /dev/null
+++ b/download/release-notes-1.3.1.md
@@ -0,0 +1,77 @@
+---
+title: Release Notes 1.3.1
+sidebar_position: 0.16
+---
+
+Apache Linkis(incubating) 1.3.1 includes all the work in [Project Linkis-1.3.0](https://github.com/apache/incubator-linkis/projects/23).
+
+Linkis 1.3.1 mainly adds support for the Trino and SeaTunnel engines, adds a data source management module to the management console, and enhances data sources, including oracle, kingbase, postgresql, sqlserver, db2, greenplum, and dm.
+
+The main functions are as follows:
+
+* Added support for the Trino engine
+* Added support for the SeaTunnel engine
+* Added management console data source management
+* Added JDBC engine features to support Trino-driven query progress
+* Data source enhancements: oracle, kingbase, postgresql, sqlserver, db2, greenplum, dm
+
+Abbreviations:
+- COMMON: Linkis Common
+- ENTRANCE: Linkis Entrance
+- EC: Engineconn
+- ECM: EngineConnManager
+- ECP: EngineConnPlugin
+- DMS: Data Source Manager Service
+- MDS: MetaData Manager Service
+- LM: Linkis Manager
+- PS: Linkis Public Service
+- PE: Linkis Public Enhancement
+- RPC: Linkis Common RPC
+- CG: Linkis Computation Governance
+- DEPLOY: Linkis Deployment
+- WEB: Linkis Web
+- GATEWAY: Linkis Gateway
+- EP: Engine Plugin
+
+---
+
+## New features
+
++ \[DMS][LINKIS-2961](https://github.com/apache/incubator-linkis/pull/2961) 
Data source management supports multiple environments
++ \[EC][LINKIS-3458](https://github.com/apache/incubator-linkis/pull/3458) Add 
Seatunnel engine
++ \[MDS][LINKIS-3457](https://github.com/apache/incubator-linkis/pull/3457) 
Add doris/clickhouse to Linkis metadata query
++ \[DMS][LINKIS-3839](https://github.com/apache/incubator-linkis/pull/3839) 
Add necessary audit logs for data sources
++ 
\[EC-TRINO][LINKIS-2639](https://github.com/apache/incubator-linkis/pull/2639) 
add Trino engine
++ \[ECP][LINKIS-3836](https://github.com/apache/incubator-linkis/pull/3836) 
merge ECP service into appmanager
++ \[EC][LINKIS-3381](https://github.com/apache/incubator-linkis/pull/3381) The 
GetEngineNode interface supports returning complete EC information
+
+
+## Enhancement points
+
++ \[EC][LINKIS-2663](https://github.com/apache/incubator-linkis/pull/2663) 
remove subtask logic
++ \[COMMON][LINKIS-3697](https://github.com/apache/incubator-linkis/pull/3697) 
optimize Linkis script
++ 
\[MDS/DMS][LINKIS-3613](https://github.com/apache/incubator-linkis/pull/3613) 
Adjust the metadata service architecture and add support for HDFS types in the 
data source
++ \[DMS][LINKIS-3803](https://github.com/apache/incubator-linkis/pull/3803) 
optimize DsmQueryProtocol
++ \[DMS][LINKIS-3505](https://github.com/apache/incubator-linkis/pull/3505) 
Add new interface for Qualitis
++ \[DEPLOY][LINKIS-3500](https://github.com/apache/incubator-linkis/pull/3500) 
support startup script compatible with multiple service names
++ 
\[COMMON/PE][LINKIS-3349](https://github.com/apache/incubator-linkis/pull/3349) 
Add a utility class to determine if OS user exists
+
+## Bug fixes
++ \[WEB][LINKIS-2921](https://github.com/apache/incubator-linkis/pull/2921) 
batch close tasks
++ \[COMMON][LINKIS-2971](https://github.com/apache/incubator-linkis/pull/2971) 
remove netty-3.6.2.Final.jar dependency
++ 
\[EC-JDBC][LINKIS-3240](https://github.com/apache/incubator-linkis/pull/3240) 
fix JDBC executor directory
++ \[COMMON][LINKIS-3430](https://github.com/apache/incubator-linkis/pull/3430) Fix reuse of the engine configuration when restarting after the engine fails to start
++ \[COMMON][LINKIS-3234](https://github.com/apache/incubator-linkis/pull/3234) Fix linkis-storage hadoop checksum issue
++ \[][LINKIS-3347](https://github.com/apache/incubator-linkis/pull/3347) Fix 
StorageResultSetWriter close method does not support repeated calls
++ \[COMMON][LINKIS-3352](https://github.com/apache/incubator-linkis/pull/3352) 
Fix excel export: decimalType cannot be recognized and calculated
++ \[EC][LINKIS-3752](https://github.com/apache/incubator-linkis/pull/3752) Fix the inaccurate query result of the EC history list
++ \[DEPLOY][LINKIS-3726](https://github.com/apache/incubator-linkis/pull/3726) 
keep all registered service instances
++ 
\[EC-JDBC][LINKIS-3796](https://github.com/apache/incubator-linkis/pull/3796) 
handle the case where mysql link starts with JDBC
++ 
\[EC-JDBC][LINKIS-3826](https://github.com/apache/incubator-linkis/pull/3826) 
handle mysql connection parameters
++ \[PE/PS][LINKIS-3440](https://github.com/apache/incubator-linkis/pull/3440) 
Refactor some methods to prevent sql injection
++ \[PE][LINKIS-3438](https://github.com/apache/incubator-linkis/pull/3438) 
update error sql and remove redundant method
++ \[EC][LINKIS-3552](https://github.com/apache/incubator-linkis/pull/3552) fix 
ES EC actuator directory
+
+## Acknowledgments
+The release of Apache Linkis(incubating) 1.3.1 would not have been possible without the contributors of the Linkis community. Thanks to all community contributors, including but not limited to the following (in no particular order):
+AaronLinOops, Alexkun, jacktao007, legendtkl, peacewong, casionone, 
QuintinTao, cydenghua, jackxu2011, ruY9527, huiyuanjjjjuice, binbinCheng, 
yyuser5201314, Beacontownfc, duhanmin, whiterxine, aiceflower, weipengfei-sj, 
zhaoyun006, CCweixiao, Beacontownfc, mayinrain
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-blog/2022-12-02-material-manage/img/bml.jpg
 
b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-12-02-material-manage/img/bml.jpg
new file mode 100644
index 0000000000..d81b3bb5c9
Binary files /dev/null and 
b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-12-02-material-manage/img/bml.jpg
 differ
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-blog/2022-12-02-material-manage/index.md 
b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-12-02-material-manage/index.md
new file mode 100644
index 0000000000..01b736e96e
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-blog/2022-12-02-material-manage/index.md
@@ -0,0 +1,53 @@
+---
+title: Engine Material Management
+authors: [aiceflower]
+tags: [bml,linkis1.3.1]
+---
+# Overview
+
+## Background
+
+Engine material management is the Linkis subsystem that manages engine material files. It stores users' various engine files, together with information such as engine type and engine version. The overall flow is as follows: a compressed engine file is uploaded through the front-end browser to the material library (BML), where it is decompressed and verified; when a task needs to run and the engine does not exist locally, the engine is looked up in the material library, downloaded, installed, and registered for execution.
+
+It has the following features:
+
+1) Support for uploading packaged engine files. The size of an uploaded file is limited by the nginx configuration, and the file must be a zip archive. Packaging zip archives manually under Windows is not supported.
+
+2) Support for updating existing engine materials. Each update adds a new storage version of the engine material in BML, and the current version can be rolled back or deleted.
+
+3) An engine involves two engine materials, lib and conf, which can be managed separately.
+
+## Architecture Diagram
+
+![](./img/bml.jpg)
+
+## Architecture Description
+
+1. Engine material management requires administrator privileges in the Linkis web management console; during development and debugging, the administrator field of the test environment needs to be set.
+
+2. Engine material management covers adding, updating, and deleting engine material files. Material files are divided into lib and conf and stored separately. Two version concepts are involved: the version of the engine itself and the material version. During an update, if the material has changed, a new material version is created and stored in BML; material versions can be deleted and rolled back.
+
+3. The BML service stores the engine material files: files are stored through RPC calls to BML, and the returned resource id and version are saved.
+
+### Core process
+
+1. Upload an engine plugin file of zip type. It is first stored in the engine plugin home directory and decompressed, and then the refresh program is started.
+2. Compress the conf and lib directories of the decompressed engine file, upload them to BML (the material management system), obtain the corresponding BML resource id and resource version, and read the corresponding engine name and version information.
+3. Add a new record to the engine material resource table; each upload produces one lib and one conf record. Besides the engine's name and type, the record most importantly stores the engine's information in the material management system, including its resource id and version, which link to the resource table in BML.
+
+## Database Design
+
+Engine Material Resource Information Table (linkis_cg_engine_conn_plugin_bml_resources)
+
+| Field name | Function | Remarks |
+| --- | --- | --- |
+| id | Engine material package id | Primary key |
+| engine_conn_type | Engine type | such as Spark |
+| version | Engine version | such as Spark's v2.4.3 |
+| file_name | Engine file name | such as lib.zip |
+| file_size | Engine file size | |
+| last_modified | Last modification time of the file | |
+| bml_resource_id | Id of the resource recorded in BML (the material management system) | Identifies the engine file in BML |
+| bml_resource_version | Version of the resource recorded in BML | such as v000001 |
+| create_time | Resource creation time | |
+| last_update_time | Last update time of the resource | |
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.3.1.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.3.1.md
new file mode 100644
index 0000000000..3dedc87a42
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.3.1.md
@@ -0,0 +1,77 @@
+---
+title: Release Notes 1.3.1
+sidebar_position: 0.16
+---
+
+Apache Linkis(incubating) 1.3.1 includes all the work in [Project Linkis-1.3.0](https://github.com/apache/incubator-linkis/projects/23).
+
+Linkis 1.3.1 mainly adds support for the Trino and SeaTunnel engines, adds a data source management module to the management console, and enhances data sources, including oracle, kingbase, postgresql, sqlserver, db2, greenplum, and dm.
+
+The main functions are as follows:
+
+* Added support for the Trino engine
+* Added support for the SeaTunnel engine
+* Added management console data source management
+* Added JDBC engine features to support Trino-driven query progress
+* Data source enhancements: oracle, kingbase, postgresql, sqlserver, db2, greenplum, dm
+
+Abbreviations:
+- COMMON: Linkis Common
+- ENTRANCE: Linkis Entrance
+- EC: Engineconn
+- ECM: EngineConnManager
+- ECP: EngineConnPlugin
+- DMS: Data Source Manager Service
+- MDS: MetaData Manager Service
+- LM: Linkis Manager
+- PS: Linkis Public Service
+- PE: Linkis Public Enhancement
+- RPC: Linkis Common RPC
+- CG: Linkis Computation Governance
+- DEPLOY: Linkis Deployment
+- WEB: Linkis Web
+- GATEWAY: Linkis Gateway
+- EP: Engine Plugin
+
+---
+
+## New features
+
++ \[DMS][LINKIS-2961](https://github.com/apache/incubator-linkis/pull/2961) Data source management supports multiple environments
++ \[EC][LINKIS-3458](https://github.com/apache/incubator-linkis/pull/3458) Add Seatunnel engine
++ \[MDS][LINKIS-3457](https://github.com/apache/incubator-linkis/pull/3457) Add doris/clickhouse to Linkis metadata query
++ \[DMS][LINKIS-3839](https://github.com/apache/incubator-linkis/pull/3839) Add necessary audit logs for data sources
++ \[EC-TRINO][LINKIS-2639](https://github.com/apache/incubator-linkis/pull/2639) Add Trino engine
++ \[ECP][LINKIS-3836](https://github.com/apache/incubator-linkis/pull/3836) Merge the ECP service into appmanager
++ \[EC][LINKIS-3381](https://github.com/apache/incubator-linkis/pull/3381) The GetEngineNode interface supports returning complete EC information
+
+
+## Enhancement points
+
++ \[EC][LINKIS-2663](https://github.com/apache/incubator-linkis/pull/2663) Remove subtask logic
++ \[COMMON][LINKIS-3697](https://github.com/apache/incubator-linkis/pull/3697) Optimize Linkis scripts
++ \[MDS/DMS][LINKIS-3613](https://github.com/apache/incubator-linkis/pull/3613) Adjust the metadata service architecture and add support for HDFS types in data sources
++ \[DMS][LINKIS-3803](https://github.com/apache/incubator-linkis/pull/3803) Optimize DsmQueryProtocol
++ \[DMS][LINKIS-3505](https://github.com/apache/incubator-linkis/pull/3505) Add a new interface for Qualitis
++ \[DEPLOY][LINKIS-3500](https://github.com/apache/incubator-linkis/pull/3500) Support startup scripts compatible with multiple service names
++ \[COMMON/PE][LINKIS-3349](https://github.com/apache/incubator-linkis/pull/3349) Add a utility class to determine whether an OS user exists
+
+## Bug fixes
++ \[WEB][LINKIS-2921](https://github.com/apache/incubator-linkis/pull/2921) Batch close tasks
++ \[COMMON][LINKIS-2971](https://github.com/apache/incubator-linkis/pull/2971) Remove the netty-3.6.2.Final.jar dependency
++ \[EC-JDBC][LINKIS-3240](https://github.com/apache/incubator-linkis/pull/3240) Fix the JDBC executor directory
++ \[COMMON][LINKIS-3430](https://github.com/apache/incubator-linkis/pull/3430) Fix reuse of the engine configuration when restarting after the engine fails to start
++ \[COMMON][LINKIS-3234](https://github.com/apache/incubator-linkis/pull/3234) Fix linkis-storage hadoop checksum issue
++ \[][LINKIS-3347](https://github.com/apache/incubator-linkis/pull/3347) Fix StorageResultSetWriter close method not supporting repeated calls
++ \[COMMON][LINKIS-3352](https://github.com/apache/incubator-linkis/pull/3352) Fix excel export: decimalType cannot be recognized and calculated
++ \[EC][LINKIS-3752](https://github.com/apache/incubator-linkis/pull/3752) Fix the inaccurate query result of the EC history list
++ \[DEPLOY][LINKIS-3726](https://github.com/apache/incubator-linkis/pull/3726) Keep all registered service instances
++ \[EC-JDBC][LINKIS-3796](https://github.com/apache/incubator-linkis/pull/3796) Handle the case where a mysql link starts with JDBC
++ \[EC-JDBC][LINKIS-3826](https://github.com/apache/incubator-linkis/pull/3826) Handle mysql connection parameters
++ \[PE/PS][LINKIS-3440](https://github.com/apache/incubator-linkis/pull/3440) Refactor some methods to prevent sql injection
++ \[PE][LINKIS-3438](https://github.com/apache/incubator-linkis/pull/3438) Update error sql and remove redundant methods
++ \[EC][LINKIS-3552](https://github.com/apache/incubator-linkis/pull/3552) Fix the ES EC executor directory
+
+## Acknowledgments
+The release of Apache Linkis(incubating) 1.3.1 would not have been possible without the contributors of the Linkis community. Thanks to all community contributors, including but not limited to the following (in no particular order):
+AaronLinOops, Alexkun, jacktao007, legendtkl, peacewong, casionone, 
QuintinTao, cydenghua, jackxu2011, ruY9527, huiyuanjjjjuice, binbinCheng, 
yyuser5201314, Beacontownfc, duhanmin, whiterxine, aiceflower, weipengfei-sj, 
zhaoyun006, CCweixiao, Beacontownfc, mayinrain
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/debug.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/debug.md
index 677051cc6e..87184e4394 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/debug.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/debug.md
@@ -515,4 +515,4 @@ sh linkis-daemon.sh restart ps-publicservice
 ### 4.5 Start debugging
 
 Click the debug button; the following information indicates that debugging can start
-![debug](https://user-images.githubusercontent.com/29391030/163559920-05aba3c3-b146-4f62-8e20-93f94a65158d.png)
\ No newline at end of file
+![debug](https://user-images.githubusercontent.com/29391030/163559920-05aba3c3-b146-4f62-8e20-93f94a65158d.png)
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/images/check-seatunnel.png
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/images/check-seatunnel.png
new file mode 100644
index 0000000000..982c227195
Binary files /dev/null and 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/images/check-seatunnel.png
 differ
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/images/trino-config.png
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/images/trino-config.png
new file mode 100644
index 0000000000..0f7cc9c94a
Binary files /dev/null and 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/images/trino-config.png
 differ
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/seatunnel.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/seatunnel.md
new file mode 100644
index 0000000000..3b1d8eb9df
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/seatunnel.md
@@ -0,0 +1,254 @@
+---
+title: Seatunnel Engine
+sidebar_position: 14
+---
+
+This article mainly introduces the installation, usage and configuration of the `Seatunnel` engine plugin in `Linkis`.
+
+## 1. Prerequisites
+
+### 1.1 Engine installation
+
+If you want to use the `Seatunnel` engine on your `Linkis` service, you need to install `Seatunnel`. Moreover, `Seatunnel` depends on a `Spark` or `Flink` environment; before using the `linkis-seatunnel` engine, it is strongly recommended to run a `Seatunnel` job locally first.
+
+`Seatunnel 2.1.2` download address: https://dlcdn.apache.org/incubator/seatunnel/2.1.2/apache-seatunnel-incubating-2.1.2-bin.tar.gz
+
+| Environment variable | Value | Required |
+|-----------------|----------------|----------------------------------------|
+| JAVA_HOME       | JDK installation path | Required |
+| SEATUNNEL_HOME  | Seatunnel installation path | Required |
+| SPARK_HOME | Spark installation path | Required when Seatunnel runs on Spark |
+| FLINK_HOME | Flink installation path | Required when Seatunnel runs on Flink |
+
+Table 1-1 Environment configuration checklist
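+
+As a reference, these variables can be set in the deployment user's shell profile. A minimal sketch (all installation paths below are assumptions; replace them with your actual directories):
+
+```shell
+# Assumed installation paths -- adjust to your environment
+export JAVA_HOME=/usr/local/jdk1.8.0
+export SEATUNNEL_HOME=/opt/seatunnel/apache-seatunnel-incubating-2.1.2-bin
+# Only required when Seatunnel runs on Spark
+export SPARK_HOME=/opt/spark
+# Only required when Seatunnel runs on Flink
+export FLINK_HOME=/opt/flink
+```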
+
+| Linkis variable name | Variable content | Required |
+| --------------------------- | ---------------------------------------------------------- | ------------------------------------------------------------ |
+| wds.linkis.engine.seatunnel.plugin.home | Seatunnel installation path | Yes |
+
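+The Linkis-side variable in the table above can also be set globally instead of being passed with every task; a minimal sketch, assuming it is added to `${LINKIS_HOME}/conf/linkis.properties` (whether you manage configuration this way depends on your deployment):
+
+```shell
+# Append the Seatunnel engine home to the Linkis properties file (path is an assumption)
+echo "wds.linkis.engine.seatunnel.plugin.home=/opt/linkis/seatunnel" >> ${LINKIS_HOME}/conf/linkis.properties
+```
+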
+### 1.2 Engine environment verification
+
+Take executing a `Spark` task as an example
+
+```shell
+cd $SEATUNNEL_HOME
+./bin/start-seatunnel-spark.sh --master local[4] --deploy-mode client --config ./config/spark.batch.conf.template
+```
+The output is as follows:
+
+![](./images/check-seatunnel.png)
+
+## 2. Engine plugin deployment
+
+### 2.1 Engine plugin preparation (choose one of two) [non-default engine](./overview.md)
+
+Option 1: Download the engine plugin package directly
+
+[Linkis engine plugin download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Option 2: Compile the engine plugin yourself (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/seatunnel/
+mvn clean install
+# The compiled engine plugin package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/seatunnel/target/out/
+```
+[EngineConnPlugin engine plugin installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load the engine plugin
+
+Upload the engine package from 2.1 to the engine directory on the server
+```bash 
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── seatunnel
+│   ├── dist
+│   │   └── v2.1.2
+│   │       ├── conf
+│   │       └── lib
+│   └── plugin
+│       └── 2.1.2
+```
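+
+A minimal sketch of the upload step itself, assuming the plugin was compiled as in option 2 of 2.1 and that `target/out/` contains a `seatunnel` directory (the paths are assumptions):
+
+```shell
+# Copy the compiled plugin directory into the Linkis engine plugin directory
+cp -r ${linkis_code_dir}/linkis-engineconn-plugins/seatunnel/target/out/seatunnel \
+   ${LINKIS_HOME}/lib/linkis-engineplugins/
+```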
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Refresh by restart
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+#### 2.3.2 Check whether the engine refreshed successfully
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time at which the refresh was triggered.
+
+```sql
+# Log in to the `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submit tasks via `Linkis-cli`
+
+
+```shell
+sh ./bin/linkis-cli --mode once -code 'test' -engineType seatunnel-2.1.2 -codeType sspark -labelMap userCreator=hadoop-seatunnel -labelMap engineConnMode=once -jobContentMap code='env {
+   spark.app.name = "SeaTunnel"
+   spark.executor.instances = 2
+   spark.executor.cores = 1
+   spark.executor.memory = "1g"
+   }
+   source { 
+     Fake {
+       result_table_name = "my_dataset"
+     }
+   }
+   transform {}
+   sink {Console {}}' -jobContentMap master=local[4] -jobContentMap deploy-mode=client -sourceMap jobName=OnceJobTest -submitUser hadoop -proxyUser hadoop
+```
+
+### 3.2 Submit tasks via OnceEngineConn
+
+OnceEngineConn calls the createEngineConn interface of LinkisManager through LinkisManagerClient and sends the code to the created Seatunnel engine, which then starts executing it. Using the client is also very simple: first create a new maven project, or introduce the following dependency into your project
+
+```xml
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-computation-client</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+
+**Example code**
+```java
+package org.apache.linkis.computation.client;
+import org.apache.linkis.common.conf.Configuration;
+import org.apache.linkis.computation.client.once.simple.SubmittableSimpleOnceJob;
+import org.apache.linkis.computation.client.utils.LabelKeyUtils;
+public class SeatunnelOnceJobTest {
+    public static void main(String[] args) {
+        LinkisJobClient.config().setDefaultServerUrl("http://ip:9001");
+        String code =
+                "\n"
+                        + "env {\n"
+                        + "  spark.app.name = \"SeaTunnel\"\n"
+                        + "  spark.executor.instances = 2\n"
+                        + "  spark.executor.cores = 1\n"
+                        + "  spark.executor.memory = \"1g\"\n"
+                        + "}\n"
+                        + "\n"
+                        + "source {\n"
+                        + "  Fake {\n"
+                        + "    result_table_name = \"my_dataset\"\n"
+                        + "  }\n"
+                        + "\n"
+                        + "}\n"
+                        + "\n"
+                        + "transform {\n"
+                        + "}\n"
+                        + "\n"
+                        + "sink {\n"
+                        + "  Console {}\n"
+                        + "}";
+        SubmittableSimpleOnceJob onceJob =
+                LinkisJobClient.once()
+                        .simple()
+                        .builder()
+                        .setCreateService("seatunnel-Test")
+                        .setMaxSubmitTime(300000)
+                        .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY(), "seatunnel-2.1.2")
+                        .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY(), "hadoop-seatunnel")
+                        .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY(), "once")
+                        .addStartupParam(Configuration.IS_TEST_MODE().key(), true)
+                        .addExecuteUser("hadoop")
+                        .addJobContent("runType", "sspark")
+                        .addJobContent("code", code)
+                        .addJobContent("master", "local[4]")
+                        .addJobContent("deploy-mode", "client")
+                        .addSource("jobName", "OnceJobTest")
+                        .build();
+        onceJob.submit();
+        System.out.println(onceJob.getId());
+        onceJob.waitForCompleted();
+        System.out.println(onceJob.getStatus());
+        LinkisJobMetrics jobMetrics = onceJob.getJobMetrics();
+        System.out.println(jobMetrics.getMetrics());
+    }
+}
+```
+## 4. Engine configuration description
+
+### 4.1 Default configuration description
+
+| Configuration | Default value | Description | Required |
+| -------------------------------------- | --------------------- | ------------------------------------------- | -------- |
+| wds.linkis.engine.seatunnel.plugin.home | /opt/linkis/seatunnel | Seatunnel installation path | true |
+### 4.2 Configuration modification
+
+If the default parameters do not meet your needs, there are several ways to configure some basic parameters
+
+#### 4.2.1 Client configuration parameters
+
+```shell
+sh ./bin/linkis-cli --mode once -code 'test'  \
+-engineType seatunnel-2.1.2 -codeType sspark  \
+-labelMap userCreator=hadoop-seatunnel -labelMap engineConnMode=once \
+-jobContentMap code='env {
+   spark.app.name = "SeaTunnel"
+   spark.executor.instances = 2
+   spark.executor.cores = 1
+   spark.executor.memory = "1g"
+   }
+   source { 
+     Fake {
+       result_table_name = "my_dataset"
+     }
+   }
+   transform {}
+   sink {Console {}}' -jobContentMap master=local[4] \
+   -jobContentMap deploy-mode=client \
+   -sourceMap jobName=OnceJobTest  \
+   -runtimeMap wds.linkis.engine.seatunnel.plugin.home=/opt/linkis/seatunnel \
+   -submitUser hadoop -proxyUser hadoop 
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+    "executionContent": {"code": 'env {
+    spark.app.name = "SeaTunnel"
+    spark.executor.instances = 2
+    spark.executor.cores = 1
+    spark.executor.memory = "1g"
+    }
+    source { 
+        Fake {
+            result_table_name = "my_dataset"
+        }
+    }
+    transform {}
+    sink {Console {}}', 
+    "runType":  "sql"},
+    "params": {
+        "variable": {},
+        "configuration": {
+                "runtime": {
+                    "wds.linkis.engine.seatunnel.plugin.home":"/opt/linkis/seatunnel"
+                    }
+                }
+        },
+    "labels": {
+        "engineType": "seatunnel-2.1.2",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/trino.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/trino.md
new file mode 100644
index 0000000000..85068fd529
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/trino.md
@@ -0,0 +1,243 @@
+---
+title: Trino Engine
+sidebar_position: 13
+---
+
+This article mainly introduces the installation, usage, and configuration of the `Trino` engine plugin in `Linkis`.
+
+
+## 1. Prerequisites
+
+### 1.1 Engine installation
+
+If you want to use the `Trino` engine on your `Linkis` service, you need to install the `Trino` service and make sure the service is available.
+
+### 1.2 Service verification
+
+```shell
+# Prepare trino-cli
+wget https://repo1.maven.org/maven2/io/trino/trino-cli/374/trino-cli-374-executable.jar
+mv trino-cli-374-executable.jar trino-cli
+chmod +x trino-cli
+
+# Execute a task
+./trino-cli --server localhost:8080 --execute 'show tables from system.jdbc'
+
+# The following output indicates that the service is available
+"attributes"
+"catalogs"
+"columns"
+"procedure_columns"
+"procedures"
+"pseudo_columns"
+"schemas"
+"super_tables"
+"super_types"
+"table_types"
+"tables"
+"types"
+"udts"
+```
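+
+If the tasks you plan to submit through Linkis target a specific catalog and schema, it can be worth verifying those too. A sketch, assuming a `hive` catalog with a `default` schema exists on your cluster:
+
+```shell
+# Verify that the catalog and schema used later in this article are reachable
+./trino-cli --server localhost:8080 --catalog hive --schema default --execute 'show tables'
+```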
+
+## 2. Engine plugin deployment
+
+### 2.1 Engine plugin preparation (choose one of two) [non-default engine](./overview.md)
+
+Option 1: Download the engine plugin package directly
+
+[Linkis engine plugin download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Option 2: Compile the engine plugin yourself (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/trino/
+mvn clean install
+# The compiled engine plugin package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/trino/target/out/
+```
+[EngineConnPlugin engine plugin installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load the engine plugin
+
+Upload the engine package from 2.1 to the engine directory on the server
+```bash 
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── trino
+│   ├── dist
+│   │   └── v371
+│   │       ├── conf
+│   │       └── lib
+│   └── plugin
+│       └── 371
+```
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Refresh by restart
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
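+
+As an alternative to a full restart, Linkis also exposes a restful interface for reloading engine materials; the sketch below assumes the default gateway port 9001 and a valid login cookie, and the exact path should be verified against your Linkis version:
+
+```shell
+# Trigger a hot reload of all engine plugin materials (path and cookie are assumptions)
+curl --cookie "bdp-user-ticket-id=${TICKET_ID}" "http://127.0.0.1:9001/api/rest_j/v1/engineplugin/refreshAll"
+```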
+
+#### 2.3.2 Check whether the engine refreshed successfully
+You can check whether the `last_update_time` of the `linkis_cg_engine_conn_plugin_bml_resources` table in the database is the time at which the refresh was triggered.
+
+```sql
+# Log in to the `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submit tasks via `Linkis-cli`
+
+```shell
+ sh ./bin/linkis-cli -submitUser hadoop \
+ -engineType trino-371 -codeType sql \
+ -code 'select * from system.jdbc.schemas limit 10' \
+ -runtimeMap linkis.trino.url=http://127.0.0.1:8080
+```
+
+If none of the management console, the task interface, or the configuration file has been configured (see 4.2 for configuration methods), you can configure it in the `Linkis-cli` client via the `-runtimeMap` attribute
+
+```shell
+sh ./bin/linkis-cli -engineType trino-371 \
+-codeType  sql -code 'select * from system.jdbc.schemas limit 10;'  \
+-runtimeMap linkis.trino.url=http://127.0.0.1:8080 \
+-runtimeMap linkis.trino.catalog=hive \
+-runtimeMap linkis.trino.schema=default \
+-submitUser hadoop -proxyUser hadoop
+```
+
+For more `Linkis-Cli` command parameters, see: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+## 4. Engine configuration description
+
+### 4.1 Default configuration description
+
+| Configuration | Default value | Description | Required |
+| -------------------------------------- | --------------------- | ------------------------------------------- | -------- |
+| linkis.trino.url | http://127.0.0.1:8080 | Trino cluster connection URL | true |
+| linkis.trino.default.limit | 5000 | Result set row limit | false |
+| linkis.trino.http.connectTimeout | 60 | Connection timeout (seconds) | false |
+| linkis.trino.http.readTimeout | 60 | Transfer timeout (seconds) | false |
+| linkis.trino.resultSet.cache.max | 512k | Result set buffer size | false |
+| linkis.trino.user | null | Username | false |
+| linkis.trino.password | null | Password | false |
+| linkis.trino.passwordCmd | null | Password callback command | false |
+| linkis.trino.catalog | system | Catalog | false |
+| linkis.trino.schema | null | Schema | false |
+| linkis.trino.ssl.insecured | false | Whether to ignore the server's SSL certificate | false |
+| linkis.engineconn.concurrent.limit | 100 | Maximum engine concurrency | false |
+| linkis.trino.ssl.key.store | null | keystore path | false |
+| linkis.trino.ssl.keystore.password | null | keystore password | false |
+| linkis.trino.ssl.keystore.type | null | keystore type | false |
+| linkis.trino.ssl.truststore | null | truststore path | false |
+| linkis.trino.ssl.truststore.type | null | truststore type | false |
+| linkis.trino.ssl.truststore.password | null | truststore password | false |
+
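+Credentials and connection options from the table above can be supplied per task in the same way as the URL. A minimal sketch via `Linkis-cli` (the user, password, and URL values are placeholders):
+
+```shell
+sh ./bin/linkis-cli -engineType trino-371 -codeType sql \
+-code 'select * from system.jdbc.schemas limit 10;' \
+-runtimeMap linkis.trino.url=http://127.0.0.1:8080 \
+-runtimeMap linkis.trino.user=hadoop \
+-runtimeMap linkis.trino.password=your_password \
+-submitUser hadoop -proxyUser hadoop
+```
+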
+### 4.2 Configuration modification
+
+If the default parameters do not meet your needs, there are several ways to configure some basic parameters
+
+#### 4.2.1 Management console configuration
+
+![](./images/trino-config.png)
+
+Note: after modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` for it to take effect (similarly for other tags), e.g.:
+
+```shell
+sh ./bin/linkis-cli -creator IDE -submitUser hadoop \
+ -engineType trino-371 -codeType sql \
+ -code 'select * from system.jdbc.schemas limit 10' \
+ -runtimeMap linkis.trino.url=http://127.0.0.1:8080
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of http request parameters
+{
+    "executionContent": {"code": "select * from system.jdbc.schemas limit 
10;", "runType":  "sql"},
+    "params": {
+                    "variable": {},
+                    "configuration": {
+                            "runtime": {
+                                "linkis.trino.url":"http://127.0.0.1:8080";,
+                                "linkis.trino.catalog ":"hive",
+                                "linkis.trino.schema ":"default"
+                                }
+                            }
+                    },
+    "labels": {
+        "engineType": "trino-371",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
+
+### 4.3 Engine-related data tables
+
+`Linkis` manages engines through engine tags; the data tables involved are as follows.
+
+```
+linkis_ps_configuration_config_key: inserts the keys and default values of the engine's configuration parameters
+linkis_cg_manager_label: inserts the engine label, such as trino-371
+linkis_ps_configuration_category: inserts the catalog association of the engine
+linkis_ps_configuration_config_value: inserts the configuration to be displayed by the engine
+linkis_ps_configuration_key_engine_relation: the association between configuration keys and engines
+```
+
+The initial engine-related data in the tables is as follows
+
+
+```sql
+-- set variable
+SET @TRINO_LABEL="trino-371";
+SET @TRINO_IDE=CONCAT('*-IDE,',@TRINO_LABEL);
+SET @TRINO_ALL=CONCAT('*-*,',@TRINO_LABEL);
+
+-- engine label
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @TRINO_IDE, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @TRINO_ALL, 'OPTIONAL', 2, now(), now());
+select @label_id := id from `linkis_cg_manager_label` where label_value = @TRINO_IDE;
+insert into `linkis_ps_configuration_category` (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.default.limit', 'Limit on the number of rows returned by a query result set', 'Result set row limit', '5000', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.http.connectTimeout', 'Timeout for connecting to the Trino server', 'Connection timeout (seconds)', '60', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.http.readTimeout', 'Timeout waiting for the Trino server to return data', 'Transfer timeout (seconds)', '60', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.resultSet.cache.max', 'Trino result set buffer size', 'Result set buffer', '512k', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.url', 'Trino server URL', 'Trino server URL', 'http://127.0.0.1:9401', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.user', 'Username for connecting to the Trino query service', 'Username', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.password', 'Password for connecting to the Trino query service', 'Password', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.passwordCmd', 'Password callback command for connecting to the Trino query service', 'Password callback command', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.catalog', 'Catalog used when connecting to Trino for queries', 'Catalog', 'system', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.schema', 'Default schema for connecting to the Trino query service', 'Schema', '', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.insecured', 'Whether to ignore the server SSL certificate', 'Verify SSL certificate', 'false', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.engineconn.concurrent.limit', 'Maximum engine concurrency', 'Maximum engine concurrency', '100', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.keystore', 'Trino server SSL keystore path', 'keystore path', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.keystore.type', 'Trino server SSL keystore type', 'keystore type', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.keystore.password', 'Trino server SSL keystore password', 'keystore password', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.truststore', 'Trino server SSL truststore path', 'truststore path', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.truststore.type', 'Trino server SSL truststore type', 'truststore type', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.trino.ssl.truststore.password', 'Trino server SSL truststore password', 'truststore password', 'null', 'None', '', 'trino', 0, 0, 1, 'Datasource configuration');
+
+
+-- key engine relation
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as config_key_id, label.id AS engine_type_label_id FROM `linkis_ps_configuration_config_key` config
+INNER JOIN `linkis_cg_manager_label` label ON config.engine_conn_type = 'trino' and label_value = @TRINO_ALL);
+
+-- engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select relation.config_key_id AS config_key_id, '' AS config_value, relation.engine_type_label_id AS config_label_id FROM `linkis_ps_configuration_key_engine_relation` relation
+INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = label.id AND label.label_value = @TRINO_ALL);
+```
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md
index bb690bd4c8..ff002d3057 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/release.md
@@ -2,55 +2,36 @@
 title: Version Overview
 sidebar_position: 0.1
 --- 
-- [Linkis containerized build process](/development/build-docker.md)
-- [Quick deployment of the Linkis containerized trial version LDH](/deployment/deploy-to-kubernetes.md)
-- [Linkis containerized development and debugging](/development/debug-with-helm-charts.md)
-- [Details of the PES public service group service merge](/blog/2022/10/09/linkis-service-merge)
-- [Session support for Redis shared storage](/user-guide/sso-with-redis.md)
+- [Trino engine usage instructions](/engine-usage/trino.md)
+- [Seatunnel engine usage instructions](/engine-usage/seatunnel.md)
+- [Multi-datasource management in the Linkis management console]
+- [Multi-datasource usage documentation]
+- [Release notes for this version](/download/release-notes-1.3.1)
 
 
 ## Parameter Changes
 
 | Module (service name) | Type | Parameter name | Default value | Description |
 | ----------- | ----- | -------------------------------------------------------- | ---------------- | ------------------------------------------------------- |
-| common | Added |linkis.session.redis.host| 127.0.0.1 | redis connection address |
-| common | Added |linkis.session.redis.port| 6379 | redis connection port |
-| common | Added |linkis.session.redis.password| test123 | redis connection password |
-| common | Added |linkis.session.redis.cache.enabled| false | redis sso switch |
-| ps-cs | Added | wds.linkis.server.restful.scan.packages | org.apache.linkis.cs.server.restful | restful package scan path |
-| ps-cs | Added | wds.linkis.server.mybatis.mapperLocations | classpath*:org/apache/linkis/cs/persistence/dao/impl/*.xml | mapper scan path |
-| ps-cs | Added | wds.linkis.server.mybatis.typeAliasesPackage | org.apache.linkis.cs.persistence.entity | package path of table-mapped entity classes |
-| ps-cs | Added | wds.linkis.server.mybatis.BasePackage | org.apache.linkis.cs.persistence.dao | Mybatis package scan path |
-| ps-cs | Added | spring.server.port | 9108 | service port |
-| ps-cs | Added | spring.eureka.instance.metadata-map.route | cs_1_dev | ps-cs route prefix (must start with cs_) |
-| ps-cs | Added | wds.linkis.cs.deserialize.replace_package_header.enable | false | whether to replace the package header during deserialization |
-| ps-data-source-manager | Added | wds.linkis.server.restful.scan.packages | org.apache.linkis.datasourcemanager.core.restful | restful package scan path |
-| ps-data-source-manager | Added | wds.linkis.server.mybatis.mapperLocations | classpath:org/apache/linkis/datasourcemanager/core/dao/mapper/*.xml | mapper scan path |
-| ps-data-source-manager | Added | wds.linkis.server.mybatis.typeAliasesPackage | org.apache.linkis.datasourcemanager.common.domain,org.apache.linkis.datasourcemanager.core.vo | package path of table-mapped entity classes |
-| ps-data-source-manager | Added | wds.linkis.server.mybatis.BasePackage | org.apache.linkis.datasourcemanager.core.dao | Mybatis package scan path |
-| ps-data-source-manager | Added | hive.meta.url | None | hive connection address |
-| ps-data-source-manager | Added | hive.meta.user | None | hive connection user |
-| ps-data-source-manager | Added | hive.meta.password | None | hive connection password |
-| ps-data-source-manager | Added | wds.linkis.metadata.hive.encode.enabled | false | whether to enable BASE64 encoding and decoding |
-| ps-data-source-manager | Added | spring.server.port | 9109 | service port |
-| ps-data-source-manager | Added | spring.spring.main.allow-bean-definition-overriding | true | whether to allow bean definition overriding |
-| ps-data-source-manager | Added | spring.jackson.serialization.FAIL_ON_EMPTY_BEANS | false | whether to allow empty beans |
-| ps-data-source-manager | Added | wds.linkis.server.mdm.service.instance.expire-in-seconds | 1800 | service instance expiration time |
-| ps-data-source-manager | Added | wds.linkis.server.restful.scan.packages | org.apache.linkis.metadata.query.server.restful | restful package scan path |
-| ps-data-source-manager | Added | wds.linkis.server.dsm.app.name | linkis-ps-data-source-manager | service name |
-| ps-data-source-manager | Added | spring.server.port | 9110 | service port |
-| ps-publicservice | Modified | wds.linkis.server.restful.scan.packages | org.apache.linkis.cs.server.restful,org.apache.linkis.datasourcemanager.core.restful,org.apache.linkis.metadata.query.server.restful,org.apache.linkis.jobhistory.restful,org.apache.linkis.variable.restful,org.apache.linkis.configuration.restful,org.apache.linkis.udf.api,org.apache.linkis.filesystem.restful,org.apache.linkis.filesystem.restful,org.apache.linkis.instance.label.restful,org.apache.linkis.metadata.restful.api [...]
-|ps-publicservice|Modified|wds.linkis.server.mybatis.mapperLocations|classpath*:org/apache/linkis/cs/persistence/dao/impl/*.xml,classpath:org/apache/linkis/datasourcemanager/core/dao/mapper/*.xml,classpath:org/apache/linkis/jobhistory/dao/impl/*.xml,classpath:org/apache/linkis/variable/dao/impl/*.xml,classpath:org/apache/linkis/configuration/dao/impl/*.xml,classpath:org/apache/linkis/udf/dao/impl/*.xml,classpath:org/apache/linkis/instance/label/dao/impl/*.xml,classpath:org/apache/linkis/metada [...]
-|ps-publicservice|Modified|wds.linkis.server.mybatis.typeAliasesPackage|org.apache.linkis.cs.persistence.entity,org.apache.linkis.datasourcemanager.common.domain,org.apache.linkis.datasourcemanager.core.vo,org.apache.linkis.configuration.entity,org.apache.linkis.jobhistory.entity,org.apache.linkis.udf.entity,org.apache.linkis.variable.entity,org.apache.linkis.instance.label.entity,org.apache.linkis.manager.entity,org.apache.linkis.metadata.domain,org.apache.linkis.bml.entity| package path of table-mapped entity classes |
-|ps-publicservice|Modified|wds.linkis.server.mybatis.BasePackage|org.apache.linkis.cs.persistence.dao,org.apache.linkis.datasourcemanager.core.dao,org.apache.linkis.jobhistory.dao,org.apache.linkis.variable.dao,org.apache.linkis.configuration.dao,org.apache.linkis.udf.dao,org.apache.linkis.instance.label.dao,org.apache.linkis.metadata.hive.dao,org.apache.linkis.metadata.dao,org.apache.linkis.bml.dao,org.apache.linkis.errorcode.server.dao,org.apache.linkis.publicservice.common.lock.dao| Mybatis package scan path |
-| ps-publicservice | Added | wds.linkis.cs.deserialize.replace_package_header.enable | false | whether to replace the package header during deserialization |
-| ps-publicservice | Added | wds.linkis.rpc.conf.enable.local.message | true | whether to enable local messages |
-| ps-publicservice | Added | wds.linkis.rpc.conf.local.app.list | linkis-ps-publicservice | local application list |
-| ps-publicservice | Added | spring.server.port | 9105 | service port |
-| ps-publicservice | Added | spring.spring.main.allow-bean-definition-overriding | true | whether to allow bean definition overriding |
-| ps-publicservice | Added | spring.spring.jackson.serialization.FAIL_ON_EMPTY_BEANS | false | whether to allow empty beans |
-| ps-publicservice | Added | spring.eureka.instance.metadata-map.route | cs_1_dev | route prefix (must start with cs_) |
-
+| ec-trino | Added | linkis.trino.url | http://127.0.0.1:8080 | Trino cluster connection URL |
+| ec-trino | Added | linkis.trino.default.limit | 5000 | result set row limit |
+| ec-trino | Added | linkis.trino.http.connectTimeout | 60 | connection timeout (seconds) |
+| ec-trino | Added | linkis.trino.http.readTimeout | 60 | transfer timeout (seconds) |
+| ec-trino | Added | linkis.trino.resultSet.cache.max | 512k | result set buffer |
+| ec-trino | Added | linkis.trino.user | null | username |
+| ec-trino | Added | linkis.trino.password | null | password |
+| ec-trino | Added | linkis.trino.passwordCmd | null | password callback command |
+| ec-trino | Added | linkis.trino.catalog | system | Catalog |
+| ec-trino | Added | linkis.trino.schema | null | Schema |
+| ec-trino | Added | linkis.trino.ssl.insecured | false | verify SSL certificate |
+| ec-trino | Added | linkis.engineconn.concurrent.limit | 100 | maximum engine concurrency |
+| ec-trino | Added | linkis.trino.ssl.key.store | null | keystore path |
+| ec-trino | Added | linkis.trino.ssl.keystore.password | null | keystore password |
+| ec-trino | Added | linkis.trino.ssl.keystore.type | null | keystore type |
+| ec-trino | Added | linkis.trino.ssl.truststore | null | truststore path |
+| ec-trino | Added | linkis.trino.ssl.truststore.type | null | truststore type |
+| ec-trino | Added | linkis.trino.ssl.truststore.password | null | truststore password |
+| ec-seatunnel | Added | wds.linkis.engine.seatunnel.plugin.home | /opt/linkis/seatunnel | Seatunnel installation path |
 
 ## Database Table Changes
-For details, see the upgrade schema file `db/upgrade/1.3.0_schema` in the corresponding branch of the code repository (https://github.com/apache/incubator-linkis)
+For details, see the upgrade schema file `db/upgrade/1.3.1_schema` in the corresponding branch of the code repository (https://github.com/apache/incubator-linkis)


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

