This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 2de1ec8700a feat-4880: add upgrade from 1.3.2 to 1.4.0 guide (#742)
2de1ec8700a is described below

commit 2de1ec8700ad24e29ee363ce03dc0945fbff957e
Author: aiceflower <[email protected]>
AuthorDate: Sat Jul 29 11:07:23 2023 +0800

    feat-4880: add upgrade from 1.3.2 to 1.4.0 guide (#742)
    
    * upgrade guide
    
    * update doc
    
    * fix error link
    
    ---------
    
    Co-authored-by: Casion <[email protected]>
---
 docs/deployment/deploy-quick.md                    | 140 +++++++--------
 docs/feature/overview.md                           |   7 +-
 docs/upgrade/upgrade-to-1.4.0-guide.md             | 193 +++++++++++++++++++++
 docs/user-guide/udf-function.md                    |  25 ++-
 .../current/deployment/deploy-quick.md             | 151 ++++++++--------
 .../current/feature/overview.md                    |   7 +-
 .../current/upgrade/upgrade-to-1.4.0-guide.md      | 193 +++++++++++++++++++++
 .../current/user-guide/udf-function.md             |  24 ++-
 .../version-1.3.2/user-guide/udf-function.md       |  24 ++-
 .../version-1.3.2/user-guide/udf-function.md       |  26 ++-
 10 files changed, 641 insertions(+), 149 deletions(-)

diff --git a/docs/deployment/deploy-quick.md b/docs/deployment/deploy-quick.md
index d67c041f8fe..93d959146f8 100644
--- a/docs/deployment/deploy-quick.md
+++ b/docs/deployment/deploy-quick.md
@@ -230,71 +230,6 @@ HADOOP_KERBEROS_ENABLE=true
 HADOOP_KEYTAB_PATH=/appcom/keytab/
 ```
 
-#### S3 mode (optional)
-> Currently, it is possible to store engine execution logs and results to S3 
in Linkis.
->
-> Note: Linkis has not adapted permissions for S3, so it is not possible to 
grant authorization for it.
-
-`vim $LINKIS_HOME/conf/linkis.properties`
-```shell script
-# s3 file system
-linkis.storage.s3.access.key=xxx
-linkis.storage.s3.secret.key=xxx
-linkis.storage.s3.endpoint=http://xxx.xxx.xxx.xxx:xxx
-linkis.storage.s3.region=xxx
-linkis.storage.s3.bucket=xxx
-```
-
-`vim $LINKIS_HOME/conf/linkis-cg-entrance.properties`
-```shell script
-wds.linkis.entrance.config.log.path=s3:///linkis/logs
-wds.linkis.resultSet.store.path=s3:///linkis/results
-```
-
-### 2.4 Configure Token
-
-The original default Token of Linkis is fixed and the length is too short, 
which has security risks. Therefore, Linkis 1.3.2 changes the original fixed 
Token to random generation and increases the Token length.
-
-New Token format: application abbreviation - 32-bit random number, such as 
BML-928a721518014ba4a28735ec2a0da799.
-
-Token may be used in the Linkis service itself, such as executing tasks 
through Shell, uploading BML, etc., or it may be used in other applications, 
such as DSS, Qualitis and other applications to access Linkis.
-
-#### View Token
-**View via SQL statement**
-```sql
-select * from linkis_mg_gateway_auth_token;
-```
-**View via Admin Console**
-
-Log in to the management console -> basic data management -> token management
-![](/Images/deployment/token-list.png)
-
-#### Check Token configuration
-
-When the Linkis service itself uses Token, the Token in the configuration file 
must be consistent with the Token in the database. Match by applying the short 
name prefix.
-
-$LINKIS_HOME/conf/linkis.properites file Token configuration
-
-```
-linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
-
-wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
-
-wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
-wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
-
-wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
-```
-
-$LINKIS_HOME/conf/linkis-cli/linkis-cli.properties file Token configuration
-```
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-```
-
 #### Notice
 
 **Full installation**
@@ -339,12 +274,12 @@ Because the mysql-connector-java driver is under the 
GPL2.0 protocol, it does no
 
 :::
 
-To download the mysql driver, take version 5.1.49 as an example: [download 
link](https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.49/mysql-connector-java-5.1.49.jar)
+To download the mysql driver, take version 8.0.28 as an example: [download 
link](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar)
 
 Copy the mysql driver package to the lib package
 ````
-cp mysql-connector-java-5.1.49.jar 
${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
-cp mysql-connector-java-5.1.49.jar 
${LINKIS_HOME}/lib/linkis-commons/public-module/
+cp mysql-connector-java-8.0.28.jar 
${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+cp mysql-connector-java-8.0.28.jar 
${LINKIS_HOME}/lib/linkis-commons/public-module/
 ````
 
 ### 3.3 Add postgresql driver package (Optional)
@@ -381,6 +316,27 @@ If you are upgrading to Linkis. Deploy DSS or other 
projects at the same time, b
 echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> linkis.properties
 ````
 
+#### S3 mode (optional)
+> Currently, Linkis can store engine execution logs and results on S3.
+>
+> Note: Linkis has not adapted permissions for S3, so it is not possible to grant authorization on S3 data.
+
+`vim linkis.properties`
+```shell script
+# s3 file system
+linkis.storage.s3.access.key=xxx
+linkis.storage.s3.secret.key=xxx
+linkis.storage.s3.endpoint=http://xxx.xxx.xxx.xxx:xxx
+linkis.storage.s3.region=xxx
+linkis.storage.s3.bucket=xxx
+```
+
+`vim linkis-cg-entrance.properties`
+```shell script
+wds.linkis.entrance.config.log.path=s3:///linkis/logs
+wds.linkis.resultSet.store.path=s3:///linkis/results
+```
+
 ### 3.5 Start the service
 ```shell script
 sh sbin/linkis-start-all.sh
@@ -409,6 +365,52 @@ Note: LINKIS-PS-CS, 
LINKIS-PS-DATA-SOURCE-MANAGER、LINKIS-PS-METADATAMANAGER se
 If any services are not started, you can view detailed exception logs in the 
corresponding log/${service name}.log file.
 
 
+### 3.8 Configure Token
+
+The original default Tokens of Linkis were fixed and too short, which posed security risks. Therefore, Linkis 1.3.2 changed the fixed Tokens to randomly generated ones and increased the Token length.
+
+New Token format: application abbreviation - 32-character random string, such as BML-928a721518014ba4a28735ec2a0da799.
+
+Tokens may be used by the Linkis services themselves, for example when executing tasks through the Shell or uploading to BML, or by other applications such as DSS and Qualitis when they access Linkis.
+
+#### View Token
+**View via SQL statement**
+```sql
+select * from linkis_mg_gateway_auth_token;
+```
+**View via Admin Console**
+
+Log in to the management console -> basic data management -> token management
+![](/Images/deployment/token-list.png)
+
+#### Check Token configuration
+
+When the Linkis services themselves use Tokens, the Tokens in the configuration files must be consistent with the Tokens in the database. They are matched by the application abbreviation prefix.
+
+Token configuration in the $LINKIS_HOME/conf/linkis.properties file
+
+```
+linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
+
+wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
+
+wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
+wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
+
+wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
+```
+
+Token configuration in the $LINKIS_HOME/conf/linkis-cli/linkis-cli.properties file
+```
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+```
+
+When other applications use Tokens, their Token configuration needs to be modified to match the Tokens in the database.
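+
+For example, assuming the table's `token_name` column (as returned by the `select *` query above), the Token that another application such as DSS should use can be looked up by its application prefix:
+```sql
+-- token_name is assumed to be the column holding the Token string
+select * from linkis_mg_gateway_auth_token where token_name like 'BML-%';
+```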
+
 ## 4. Install the web frontend
 The web side uses nginx as the static resource server, and the access request 
process is:
 `Linkis console request->nginx ip:port->linkis-gateway ip:port->other services`
@@ -738,7 +740,7 @@ For details, please refer to the CDH adaptation blog post
 Cookie: bdp-user-ticket-id=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 ````
 - Method 3 Add a static Token to the http request header
-  Token is configured in conf/token.properties
+  Token is configured in conf/linkis.properties
   Such as: TEST-AUTH=hadoop,root,user01
 ```shell script
 Token-Code: TEST-AUTH
diff --git a/docs/feature/overview.md b/docs/feature/overview.md
index 6937df5ecbe..e7a06984fd0 100644
--- a/docs/feature/overview.md
+++ b/docs/feature/overview.md
@@ -5,9 +5,10 @@ sidebar_position: 0.1
 
 - [Base engine dependencies, compatibility, default version 
optimization](./base-engine-compatibilty.md)
 - [Hive engine connector supports concurrent 
tasks](./hive-engine-support-concurrent.md)
-- [add Impala plugin support](../engine-usage/impala.md)
-- [linkis-storage supports S3 file 
system](../deployment/deploy-quick#s3-mode-optional)
-- [Add postgresql database 
support](../deployment/deploy-quick#33-add-postgresql-driver-package-optional)
+- [Support more data sources](./spark-etl.md)
+- [linkis-storage supports S3 file systems (Experimental 
version)](../deployment/deploy-quick#s3-mode-optional)
+- [Add postgresql database support (Experimental 
version)](../deployment/deploy-quick#22-configure-database)
+- [Add impala engine support (Experimental version)](../engine-usage/impala.md)
 - [Spark ETL enhancements](./spark-etl.md)
 - [Generate SQL from data source](./datasource-generate-sql.md)
 - [Other feature description](./other.md)
diff --git a/docs/upgrade/upgrade-to-1.4.0-guide.md 
b/docs/upgrade/upgrade-to-1.4.0-guide.md
new file mode 100644
index 00000000000..0c1761d1aaf
--- /dev/null
+++ b/docs/upgrade/upgrade-to-1.4.0-guide.md
@@ -0,0 +1,193 @@
+---
+title: Upgrade Guide for 1.4.0
+sidebar_position: 3
+---
+
+> Linkis 1.4.0 has made many adjustments to the Linkis services and code. This article introduces the relevant precautions for upgrading to Linkis 1.4.0.
+
+## 1. Precautions
+
+**1) If you are using Linkis for the first time, you can ignore this chapter and refer to the [Single-machine deployment](../deployment/deploy-quick.md) guide to deploy Linkis.**
+
+**2) If you have installed a version earlier than Linkis 1.4.0 but do not want to keep the original data, you can also refer to the [Single-machine deployment](../deployment/deploy-quick.md) guide to redeploy, and choose option 2 during installation to clear all data and rebuild the tables (see the prompt below).**
+```
+Do you want to clear Linkis table information in the database?
+ 1: Do not execute table-building statements
+ 2: Dangerous! Clear all data and rebuild the tables
+ other: exit
+
+Please input the choice: ## choice 2
+```
+**3) If you have installed a version of Linkis earlier than 1.4.0 but need to keep the original version's data, you can refer to this document to upgrade.**
+
+****
+
+## 2. Environment upgrade
+
+Linkis 1.4.0 upgrades the default dependency environments Hadoop, Hive, and Spark to 3.x: Hadoop is upgraded to 3.3.4, Hive to 3.1.3, and Spark to 3.2.1. Please upgrade these environments before performing the subsequent operations.
+
+Verify the upgraded versions with the following commands
+```
+echo $HADOOP_HOME
+/data/hadoop-3.3.4
+echo $HIVE_HOME
+/data/apache-hive-3.1.3-bin
+echo $SPARK_HOME
+/data/spark-3.2.1-bin-hadoop3.2
+```
+
+Before installation, please modify the Hadoop, Hive, and Spark configurations in the deploy-config/linkis-env.sh file to point to the upgraded directories. The specific items to modify are as follows:
+
+```
+#HADOOP
+HADOOP_HOME=${HADOOP_HOME:-"/appcom/Install/hadoop"}
+HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/appcom/config/hadoop-config"}
+
+## Hadoop env version
+HADOOP_VERSION=${HADOOP_VERSION:-"3.3.4"}
+
+#Hive
+HIVE_HOME=/appcom/Install/hive
+HIVE_CONF_DIR=/appcom/config/hive-config
+
+#Spark
+SPARK_HOME=/appcom/Install/spark
+SPARK_CONF_DIR=/appcom/config/spark-config
+
+```
+
+## 3. Service upgrade installation
+
+Because version 1.4.0 has changed substantially, the services need to be reinstalled when upgrading from an older version to 1.4.0.
+
+If you need to keep the old version's data, be sure to choose option 1 during installation to skip the table creation statements (see the prompt below).
+
+For installing Linkis 1.4.0, refer to [How to install quickly](../deployment/deploy-quick.md).
+
+```
+Do you want to clear Linkis table information in the database?
+ 1: Do not execute table-building statements
+ 2: Dangerous! Clear all data and rebuild the tables
+ other: exit
+
+Please input the choice: ## choice 1
+```
+
+## 4. Database upgrade
+  After the service installation is complete, the database tables need to be modified, including table structure changes and table data updates. Execute the DDL and DML scripts corresponding to the target upgrade version.
+  ```
+  # table structure changes
+  linkis-dist\package\db\upgrade\${version}_schema\mysql\linkis_ddl.sql
+  # table data changes
+  linkis-dist\package\db\upgrade\${version}_schema\mysql\linkis_dml.sql
+  ```
+Note that when upgrading, please execute the upgrade script in sequence, such 
as upgrading from version 1.3.1 to version 1.4.0: first execute the 1.3.2 upgrade DDL and DML scripts, and then the 1.4.0 upgrade DDL and DML scripts. This article takes the upgrade from 1.3.2 to 1.4.0 as an example.
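+
+A minimal sketch of running the upgrade scripts in order with the mysql client is shown below; the host, user, and database name are placeholders and should be replaced with your own values:
+```shell
+# placeholder connection parameters; adjust host, user and database to your environment
+mysql -h 127.0.0.1 -u linkis -p linkis < linkis-dist/package/db/upgrade/1.3.2_schema/mysql/linkis_ddl.sql
+mysql -h 127.0.0.1 -u linkis -p linkis < linkis-dist/package/db/upgrade/1.3.2_schema/mysql/linkis_dml.sql
+```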
+
+### 4.1 Table structure changes:
+
+Connect to the mysql database and execute the contents of the linkis-dist\package\db\upgrade\1.3.2_schema\mysql\linkis_ddl.sql script, which are as follows:
+
+```mysql-sql
+ALTER TABLE `linkis_cg_manager_service_instance` ADD COLUMN `identifier` 
varchar(32) COLLATE utf8_bin DEFAULT NULL;
+ALTER TABLE `linkis_cg_manager_service_instance` ADD COLUMN `ticketId` 
varchar(255) COLLATE utf8_bin DEFAULT NULL;
+ALTER TABLE `linkis_cg_ec_resource_info_record` MODIFY COLUMN metrics TEXT 
DEFAULT NULL COMMENT 'ec metrics';
+```
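+
+Optionally, the new columns can be verified with standard MySQL statements after the DDL has been executed (these are not part of the upgrade script):
+```sql
+SHOW COLUMNS FROM `linkis_cg_manager_service_instance` LIKE 'identifier';
+SHOW COLUMNS FROM `linkis_cg_manager_service_instance` LIKE 'ticketId';
+```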
+
+### 4.2 New sql that needs to be executed:
+
+Connect to the mysql database and execute the contents of the linkis-dist\package\db\upgrade\1.3.2_schema\mysql\linkis_dml.sql script, which are as follows:
+```sql
+-- Default version upgrade
+UPDATE linkis_ps_configuration_config_key SET default_value = 'python3' WHERE 
`key` = 'spark.python.version';
+UPDATE linkis_cg_manager_label SET label_value = '*-*,hive-3.1.3' WHERE 
label_value = '*-*,hive-2.3.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-IDE,hive-3.1.3' WHERE 
label_value = '*-IDE,hive-2.3.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-nodeexecution,hive-3.1.3' 
WHERE label_value = '*-nodeexecution,hive-2.3.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-*,spark-3.2.1' WHERE 
label_value = '*-*,spark-2.4.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-IDE,spark-3.2.1' WHERE 
label_value = '*-IDE,spark-2.4.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-Visualis,spark-3.2.1' 
WHERE label_value = '*-Visualis,spark-2.4.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-nodeexecution,spark-3.2.1' 
WHERE label_value = '*-nodeexecution,spark-2.4.3';
+
+-- Support for different data sources
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, 
`classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) 
VALUES ('tidb', 'tidb Database', 'tidb', 'Relational Database', '', 3, 'TiDB 
Database', 'TiDB', 'Relational Database');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'tidb';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, `value_type`, `scope`, `require`, `description`, `description_en`, `value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, `create_time`)
+VALUES (@data_source_type_id, 'address', 'Address', 'Address', NULL, 'TEXT', 
NULL, 0, 'Address(host1:port1,host2:port2...)', 'Address(host1:port1, 
host2:port2...)', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'host', 'Host', 'Host', NULL, 'TEXT', NULL, 1, 
'Host', 'Host', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'port', 'Port', 'Port', NULL, 'TEXT', NULL, 1, 
'Port', 'Port', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'driverClassName', 'Driver class name', 'Driver 
class name', 'com.mysql.jdbc.Driver', 'TEXT', NULL, 1, 'Driver class name 
(Driver class name)', 'Driver class name', NULL, NULL, NULL, NULL, now(), 
now()),
+       (@data_source_type_id, 'params', 'Connection params', 'Connection 
params', NULL, 'TEXT', NULL, 0, 'Input JSON format): {"param":"value" }', 
'Input JSON format: {"param":"value"}', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'username', 'Username', 'Username', NULL, 
'TEXT', NULL, 1, 'Username', 'Username', '^[0-9A-Za -z_-]+$', NULL, NULL, NULL, 
now(), now()),
+       (@data_source_type_id, 'password', 'Password', 'Password', NULL, 
'PASSWORD', NULL, 0, 'Password', 'Password', '', NULL, NULL, NULL, now (), 
now()),
+       (@data_source_type_id, 'instance', 'Instance name (instance)', 
'Instance', NULL, 'TEXT', NULL, 1, 'Instance name (instance)', 'Instance', 
NULL, NULL, NULL, NULL, now(), now());
+
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, `classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) VALUES ('starrocks', 'starrocks Database', 'starrocks', 'olap', '', 4, 'StarRocks Database', 'StarRocks', 'Olap');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'starrocks';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, `value_type`, `scope`, `require`, `description`, `description_en`, `value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, `create_time`)
+VALUES (@data_source_type_id, 'address', 'Address', 'Address', NULL, 'TEXT', 
NULL, 0, 'Address(host1:port1,host2:port2...)', 'Address(host1:port1, 
host2:port2...)', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'host', 'Host', 'Host', NULL, 'TEXT', NULL, 1, 
'Host', 'Host', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'port', 'Port', 'Port', NULL, 'TEXT', NULL, 1, 
'Port', 'Port', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'driverClassName', 'Driver class name', 'Driver 
class name', 'com.mysql.jdbc.Driver', 'TEXT', NULL, 1, 'Driver class name 
(Driver class name)', 'Driver class name', NULL, NULL, NULL, NULL, now(), 
now()),
+       (@data_source_type_id, 'params', 'Connection params', 'Connection 
params', NULL, 'TEXT', NULL, 0, 'Input JSON format): {"param":"value" }', 
'Input JSON format: {"param":"value"}', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'username', 'Username', 'Username', NULL, 
'TEXT', NULL, 1, 'Username', 'Username', '^[0-9A-Za -z_-]+$', NULL, NULL, NULL, 
now(), now()),
+       (@data_source_type_id, 'password', 'Password', 'Password', NULL, 
'PASSWORD', NULL, 0, 'Password', 'Password', '', NULL, NULL, NULL, now (), 
now()),
+       (@data_source_type_id, 'instance', 'Instance name (instance)', 
'Instance', NULL, 'TEXT', NULL, 1, 'Instance name (instance)', 'Instance', 
NULL, NULL, NULL, NULL, now(), now());
+
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, 
`classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) 
VALUES ('gaussdb', 'gaussdb Database', 'gaussdb', 'Relational Database', '', 3, 
'GaussDB Database', 'GaussDB', 'Relational Database');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'gaussdb';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, `value_type`, `scope`, `require`, `description`, `description_en`, `value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, `create_time`)
+VALUES (@data_source_type_id, 'address', 'Address', 'Address', NULL, 'TEXT', 
NULL, 0, 'Address(host1:port1,host2:port2...)', 'Address(host1:port1, 
host2:port2...)', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'host', 'Host', 'Host', NULL, 'TEXT', NULL, 1, 
'Host', 'Host', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'port', 'Port', 'Port', NULL, 'TEXT', NULL, 1, 
'Port', 'Port', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'driverClassName', 'Driver class name', 'Driver 
class name', 'org.postgresql.Driver', 'TEXT', NULL, 1, 'Driver class name) ', 
'Driver class name', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'params', 'Connection params', 'Connection 
params', NULL, 'TEXT', NULL, 0, 'Input JSON format): {"param":"value" }', 
'Input JSON format: {"param":"value"}', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'username', 'Username', 'Username', NULL, 
'TEXT', NULL, 1, 'Username', 'Username', '^[0-9A-Za -z_-]+$', NULL, NULL, NULL, 
now(), now()),
+       (@data_source_type_id, 'password', 'Password', 'Password', NULL, 
'PASSWORD', NULL, 1, 'Password', 'Password', '', NULL, NULL, NULL, now (), 
now()),
+       (@data_source_type_id, 'instance', 'Instance name (instance)', 
'Instance', NULL, 'TEXT', NULL, 1, 'Instance name (instance)', 'Instance', 
NULL, NULL, NULL, NULL, now(), now());
+
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, `classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) VALUES ('oceanbase', 'oceanbase Database', 'oceanbase', 'olap', '', 4, 'oceanbase Database', 'oceanbase', 'Olap');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'oceanbase';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, `value_type`, `scope`, `require`, `description`, `description_en`, `value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, `create_time`)
+VALUES (@data_source_type_id, 'address', 'Address', 'Address', NULL, 'TEXT', 
NULL, 0, 'Address(host1:port1,host2:port2...)', 'Address(host1:port1, 
host2:port2...)', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'host', 'Host', 'Host', NULL, 'TEXT', NULL, 1, 
'Host', 'Host', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'port', 'Port', 'Port', NULL, 'TEXT', NULL, 1, 
'Port', 'Port', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'driverClassName', 'Driver class name', 'Driver 
class name', 'com.mysql.jdbc.Driver', 'TEXT', NULL, 1, 'Driver class name 
(Driver class name)', 'Driver class name', NULL, NULL, NULL, NULL, now(), 
now()),
+       (@data_source_type_id, 'params', 'Connection params', 'Connection 
params', NULL, 'TEXT', NULL, 0, 'Input JSON format): {"param":"value" }', 
'Input JSON format: {"param":"value"}', NULL, NULL, NULL, NULL, now(), now()),
+       (@data_source_type_id, 'username', 'Username', 'Username', NULL, 
'TEXT', NULL, 1, 'Username', 'Username', '^[0-9A-Za -z_-]+$', NULL, NULL, NULL, 
now(), now()),
+       (@data_source_type_id, 'password', 'Password', 'Password', NULL, 
'PASSWORD', NULL, 1, 'Password', 'Password', '', NULL, NULL, NULL, now (), 
now()),
+       (@data_source_type_id, 'instance', 'Instance name (instance)', 
'Instance', NULL, 'TEXT', NULL, 1, 'Instance name (instance)', 'Instance', 
NULL, NULL, NULL, NULL, now(), now());
+```
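+
+Optionally, a quick check (not part of the upgrade script) that the new data source types were inserted:
+```sql
+select id, `name` from linkis_ps_dm_datasource_type where `name` in ('tidb', 'starrocks', 'gaussdb', 'oceanbase');
+```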
+
+## 5. Add mysql driver package
+When Linkis is upgraded to version 1.4.0, the mysql driver package needs to use version 8.x. Take version 8.0.28 as an example: [Download link](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar). Copy the driver package to the lib directory:
+
+```
+cp mysql-connector-java-8.0.28.jar 
${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+cp mysql-connector-java-8.0.28.jar 
${LINKIS_HOME}/lib/linkis-commons/public-module/
+```
+
+## 6. Start the service
+
+```shell
+sh linkis-start-all.sh
+```
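+
+A simple way to confirm that the Linkis processes are up after startup (any process-listing command will do):
+```shell
+ps -ef | grep -i linkis | grep -v grep
+```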
+
+## 7. Precautions
+
+1. After Spark is upgraded to 3.x, it is no longer compatible with Python 2, so Python 3 needs to be installed to execute pyspark tasks. Perform the following operation:
+```shell
+sudo ln -snf /usr/bin/python3 /usr/bin/python2
+```
+Then add the following configuration to the Spark engine connector configuration file $LINKIS_HOME/lib/linkis-engineconn-plugins/spark/dist/3.2.1/conf/linkis-engineconn.properties to specify the Python installation path:
+```
+pyspark.python3.path=/usr/bin/python3
+```
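+As a quick sanity check, you can confirm that the remapped command now points to Python 3:
+```shell
+python2 --version   # should report a Python 3.x version after the symlink change
+```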
+2. During the upgrade, the Token values in the configuration files cannot be automatically unified with the original Token values in the database. You need to manually modify the Token values in the `linkis.properties` and `linkis-cli/linkis-cli.properties` files to the corresponding Token values in the `linkis_mg_gateway_auth_token` data table.
+3. When upgrading from a lower version to a higher version, execute the database upgrade scripts version by version.
\ No newline at end of file
diff --git a/docs/user-guide/udf-function.md b/docs/user-guide/udf-function.md
index b6b46c06f25..9c83958ebf6 100644
--- a/docs/user-guide/udf-function.md
+++ b/docs/user-guide/udf-function.md
@@ -17,7 +17,30 @@ Overall step description
 
 **Step1 Writing jar packages locally**
 
-UDF 
Example:https://help.aliyun.com/apsara/agile/v_3_6_0_20210705/odps/ase-user-guide/udf-example.html
+Hive UDF Example:
+1. add hive dependency
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-exec</artifactId>
+    <version>3.1.3</version>
+</dependency>
+```
+2. create UDF class
+```java
+import org.apache.hadoop.hive.ql.exec.UDF;
+
+public class UDFExample extends UDF {
+    public Integer evaluate(Integer value) {
+        return value == null ? null : value + 1;
+    }
+}
+```
+
+3. package
+```shell
+mvn package
+```
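+
+As a hedged illustration (the function and table names below are placeholders; the actual function name depends on how the UDF is registered in the following steps), the packaged UDF can then be called from a Hive SQL script like a built-in function:
+```sql
+-- udf_example and demo_table are placeholder names
+select udf_example(score) from demo_table;
+```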
 
 **Step2【Scriptis >> Workspace】Upload jar package**
 Select the corresponding folder and right-click to select Upload
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
index f7dd89594a3..328c4429d25 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/deployment/deploy-quick.md
@@ -229,72 +229,6 @@ HADOOP_KERBEROS_ENABLE=true
 HADOOP_KEYTAB_PATH=/appcom/keytab/
 ```
 
-
-#### S3模式(可选)
-> 目前支持将引擎执行日志和结果存储到S3 
-> 
-> 注意: linkis没有对S3做权限适配,所以无法对其做赋权操作
-
-`vim $LINKIS_HOME/conf/linkis.properties`
-```shell script
-# s3 file system
-linkis.storage.s3.access.key=xxx
-linkis.storage.s3.secret.key=xxx
-linkis.storage.s3.endpoint=http://xxx.xxx.xxx.xxx:xxx
-linkis.storage.s3.region=xxx
-linkis.storage.s3.bucket=xxx
-```
-
-`vim $LINKIS_HOME/conf/linkis-cg-entrance.properties`
-```shell script
-wds.linkis.entrance.config.log.path=s3:///linkis/logs
-wds.linkis.resultSet.store.path=s3:///linkis/results
-```
-
-### 2.4 配置 Token
-
-Linkis 原有默认 Token 固定且长度太短存在安全隐患。因此 Linkis 1.3.2 将原有固定 Token 改为随机生成,并增加 Token 
长度。
-
-新 Token 格式:应用简称-32 位随机数,如BML-928a721518014ba4a28735ec2a0da799。
-
-Token 可能在 Linkis 服务自身使用,如通过 Shell 方式执行任务、BML 上传等,也可能在其它应用中使用,如 DSS、Qualitis 
等应用访问 Linkis。
-
-#### 查看 Token
-**通过 SQL 语句查看**
-```sql
-select * from linkis_mg_gateway_auth_token;
-```
-**通过管理台查看**
-
-登录管理台 -> 基础数据管理 -> 令牌管理 
-![](/Images-zh/deployment/token-list.png)
-
-#### 检查 Token 配置
-
-Linkis 服务本身使用 Token 时,配置文件中 Token 需与数据库中 Token 一致。通过应用简称前缀匹配。
-
-$LINKIS_HOME/conf/linkis.properties文件 Token 配置
-
-```
-linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
-
-wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
-
-wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
-wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
-
-wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
-```
-
-$LINKIS_HOME/conf/linkis-cli/linkis-cli.properties文件 Token 配置
-```
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-```
-
 #### 注意事项
 
 **全量安装**
@@ -309,6 +243,17 @@ 
wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
 
 当遇到 Token 令牌无效或已过期问题时可以检查 Token 是否配置正确,可通过管理台查询 Token。
 
+**Python 版本问题**
+Linkis 升级为 1.4.0 后默认 Spark 版本升级为 3.x,无法兼容 python2。因此如果需要使用 pyspark 功能需要做如下修改。
+1. 映射 python2 命令为 python3
+```
+sudo ln -snf /usr/bin/python3 /usr/bin/python2
+```
+2. spark 引擎连接器配置 
$LINKIS_HOME/lib/linkis-engineconn-plugins/spark/dist/3.2.1/conf/linkis-engineconn.properties
 中添加如下配置,指定python安装路径
+```
+pyspark.python3.path=/usr/bin/python3
+```
+
 ## 3. 安装和启动
 
 ### 3.1 执行安装脚本:
@@ -339,12 +284,12 @@ Your default account password is [hadoop/5e8e312b4]`
 
 :::
 
-下载mysql驱动 
以5.1.49版本为例:[下载链接](https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.49/mysql-connector-java-5.1.49.jar)
 
https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.49/mysql-connector-java-5.1.49.jar
+下载mysql驱动 以 8.0.28 
版本为例:[下载链接](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar)
 
 拷贝mysql 驱动包至lib包下 
 ```
-cp mysql-connector-java-5.1.49.jar  
${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
-cp mysql-connector-java-5.1.49.jar  
${LINKIS_HOME}/lib/linkis-commons/public-module/
+cp mysql-connector-java-8.0.28.jar  
${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+cp mysql-connector-java-8.0.28.jar  
${LINKIS_HOME}/lib/linkis-commons/public-module/
 ```
 ### 3.3 添加postgresql驱动包 (可选)
 如果选择使用postgresql作为业务数据库,需要手动添加postgresql驱动
@@ -370,6 +315,27 @@ cp postgresql-42.5.4.jar  
${LINKIS_HOME}/lib/linkis-commons/public-module/
 echo "wds.linkis.session.ticket.key=bdp-user-ticket-id" >> linkis.properties
 ```
 
+#### 3.4.3 S3 模式
+> 目前支持将引擎执行日志和结果存储到 S3 文件系统 
+> 
+> 注意: linkis没有对 S3 做权限适配,所以无法对其做赋权操作
+
+`vim $LINKIS_HOME/conf/linkis.properties`
+```shell script
+# s3 file system
+linkis.storage.s3.access.key=xxx
+linkis.storage.s3.secret.key=xxx
+linkis.storage.s3.endpoint=http://xxx.xxx.xxx.xxx:xxx
+linkis.storage.s3.region=xxx
+linkis.storage.s3.bucket=xxx
+```
+
+`vim $LINKIS_HOME/conf/linkis-cg-entrance.properties`
+```shell script
+wds.linkis.entrance.config.log.path=s3:///linkis/logs
+wds.linkis.resultSet.store.path=s3:///linkis/results
+```
+
 ### 3.5 启动服务
 ```shell script
 sh sbin/linkis-start-all.sh
@@ -397,6 +363,51 @@ LINKIS-PS-PUBLICSERVICE 公共服务
 
 如果有服务未启动,可以在对应的log/${服务名}.log文件中查看详细异常日志。
 
+### 3.8 配置 Token
+
+Linkis 原有默认 Token 固定且长度太短存在安全隐患。因此 Linkis 1.3.2 将原有固定 Token 改为随机生成,并增加 Token 
长度。
+
+新 Token 格式:应用简称-32 位随机数,如BML-928a721518014ba4a28735ec2a0da799。
+
+Token 可能在 Linkis 服务自身使用,如通过 Shell 方式执行任务、BML 上传等,也可能在其它应用中使用,如 DSS、Qualitis 
等应用访问 Linkis。
+
+#### 查看 Token
+**通过 SQL 语句查看**
+```sql
+select * from linkis_mg_gateway_auth_token;
+```
+**通过管理台查看**
+
+登录管理台 -> 基础数据管理 -> 令牌管理 
+![](/Images-zh/deployment/token-list.png)
+
+#### 检查 Token 配置
+
+Linkis 服务本身使用 Token 时,配置文件中 Token 需与数据库中 Token 一致。通过应用简称前缀匹配。
+
+$LINKIS_HOME/conf/linkis.properties文件 Token 配置
+
+```
+linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
+wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
+
+wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
+
+wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
+wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
+
+wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
+```
+
+$LINKIS_HOME/conf/linkis-cli/linkis-cli.properties文件 Token 配置
+```
+wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
+```
+
+其它应用使用 Token 时,需要修改其 Token 配置与数据库中 Token 一致。
 
 ## 4. 安装web前端
 web端是使用nginx作为静态资源服务器的,访问请求流程是:
@@ -730,7 +741,7 @@ CDH本身不是使用的官方标准的hive/spark包,进行适配时,最好修
 Cookie: bdp-user-ticket-id=xxxxxxxxxxxxxxxxxxxxxxxxxxx
 ```
 - 方式3 http请求头添加静态的Token令牌  
-  Token在conf/token.properties进行配置
+  Token在conf/linkis.properties进行配置
   如:TEST-AUTH=hadoop,root,user01
 ```shell script
 Token-Code:TEST-AUTH
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
index ba393a2fdc6..b2ad3c08c61 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
@@ -5,9 +5,10 @@ sidebar_position: 0.1
 
 - [基础引擎依赖性、兼容性、默认版本优化](./base-engine-compatibilty.md)
 - [Hive 引擎连接器支持并发任务](./hive-engine-support-concurrent.md)
-- [新增 Impala 引擎支持](../engine-usage/impala.md)
-- [linkis-storage 支持 S3 文件系统](../deployment/deploy-quick#s3模式可选)
-- [增加 postgresql 数据库支持](../deployment/deploy-quick.md#33-添加postgresql驱动包-可选)
+- [支持更多的数据源](./spark-etl.md)
+- [linkis-storage 支持 S3 文件系统(实验版本)](../deployment/deploy-quick#343-s3-模式)
+- [增加 postgresql 数据库支持(实验版本)](../deployment/deploy-quick#22-配置数据库信息)
+- [增加 impala 引擎支持(实验版本)](../engine-usage/impala.md)
 - [Spark ETL 功能增强](./spark-etl.md)
 - [根据数据源生成SQL](./datasource-generate-sql.md)
 - [其它特性说明](./other.md)
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/upgrade/upgrade-to-1.4.0-guide.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/upgrade/upgrade-to-1.4.0-guide.md
new file mode 100644
index 00000000000..6f8cdb01da2
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/upgrade/upgrade-to-1.4.0-guide.md
@@ -0,0 +1,193 @@
+---
+title: 1.4.0 的升级指南
+sidebar_position: 3
+---
+
+> Linkis1.4.0 对 Linkis 的服务及代码进行了较多调整,本文介绍升级到 Linkis 1.4.0 的相关注意事项。
+
+## 1. 注意事项
+
+**1) 如果您是首次接触并使用Linkis,您可以忽略该章节,参考[单机部署](../deployment/deploy-quick.md)指南部署 
Linkis 即可。**
+
+**2) 如果您已安装 Likis 1.4.0 
之前的版本但不想保留原有数据,也可参考[单机部署](../deployment/deploy-quick.md)指南重新部署,安装时选择 2 
清理所有数据并重建表即可(见下面代码)。**
+```
+Do you want to clear Linkis table information in the database?
+ 1: Do not execute table-building statements
+ 2: Dangerous! Clear all data and rebuild the tables
+ other: exit
+
+Please input the choice: ## choice 2
+```
+**3) 如果您已安装 Likis 1.4.0 之前的版本但需要保留原有版本数据,可参考本文档指引进行升级。**
+
+****
+
+## 2. 环境升级 
+
+Linkis 1.4.0 将默认的依赖环境 Hadoop、Hive、Spark 版本升级为 3.x。分别为 Hadoop 升级为 3.3.4、Hive 
升级为 3.1.3、Spark升级为 3.2.1。请将这些环境进行升级后再进行后续操作。
+
+通过如下命令验证升级后版本
+```
+echo $HADOOP_HOME
+/data/hadoop-3.3.4
+echo $HIVE_HOME
+/data/apache-hive-3.1.3-bin
+echo $SPARK_HOME
+/data/spark-3.2.1-bin-hadoop3.2
+```
+
+安装前请修改 deploy-config/linkis-env.sh 文件中 Hadoop、Hive、Spark 相关配置为升级后目录,具体修改项如下:
+
+```
+#HADOOP
+HADOOP_HOME=${HADOOP_HOME:-"/appcom/Install/hadoop"}
+HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/appcom/config/hadoop-config"}
+
+## Hadoop env version
+HADOOP_VERSION=${HADOOP_VERSION:-"3.3.4"}
+
+#Hive
+HIVE_HOME=/appcom/Install/hive
+HIVE_CONF_DIR=/appcom/config/hive-config
+
+#Spark
+SPARK_HOME=/appcom/Install/spark
+SPARK_CONF_DIR=/appcom/config/spark-config
+
+```
+
+## 3. 服务升级安装
+
+因为 1.4.0 版本量改动较大,所以旧版本到 1.4.0 版本升级时服务需要进行重新安装。
+
+在安装时如果需要保留旧版本的数据,一定要选择 1 跳过建表语句(见下面代码)。
+
+Linkis 1.4.0 的安装可以参考[如何快速安装](../deployment/deploy-quick.md)
+
+```
+Do you want to clear Linkis table information in the database?
+ 1: Do not execute table-building statements
+ 2: Dangerous! Clear all data and rebuild the tables
+ other: exit
+
+Please input the choice: ## choice 1
+```
+
+## 4. 数据库升级
+  服务安装完成后,需要对数据库的数据表进行修改,包括表结构变更和表数据更新。执行对应升级版本的 DDL 和 DML 脚本即可。
+  ```
+  #表结构变更
+  linkis-dist\package\db\upgrade\${version}_schema\mysql\linkis_ddl.sql
+  #表数据变更
+  linkis-dist\package\db\upgrade\${version}_schema\mysql\linkis_dml.sql 
+  ```
+注意升级时请依次往上执行升级脚本,如从当前版本 1.3.1,升级到 1.4.0 版本。需要先执行 1.3.2 升级的 DDL 和 DML 脚本,再执行 
1.4.0 升级的 DDL 和 DML脚本。本文以 1.3.2 升级到 1.4.0 为例进行说明 
+
+### 4.1 表结构修改部分:
+
+连接 mysql 数据库执行 
linkis-dist\package\db\upgrade\1.3.2_schema\mysql\linkis_ddl.sql 脚本内容,具体内容如下:
+
+```mysql-sql
+ALTER TABLE `linkis_cg_manager_service_instance` ADD COLUMN `identifier` 
varchar(32) COLLATE utf8_bin DEFAULT NULL;
+ALTER TABLE `linkis_cg_manager_service_instance` ADD COLUMN `ticketId` 
varchar(255) COLLATE utf8_bin DEFAULT NULL;
+ALTER TABLE `linkis_cg_ec_resource_info_record` MODIFY COLUMN metrics TEXT 
DEFAULT NULL COMMENT 'ec metrics';
+```
+
+### 4.2 需要新执行的sql:
+
+连接 mysql 数据库执行 
linkis-dist\package\db\upgrade\1.3.2_schema\mysql\linkis_dml.sql 脚本内容,具体内容如下:
+```sql
+-- 默认版本升级
+UPDATE linkis_ps_configuration_config_key SET default_value = 'python3' WHERE 
`key` = 'spark.python.version';
+UPDATE linkis_cg_manager_label SET label_value = '*-*,hive-3.1.3' WHERE 
label_value = '*-*,hive-2.3.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-IDE,hive-3.1.3' WHERE 
label_value = '*-IDE,hive-2.3.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-nodeexecution,hive-3.1.3' 
WHERE label_value = '*-nodeexecution,hive-2.3.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-*,spark-3.2.1' WHERE 
label_value = '*-*,spark-2.4.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-IDE,spark-3.2.1' WHERE 
label_value = '*-IDE,spark-2.4.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-Visualis,spark-3.2.1' 
WHERE label_value = '*-Visualis,spark-2.4.3';
+UPDATE linkis_cg_manager_label SET label_value = '*-nodeexecution,spark-3.2.1' 
WHERE label_value = '*-nodeexecution,spark-2.4.3';
+
+-- 支持不同的数据源
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, 
`classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) 
VALUES ('tidb', 'tidb数据库', 'tidb', '关系型数据库', '', 3, 'TiDB Database', 'TiDB', 
'Relational Database');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'tidb';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, 
`value_type`, `scope`, `require`, `description`, `description_en`, 
`value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, 
`create_time`)
+VALUES (@data_source_type_id, 'address', '地址', 'Address', NULL, 'TEXT', NULL, 
0, '地址(host1:port1,host2:port2...)', 'Address(host1:port1,host2:port2...)', 
NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'host', '主机名(Host)', 'Host', NULL, 'TEXT', NULL, 
1, '主机名(Host)', 'Host', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'port', '端口号(Port)', 'Port', NULL, 'TEXT', NULL, 
1, '端口号(Port)', 'Port', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'driverClassName', '驱动类名(Driver class name)', 
'Driver class name', 'com.mysql.jdbc.Driver', 'TEXT', NULL, 1, '驱动类名(Driver 
class name)', 'Driver class name', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'params', '连接参数(Connection params)', 'Connection 
params', NULL, 'TEXT', NULL, 0, '输入JSON格式(Input JSON format): 
{"param":"value"}', 'Input JSON format: {"param":"value"}', NULL, NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'username', '用户名(Username)', 'Username', NULL, 
'TEXT', NULL, 1, '用户名(Username)', 'Username', '^[0-9A-Za-z_-]+$', NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'password', '密码(Password)', 'Password', NULL, 
'PASSWORD', NULL, 0, '密码(Password)', 'Password', '', NULL, NULL, NULL,  now(), 
now()),
+       (@data_source_type_id, 'instance', '实例名(instance)', 'Instance', NULL, 
'TEXT', NULL, 1, '实例名(instance)', 'Instance', NULL, NULL, NULL, NULL,  now(), 
now());
+
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, 
`classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) 
VALUES ('starrocks', 'starrocks数据库', 'starrocks', 'olap', '', 4, 'StarRocks 
Database', 'StarRocks', 'Olap');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'starrocks';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, 
`value_type`, `scope`, `require`, `description`, `description_en`, 
`value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, 
`create_time`)
+VALUES (@data_source_type_id, 'address', '地址', 'Address', NULL, 'TEXT', NULL, 
0, '地址(host1:port1,host2:port2...)', 'Address(host1:port1,host2:port2...)', 
NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'host', '主机名(Host)', 'Host', NULL, 'TEXT', NULL, 
1, '主机名(Host)', 'Host', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'port', '端口号(Port)', 'Port', NULL, 'TEXT', NULL, 
1, '端口号(Port)', 'Port', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'driverClassName', '驱动类名(Driver class name)', 
'Driver class name', 'com.mysql.jdbc.Driver', 'TEXT', NULL, 1, '驱动类名(Driver 
class name)', 'Driver class name', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'params', '连接参数(Connection params)', 'Connection 
params', NULL, 'TEXT', NULL, 0, '输入JSON格式(Input JSON format): 
{"param":"value"}', 'Input JSON format: {"param":"value"}', NULL, NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'username', '用户名(Username)', 'Username', NULL, 
'TEXT', NULL, 1, '用户名(Username)', 'Username', '^[0-9A-Za-z_-]+$', NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'password', '密码(Password)', 'Password', NULL, 
'PASSWORD', NULL, 0, '密码(Password)', 'Password', '', NULL, NULL, NULL,  now(), 
now()),
+       (@data_source_type_id, 'instance', '实例名(instance)', 'Instance', NULL, 
'TEXT', NULL, 1, '实例名(instance)', 'Instance', NULL, NULL, NULL, NULL,  now(), 
now());
+
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, 
`classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) 
VALUES ('gaussdb', 'gaussdb数据库', 'gaussdb', '关系型数据库', '', 3, 'GaussDB 
Database', 'GaussDB', 'Relational Database');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'gaussdb';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, 
`value_type`, `scope`, `require`, `description`, `description_en`, 
`value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, 
`create_time`)
+VALUES (@data_source_type_id, 'address', '地址', 'Address', NULL, 'TEXT', NULL, 
0, '地址(host1:port1,host2:port2...)', 'Address(host1:port1,host2:port2...)', 
NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'host', '主机名(Host)', 'Host', NULL, 'TEXT', NULL, 
1, '主机名(Host)', 'Host', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'port', '端口号(Port)', 'Port', NULL, 'TEXT', NULL, 
1, '端口号(Port)', 'Port', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'driverClassName', '驱动类名(Driver class name)', 
'Driver class name', 'org.postgresql.Driver', 'TEXT', NULL, 1, '驱动类名(Driver 
class name)', 'Driver class name', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'params', '连接参数(Connection params)', 'Connection 
params', NULL, 'TEXT', NULL, 0, '输入JSON格式(Input JSON format): 
{"param":"value"}', 'Input JSON format: {"param":"value"}', NULL, NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'username', '用户名(Username)', 'Username', NULL, 
'TEXT', NULL, 1, '用户名(Username)', 'Username', '^[0-9A-Za-z_-]+$', NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'password', '密码(Password)', 'Password', NULL, 
'PASSWORD', NULL, 1, '密码(Password)', 'Password', '', NULL, NULL, NULL,  now(), 
now()),
+       (@data_source_type_id, 'instance', '实例名(instance)', 'Instance', NULL, 
'TEXT', NULL, 1, '实例名(instance)', 'Instance', NULL, NULL, NULL, NULL,  now(), 
now());
+
+INSERT INTO `linkis_ps_dm_datasource_type` (`name`, `description`, `option`, 
`classifier`, `icon`, `layers`, `description_en`, `option_en`, `classifier_en`) 
VALUES ('oceanbase', 'oceanbase数据库', 'oceanbase', 'olap', '', 4, 'oceanbase 
Database', 'oceanbase', 'Olap');
+
+select @data_source_type_id := id from `linkis_ps_dm_datasource_type` where 
`name` = 'oceanbase';
+INSERT INTO `linkis_ps_dm_datasource_type_key`
+(`data_source_type_id`, `key`, `name`, `name_en`, `default_value`, 
`value_type`, `scope`, `require`, `description`, `description_en`, 
`value_regex`, `ref_id`, `ref_value`, `data_source`, `update_time`, 
`create_time`)
+VALUES (@data_source_type_id, 'address', '地址', 'Address', NULL, 'TEXT', NULL, 
0, '地址(host1:port1,host2:port2...)', 'Address(host1:port1,host2:port2...)', 
NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'host', '主机名(Host)', 'Host', NULL, 'TEXT', NULL, 
1, '主机名(Host)', 'Host', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'port', '端口号(Port)', 'Port', NULL, 'TEXT', NULL, 
1, '端口号(Port)', 'Port', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'driverClassName', '驱动类名(Driver class name)', 
'Driver class name', 'com.mysql.jdbc.Driver', 'TEXT', NULL, 1, '驱动类名(Driver 
class name)', 'Driver class name', NULL, NULL, NULL, NULL,  now(), now()),
+       (@data_source_type_id, 'params', '连接参数(Connection params)', 'Connection 
params', NULL, 'TEXT', NULL, 0, '输入JSON格式(Input JSON format): 
{"param":"value"}', 'Input JSON format: {"param":"value"}', NULL, NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'username', '用户名(Username)', 'Username', NULL, 
'TEXT', NULL, 1, '用户名(Username)', 'Username', '^[0-9A-Za-z_-]+$', NULL, NULL, 
NULL,  now(), now()),
+       (@data_source_type_id, 'password', '密码(Password)', 'Password', NULL, 
'PASSWORD', NULL, 1, '密码(Password)', 'Password', '', NULL, NULL, NULL,  now(), 
now()),
+       (@data_source_type_id, 'instance', '实例名(instance)', 'Instance', NULL, 
'TEXT', NULL, 1, '实例名(instance)', 'Instance', NULL, NULL, NULL, NULL,  now(), 
now());
+```
+
+## 5. 添加mysql驱动包
+linkis 升级为 1.4.0 版本时 mysql 驱动包需使用 8.x 版本,以 8.0.28 
版本为例:[下载连接](https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar)拷贝驱动包至lib包下
+
+```
+cp mysql-connector-java-8.0.28.jar  
${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
+cp mysql-connector-java-8.0.28.jar  
${LINKIS_HOME}/lib/linkis-commons/public-module/
+```
+
+## 6. 启动服务
+
+```shell
+sh linkis-start-all.sh
+```
+
+## 7. 注意事项
+
+1. Spark 升级为 3.x 后,不兼容 python2,因此在执行 pyspark 任务时需要安装 python3,并执行如下操作
+```shell
+sudo ln -snf /usr/bin/python3 /usr/bin/python2
+```
+并且在 spark 引擎配置 
$LINKIS_HOME/lib/linkis-engineconn-plugins/spark/dist/3.2.1/conf/linkis-engineconn.properties
 中添加如下配置,指定python安装路径
+```
+pyspark.python3.path=/usr/bin/python3
+```
+2. 升级时配置文件中 Token 值没法自动与原数据库 Token 值统一。需要手动修改 `linkis.properties` 和 
`linkis-cli/linkis-cli.properties` 文件中的Token 值为与数据表 
`linkis_mg_gateway_auth_token` 相对应的 Token 值。
+3. 低版本升级高版本时请逐级执行数据库升级脚本。
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/udf-function.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/udf-function.md
index 0ee158a09c8..975aad8e934 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/udf-function.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/udf-function.md
@@ -15,7 +15,29 @@ sidebar_position: 5
 
 **Step1 本地编写jar包**
 
-UDF示例:https://help.aliyun.com/apsara/agile/v_3_6_0_20210705/odps/ase-user-guide/udf-example.html
+Hive UDF示例:
+1. 引入 hive 依赖
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-exec</artifactId>
+    <version>3.1.3</version>
+</dependency>
+```
+2. 编写UDF 类
+```java
+import org.apache.hadoop.hive.ql.exec.UDF;
+
+public class UDFExample extends UDF {
+    public Integer evaluate(Integer value) {
+        return value == null ? null : value + 1;
+    }
+}
+```
+
+3. 编译打包
+```shell
+mvn package
+```
 
 **Step2【Scriptis >> 工作空间】上传jar包**
 选择对应的文件夹 鼠标右键 选择上传
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/udf-function.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/udf-function.md
index c0c2d3144b3..e3ea35e1b33 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/udf-function.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/udf-function.md
@@ -15,7 +15,29 @@ sidebar_position: 5
 
 **Step1 本地编写jar包**
 
-UDF示例:https://help.aliyun.com/apsara/agile/v_3_6_0_20210705/odps/ase-user-guide/udf-example.html
+Hive UDF示例:
+1. 引入 hive 依赖
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-exec</artifactId>
+    <version>3.1.3</version>
+</dependency>
+```
+2. 编写UDF 类
+```java
+import org.apache.hadoop.hive.ql.exec.UDF;
+
+public class UDFExample extends UDF {
+    public Integer evaluate(Integer value) {
+        return value == null ? null : value + 1;
+    }
+}
+```
+
+3. 编译打包
+```shell
+mvn package
+```
 
 **Step2【Scriptis >> 工作空间】上传jar包**
 选择对应的文件夹 鼠标右键 选择上传
diff --git a/versioned_docs/version-1.3.2/user-guide/udf-function.md 
b/versioned_docs/version-1.3.2/user-guide/udf-function.md
index d6a9cb65b67..b0942259893 100644
--- a/versioned_docs/version-1.3.2/user-guide/udf-function.md
+++ b/versioned_docs/version-1.3.2/user-guide/udf-function.md
@@ -17,7 +17,31 @@ Overall step description
 
 **Step1 Writing jar packages locally**
 
-UDF 
Example:https://help.aliyun.com/apsara/agile/v_3_6_0_20210705/odps/ase-user-guide/udf-example.html
+Hive UDF Example:
+1. add hive dependency
+```xml
+<dependency>
+    <groupId>org.apache.hive</groupId>
+    <artifactId>hive-exec</artifactId>
+    <version>3.1.3</version>
+</dependency>
+```
+2. create UDF class
+```java
+import org.apache.hadoop.hive.ql.exec.UDF;
+
+public class UDFExample extends UDF {
+    public Integer evaluate(Integer value) {
+        return value == null ? null : value + 1;
+    }
+}
+```
+
+3. package
+```shell
+mvn package
+```
+
 
 **Step2【Scriptis >> Workspace】Upload jar package**
 Select the corresponding folder and right-click to select Upload


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
