This is an automated email from the ASF dual-hosted git repository.

leebai pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 1494908  update 1.0.2 usage doc
     new b6f1037  Merge pull request #57 from peacewong/dev
1494908 is described below

commit 1494908e6ae18681e318dedc97650fc3ab8a8095
Author: peacewong <[email protected]>
AuthorDate: Tue Dec 21 11:12:07 2021 +0800

    update 1.0.2 usage doc
---
 .../version-1.0.2/engine_usage/hive.md             |  32 +-
 .../version-1.0.2/engine_usage/jdbc.md             |  37 +-
 .../version-1.0.2/engine_usage/python.md           |  32 +-
 .../version-1.0.2/engine_usage/shell.md            |  30 +-
 .../version-1.0.2/engine_usage/spark.md            |  36 +-
 .../version-1.0.2/user_guide/sdk_manual.md         | 482 +++++++++-----------
 versioned_docs/version-1.0.2/engine_usage/hive.md  |  72 +--
 versioned_docs/version-1.0.2/engine_usage/jdbc.md  |  58 ++-
 .../version-1.0.2/engine_usage/python.md           |  55 ++-
 versioned_docs/version-1.0.2/engine_usage/shell.md |  53 ++-
 versioned_docs/version-1.0.2/engine_usage/spark.md |  65 +--
 .../version-1.0.2/user_guide/sdk_manual.md         | 500 +++++++++------------
 12 files changed, 720 insertions(+), 732 deletions(-)

diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/hive.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/hive.md
index af14d55..3bbe2f3 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/hive.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/hive.md
@@ -53,27 +53,37 @@ Hive's MapReduce tasks use yarn resources, so you need to set up the queue at the
 
 Figure 3-1 Queue settings
 
-### 3.1 How to use Scriptis
+You can also add the queue value in the StartUpMap of the submission parameters: `startupMap.put("wds.linkis.rm.yarnqueue", "dws")`
 
-The use of Scriptis is the simplest. You can go directly to Scriptis, right-click a directory, create a new hive script and write hivesql code.
+### 3.1 How to use the Linkis SDK
 
-The hive engine is implemented by instantiating hive's Driver instance; the Driver then submits the task, obtains the result set and displays it.
+Linkis provides Java and Scala SDKs to submit tasks to the Linkis server. For details, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For Hive tasks you only need to modify the EngineConnType and CodeType parameters in the demo:
 
-![](/Images-zh/EngineUsage/hive-run.png)
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-2.3.3"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "hql"); // required codeType
+```
 
-Figure 3-2 Screenshot of the execution result of hivesql
+### 3.2 Submitting tasks via Linkis-cli
 
-### 3.2 How to use workflow
+Since Linkis 1.0, tasks can be submitted with the cli. You only need to specify the corresponding EngineConn and CodeType label types. Hive is used as follows:
+```shell
+sh ./bin/linkis-cli -engineType hive-2.3.3 -codeType hql -code "show tables"  -submitUser hadoop -proxyUser hadoop
+```
+For details, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
-The DSS workflow also has a hive node. You can drag in the workflow node, double-click it to edit the code, and then execute it as a workflow.
+### 3.3 How to use Scriptis
 
-![](/Images-zh/EngineUsage/workflow.png)
+The use of [Scriptis](https://github.com/WeBankFinTech/Scriptis) is the simplest. You can go directly to Scriptis, right-click a directory, create a new hive script and write hivesql code.
 
-Figure 3-5 The workflow node that executes hive
+The hive engine is implemented by instantiating hive's Driver instance; the Driver then submits the task, obtains the result set and displays it.
 
-### 3.3 How to use Linkis Client
+![](/Images-zh/EngineUsage/hive-run.png)
 
-Linkis also provides a client way to call hive tasks, through the SDK provided by LinkisClient. We provide both java and scala ways to call it; for details see <https://github.com/apache/incubator-linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+Figure 3-2 Screenshot of the execution result of hivesql
 
 ## 4. Hive engine user settings
 
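For reference, a minimal end-to-end sketch of the SDK route described above for Hive. It is only an illustration: it reuses the classes and calls that appear in the sdk_manual.md demo included in this commit, the gateway address and the hadoop user/password are placeholders, and the class name is made up.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;

public class HiveSubmitSketch {
    public static void main(String[] args) {
        // Build the client config as in the sdk_manual.md demo; 127.0.0.1:9001 and hadoop/hadoop are placeholders.
        DWSClientConfig config = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
                .addServerUrl("http://127.0.0.1:9001/")
                .connectionTimeout(30000)
                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)
                .loadbalancerEnabled(true)
                .maxConnectionSize(5)
                .retryEnabled(false).readTimeout(30000)
                .setAuthenticationStrategy(new StaticAuthenticationStrategy())
                .setAuthTokenKey("hadoop").setAuthTokenValue("hadoop")))
                .setDWSVersion("v1").build();
        UJESClient client = new UJESClientImpl(config);

        // Only the labels differ from the generic demo: engineType hive-2.3.3, codeType hql.
        Map<String, Object> labels = new HashMap<String, Object>();
        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-2.3.3");
        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");
        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "hql");

        Map<String, Object> startupMap = new HashMap<String, Object>();
        startupMap.put("wds.linkis.rm.yarnqueue", "dws"); // yarn queue, as noted above

        JobExecuteResult result = client.submit(JobSubmitAction.builder()
                .addExecuteCode("show tables")
                .setStartupParams(startupMap)
                .setUser("hadoop")        // submit user
                .addExecuteUser("hadoop") // execute user
                .setLabels(labels)
                .build());
        System.out.println("taskId: " + result.taskID());
    }
}
```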
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/jdbc.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/jdbc.md
index 1d7f907..010ef2c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/jdbc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/jdbc.md
@@ -33,23 +33,42 @@ The JDBC engine does not need to be compiled by the user; just use the compiled JDBC engine plug-in
 
 Figure 3-1 JDBC configuration information
 
-### 3.1 How to use Scriptis
+You can also modify these via the RuntimeMap of the task submission interface:
+```shell
+wds.linkis.jdbc.connect.url 
+wds.linkis.jdbc.username
+wds.linkis.jdbc.password
+```
 
-The use of Scriptis is the simplest. You can directly enter Scriptis, right-click a directory, create a new JDBC script, write JDBC code and click Execute.
+### 3.1 How to use the Linkis SDK
 
-The execution principle of JDBC is to load the JDBC Driver, submit the sql to the SQL server for execution, obtain the result set and return it.
+Linkis provides Java and Scala SDKs to submit tasks to the Linkis server. For details, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For JDBC tasks you only need to modify the EngineConnType and CodeType parameters in the demo:
 
-![](/Images-zh/EngineUsage/jdbc-run.png)
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "jdbc-4"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "jdbc"); // required codeType
+```
 
-Figure 3-2 Screenshot of the execution result of JDBC
+### 3.2 Submitting tasks via Linkis-cli
+
+Since Linkis 1.0, tasks can be submitted with the cli. You only need to specify the corresponding EngineConn and CodeType label types. JDBC is used as follows:
+```shell
+sh ./bin/linkis-cli -engineType jdbc-4 -codeType jdbc -code "show tables"  -submitUser hadoop -proxyUser hadoop
+```
+For details, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
-### 3.2 How to use workflow
+### 3.3 How to use Scriptis
 
-The DSS workflow also has a JDBC node. You can drag in the workflow node, double-click it to edit the code, and then execute it as a workflow.
+The use of Scriptis is the simplest. You can directly enter Scriptis, right-click a directory, create a new JDBC script, write JDBC code and click Execute.
 
-### 3.3 How to use Linkis Client
+The execution principle of JDBC is to load the JDBC Driver, submit the sql to the SQL server for execution, obtain the result set and return it.
 
-Linkis also provides a client way to call JDBC tasks, through the SDK provided by LinkisClient. We provide both java and scala ways to call it; for details see <https://github.com/apache/incubator-linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+![](/Images-zh/EngineUsage/jdbc-run.png)
+
+Figure 3-2 Screenshot of the execution result of JDBC
 
 ## 4. JDBC engine user settings
 
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md
index 373a2ea..942b92a 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/python.md
@@ -36,7 +36,27 @@ python3; you can simply change the configuration to switch the Python version,
 
 Before submitting python on linkis, you only need to make sure that the python path is in your user's \$PATH.
 
-### 3.1 How to use Scriptis
+### 3.1 How to use the Linkis SDK
+
+Linkis provides Java and Scala SDKs to submit tasks to the Linkis server. For details, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For Python tasks you only need to modify the EngineConnType and CodeType parameters in the demo:
+
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "python-python2"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "python"); // required codeType
+```
+
+### 3.2 Submitting tasks via Linkis-cli
+
+Since Linkis 1.0, tasks can be submitted with the cli. You only need to specify the corresponding EngineConn and CodeType label types. Python is used as follows:
+```shell
+sh ./bin/linkis-cli -engineType python-python2 -codeType python -code "print(\"hello\")"  -submitUser hadoop -proxyUser hadoop
+```
+For details, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
+
+### 3.3 How to use Scriptis
 
 The use of Scriptis is the simplest. You can directly enter Scriptis, right-click a directory, create a new python script, write python code and click Execute.
 
@@ -47,18 +67,10 @@ The execution logic of python is to start a python via Py4j
 
 Figure 3-1 Screenshot of the execution result of python
 
-### 3.2 How to use workflow
-
-The DSS workflow also has a python node. You can drag in the workflow node, double-click it to edit the code, and then execute it as a workflow.
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client way to call spark tasks, through the SDK provided by LinkisClient. We provide both java and scala ways to call it; for details see <https://github.com/apache/incubator-linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
-
 ## 4. Python engine user settings
 
 In addition to the above engine configuration, users can also make custom settings, such as the python version and some modules that python needs to load.
 
-![](/Images-zh/EngineUsage/jdbc-conf.png)
+![](/Images-zh/EngineUsage/python-config.png)
 
 Figure 4-1 User-defined configuration management console of python
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/shell.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/shell.md
index 80c661e..84bdc26 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/shell.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/shell.md
@@ -35,25 +35,35 @@ The Shell engine does not need to be compiled by the user; just use the compiled shell engine plug
 
 Before submitting shell on linkis, you only need to make sure that the shell path is in your user's \$PATH.
 
-### 3.1 How to use Scriptis
+### 3.1 How to use the Linkis SDK
 
-The use of Scriptis is the simplest. You can directly enter Scriptis, right-click a directory, create a new shell script, write shell code and click Execute.
+Linkis provides Java and Scala SDKs to submit tasks to the Linkis server. For details, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For Shell tasks you only need to modify the EngineConnType and CodeType parameters in the demo:
 
-The execution principle of shell is that the shell engine starts a system process through Java's own ProcessBuilder, redirects the output of the process to the engine and writes it to the log.
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "shell-1"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "shell"); // required codeType
+```
 
-![](/Images-zh/EngineUsage/shell-run.png)
+### 3.2 Submitting tasks via Linkis-cli
 
-Figure 3-1 Screenshot of the execution result of shell
+Since Linkis 1.0, tasks can be submitted with the cli. You only need to specify the corresponding EngineConn and CodeType label types. Shell is used as follows:
+```shell
+sh ./bin/linkis-cli -engineType shell-1 -codeType shell -code "echo \"hello\" "  -submitUser hadoop -proxyUser hadoop
+```
+For details, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
-### 3.2 How to use workflow
+### 3.3 How to use Scriptis
 
-The DSS workflow also has a shell node. You can drag in the workflow node, double-click it to edit the code, and then execute it as a workflow.
+The use of Scriptis is the simplest. You can directly enter Scriptis, right-click a directory, create a new shell script, write shell code and click Execute.
 
-One thing to note about shell execution: if a workflow node runs multiple lines, whether the node succeeds is determined by the last command. For example, if the first two lines are wrong but the return value of the last shell line is 0, the node is considered successful.
+The execution principle of shell is that the shell engine starts a system process through Java's own ProcessBuilder, redirects the output of the process to the engine and writes it to the log.
 
-### 3.3 How to use Linkis Client
+![](/Images-zh/EngineUsage/shell-run.png)
 
-Linkis also provides a client way to call shell tasks, through the SDK provided by LinkisClient. We provide both java and scala ways to call it; for details see <https://github.com/apache/incubator-linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
+Figure 3-1 Screenshot of the execution result of shell
 
 ## 4. Shell engine user settings
 
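To illustrate the mechanism described above (the shell engine launching a system process with Java's ProcessBuilder and redirecting its output into the log), here is a small self-contained sketch of that JDK pattern. It is not the engine's actual implementation; the command and log prefix are made up for the example.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ShellRunSketch {
    public static void main(String[] args) throws Exception {
        // Start "sh -c <code>" and merge stderr into stdout, roughly what the shell engine does.
        ProcessBuilder builder = new ProcessBuilder("sh", "-c", "echo \"hello\"");
        builder.redirectErrorStream(true);
        Process process = builder.start();

        // Read the process output line by line -- in the real engine this is written to the task log.
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("[shell log] " + line);
            }
        }

        // As noted above for multi-line scripts, only the exit code of the last command decides success.
        int exitCode = process.waitFor();
        System.out.println("exit code: " + exitCode);
    }
}
```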
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md
index 74587ce..92fd554 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/engine_usage/spark.md
@@ -50,8 +50,30 @@ Linkis1.0 works through labels, so we need to insert data into our database
 ![](/Images-zh/EngineUsage/queue-set.png)
 
 Figure 3-1 Queue settings
+You can also add the queue value in the StartUpMap of the submission parameters: `startupMap.put("wds.linkis.rm.yarnqueue", "dws")`
 
-### 3.1 How to use Scriptis
+### 3.1 How to use the Linkis SDK
+
+Linkis provides Java and Scala SDKs to submit tasks to the Linkis server. For details, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For Spark tasks you only need to modify the EngineConnType and CodeType parameters in the demo:
+
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "sql"); // required codeType py,sql,scala
+```
+
+### 3.2 Submitting tasks via Linkis-cli
+
+Since Linkis 1.0, tasks can be submitted with the cli. You only need to specify the corresponding EngineConn and CodeType label types. Spark is used as follows:
+```shell
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  -submitUser hadoop -proxyUser hadoop
+
+```
+For details, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
+
+### 3.3 How to use Scriptis
 
 The use of Scriptis is the simplest. You can directly enter Scriptis and create a new sql, scala or pyspark script to execute.
 
@@ -73,18 +95,6 @@ For spark-scala tasks, we have already initialized variables such as sqlContext, and users
 ![](/Images-zh/EngineUsage/pyspakr-run.png)
 Figure 3-4 How pyspark is executed
 
-### 3.2 How to use workflow
-
-The DSS workflow also has the three spark nodes. You can drag in a workflow node, such as a sql, scala or pyspark node, double-click it to edit the code, and then execute it as a workflow.
-
-![](/Images-zh/EngineUsage/workflow.png)
-
-Figure 3-5 The workflow node that executes spark
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client way to call spark tasks, through the SDK provided by LinkisClient. We provide both java and scala ways to call it; for details see <https://github.com/apache/incubator-linkis/wiki/Linkis1.0%E7%94%A8%E6%88%B7%E4%BD%BF%E7%94%A8%E6%96%87%E6%A1%A3>.
-
 ## 4. spark engine user settings
 
 In addition to the above engine configuration, users can also make custom settings, such as the number of executors in a spark session and the executor memory. These parameters allow users to set their own spark parameters more freely; other spark parameters can also be modified, such as the python version of pyspark.
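A short sketch of the SDK route for Spark, under the assumption that a UJESClient named `client` has already been built exactly as in the sdk_manual.md demo later in this commit (imports as in that demo); only the label values and the startup parameters shown above change.

```java
// Assumes: UJESClient client built as in the sdk_manual.md demo; imports as in that demo.
Map<String, Object> labels = new HashMap<String, Object>();
labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3");      // required engineType label
labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE"); // execute user and creator
labels.put(LabelKeyConstant.CODE_TYPE_KEY, "sql");                // sql, py or scala

Map<String, Object> startupMap = new HashMap<String, Object>(16);
startupMap.put("spark.executor.instances", 2);                    // native spark parameter
startupMap.put("wds.linkis.rm.yarnqueue", "dws");                 // yarn queue, as noted above

JobExecuteResult result = client.submit(JobSubmitAction.builder()
        .addExecuteCode("show tables")
        .setStartupParams(startupMap)
        .setUser("hadoop")        // submit user
        .addExecuteUser("hadoop") // execute user
        .setLabels(labels)
        .build());
System.out.println("taskId: " + result.taskID());
```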
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/user_guide/sdk_manual.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/user_guide/sdk_manual.md
index 471ff27..759527d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/user_guide/sdk_manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.0.2/user_guide/sdk_manual.md
@@ -8,35 +8,37 @@ sidebar_position: 3
 ## 1. Import the dependency module
 ```
 <dependency>
-  <groupId>com.webank.wedatasphere.linkis</groupId>
+  <groupId>org.apache.linkis</groupId>
   <artifactId>linkis-computation-client</artifactId>
   <version>${linkis.version}</version>
 </dependency>
 For example:
 <dependency>
-  <groupId>com.webank.wedatasphere.linkis</groupId>
+  <groupId>org.apache.linkis</groupId>
   <artifactId>linkis-computation-client</artifactId>
-  <version>1.0.0</version>
+  <version>1.0.2</version>
 </dependency>
 ```
 
-## 2. Submit with the Execute method compatible with 0.X
-### 2.1 Java test code
-Create the Java test class UJESClientImplTestJ; see the comments for the specific meaning of the interfaces:
+## 2. Java test code
+Create the Java test class LinkisClientTest; see the comments for the specific meaning of the interfaces:
 ```java
 package com.webank.wedatasphere.linkis.client.test;
 
 import com.webank.wedatasphere.linkis.common.utils.Utils;
 import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.TokenAuthenticationStrategy;
 import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
 import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
+import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
+import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant;
 import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
 import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
 import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction;
+import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
 import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
 import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
 import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobLogResult;
 import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
 import org.apache.commons.io.IOUtils;
 
@@ -44,256 +46,138 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.concurrent.TimeUnit;
 
-public class LinkisClientTest {
-
-    public static void main(String[] args){
-
-        String user = "hadoop";
-        String executeCode = "show databases;";
+public class JavaClientTest {
 
-        // 1. Configure DWSClientBuilder and get a DWSClientConfig from it
-        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
-                .addServerUrl("http://${ip}:${port}")  // ServerUrl, the address of the linkis gateway, e.g. http://{ip}:{port}
-                .connectionTimeout(30000)   // connectionTimeOut, client connection timeout
-                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)  // whether to enable registration discovery; if enabled, newly started Gateways are discovered automatically
-                .loadbalancerEnabled(true)  // whether to enable load balancing; meaningless if registration discovery is disabled
-                .maxConnectionSize(5)   // maximum number of connections, i.e. maximum concurrency
-                .retryEnabled(false).readTimeout(30000)   // whether to retry on failure
-                .setAuthenticationStrategy(new StaticAuthenticationStrategy())   // AuthenticationStrategy, Linkis authentication method
-                .setAuthTokenKey("${username}").setAuthTokenValue("${password}")))  // authentication key, usually the user name; authentication value, usually the user's password
-                .setDWSVersion("v1").build();  // version of the linkis backend protocol, currently v1
+    // 1. build config: linkis gateway url
+    private static DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
+            .addServerUrl("http://127.0.0.1:9001/")   //set linkis-mg-gateway url: http://{ip}:{port}
+            .connectionTimeout(30000)   //connectionTimeOut
+            .discoveryEnabled(false) //disable discovery
+            .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
+            .loadbalancerEnabled(true)  // enable loadbalance
+            .maxConnectionSize(5)   // set max Connection
+            .retryEnabled(false) // set retry
+            .readTimeout(30000)  //set read timeout
+            .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy: Linkis authentication supports static and Token
+            .setAuthTokenKey("hadoop")  // set submit user
+            .setAuthTokenValue("hadoop")))  // set passwd or token (setAuthTokenValue("BML-AUTH"))
+            .setDWSVersion("v1") //linkis rest version v1
+            .build();
+
+    // 2. new Client(Linkis Client) by clientConfig
+    private static UJESClient client = new UJESClientImpl(clientConfig);
 
-        // 2. Get a UJESClient from the DWSClientConfig
-        UJESClient client = new UJESClientImpl(clientConfig);
+    public static void main(String[] args){
 
+        String user = "hadoop"; // execute user
+        String executeCode = "df=spark.sql(\"show tables\")\n" +
+                "show(df)"; // code support:sql/hql/py/scala
         try {
-            // 3. 开始执行代码
+
             System.out.println("user : " + user + ", code : [" + executeCode + 
"]");
-            Map<String, Object> startupMap = new HashMap<String, Object>();
-            startupMap.put("wds.linkis.yarnqueue", "default"); // 
在startupMap可以存放多种启动参数,参见linkis管理台配置
-            JobExecuteResult jobExecuteResult = 
client.execute(JobExecuteAction.builder()
-                    .setCreator("linkisClient-Test")  
//creator,请求linkis的客户端的系统名,用于做系统级隔离
-                    .addExecuteCode(executeCode)   //ExecutionCode 请求执行的代码
-                    .setEngineType((JobExecuteAction.EngineType) 
JobExecuteAction.EngineType$.MODULE$.HIVE()) // 希望请求的linkis的执行引擎类型,如Spark hive等
-                    .setUser(user)   //User,请求用户;用于做用户级多租户隔离
-                    .setStartupParams(startupMap)
-                    .build());
+            // 3. build job and execute
+            JobExecuteResult jobExecuteResult = toSubmit(user, executeCode);
+            //0.x:JobExecuteResult jobExecuteResult = toExecute(user, 
executeCode);
             System.out.println("execId: " + jobExecuteResult.getExecID() + ", 
taskId: " + jobExecuteResult.taskID());
-
-            // 4. 获取脚本的执行状态
+            // 4. get job jonfo
             JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
             int sleepTimeMills = 1000;
+            int logFromLen = 0;
+            int logSize = 100;
             while(!jobInfoResult.isCompleted()) {
-                // 5. 获取脚本的执行进度
+                // 5. get progress and log
                 JobProgressResult progress = client.progress(jobExecuteResult);
+                System.out.println("progress: " + progress.getProgress());
+                JobLogResult logRes = client.log(jobExecuteResult, logFromLen, 
logSize);
+                logFromLen = logRes.fromLine();
+                // 0: info 1: warn 2: error 3: all
+                System.out.println(logRes.log().get(3));
                 Utils.sleepQuietly(sleepTimeMills);
                 jobInfoResult = client.getJobInfo(jobExecuteResult);
             }
 
-            // 6. 获取脚本的Job信息
             JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
-            // 7. 获取结果集列表(如果用户一次提交多个SQL,会产生多个结果集)
+            // 6. Get the result set list (if the user submits multiple SQLs 
at a time,
+            // multiple result sets will be generated)
             String resultSet = jobInfo.getResultSetList(client)[0];
-            // 8. 通过一个结果集信息,获取具体的结果集
+            // 7. get resultContent
             Object fileContents = 
client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("fileContents: " + fileContents);
-
+            System.out.println("res: " + fileContents);
         } catch (Exception e) {
             e.printStackTrace();
             IOUtils.closeQuietly(client);
         }
         IOUtils.closeQuietly(client);
     }
-}
-```
-Run the above code to interact with Linkis
-
-### 3. Scala test code:
-```scala
-package com.webank.wedatasphere.linkis.client.test
-
-import java.util.concurrent.TimeUnit
-
-import com.webank.wedatasphere.linkis.common.utils.Utils
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient
-import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction.EngineType
-import com.webank.wedatasphere.linkis.ujes.client.request.{JobExecuteAction, ResultSetAction}
-import org.apache.commons.io.IOUtils
-
-object LinkisClientImplTest extends App {
-
-  var executeCode = "show databases;"
-  var user = "hadoop"
 
-  // 1. Configure DWSClientBuilder and get a DWSClientConfig from it
-  val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://${ip}:${port}")  // ServerUrl, the address of the Linkis gateway, e.g. http://{ip}:{port}
-    .connectionTimeout(30000)  // connectionTimeOut, client connection timeout
-    .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)  // whether to enable registration discovery; if enabled, newly started Gateways are discovered automatically
-    .loadbalancerEnabled(true)  // whether to enable load balancing; meaningless if registration discovery is disabled
-    .maxConnectionSize(5)   // maximum number of connections, i.e. maximum concurrency
-    .retryEnabled(false).readTimeout(30000)   // whether to retry on failure
-    .setAuthenticationStrategy(new StaticAuthenticationStrategy())  // AuthenticationStrategy, Linkis authentication method
-    .setAuthTokenKey("${username}").setAuthTokenValue("${password}")  // authentication key, usually the user name; authentication value, usually the user's password
-    .setDWSVersion("v1").build()  // version of the Linkis backend protocol, currently v1
-
-  // 2. Get a UJESClient from the DWSClientConfig
-  val client = UJESClient(clientConfig)
-  
-  try {
-    // 3. Start executing the code
-    println("user : " + user + ", code : [" + executeCode + "]")
-    val startupMap = new java.util.HashMap[String, Any]()
-    startupMap.put("wds.linkis.yarnqueue", "default") // startup parameter configuration
-    val jobExecuteResult = client.execute(JobExecuteAction.builder()
-      .setCreator("LinkisClient-Test")  // creator, the system name of the client requesting Linkis, used for system-level isolation
-      .addExecuteCode(executeCode)   // ExecutionCode, the code to execute
-      .setEngineType(EngineType.SPARK) // the Linkis execution engine type to request, such as Spark, hive, etc.
-      .setStartupParams(startupMap)
-      .setUser(user).build())  // User, the requesting user, used for user-level multi-tenant isolation
-    println("execId: " + jobExecuteResult.getExecID + ", taskId: " + jobExecuteResult.taskID)
-
-    // 4. Get the execution status of the script
-    var jobInfoResult = client.getJobInfo(jobExecuteResult)
-    val sleepTimeMills : Int = 1000
-    while (!jobInfoResult.isCompleted) {
-      // 5. Get the execution progress of the script
-      val progress = client.progress(jobExecuteResult)
-      val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
-      println("progress: " + progress.getProgress + ", progressInfo: " + progressInfo)
-      Utils.sleepQuietly(sleepTimeMills)
-      jobInfoResult = client.getJobInfo(jobExecuteResult)
-    }
-    if (!jobInfoResult.isSucceed) {
-      println("Failed to execute job: " + jobInfoResult.getMessage)
-      throw new Exception(jobInfoResult.getMessage)
+    /**
+     * Linkis 1.0 recommends using the Submit method
+     */
+    private static JobExecuteResult toSubmit(String user, String code) {
+        // 1. build params
+        // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
+        // set start up map: engineConn start params
+        Map<String, Object> startupMap = new HashMap<String, Object>(16);
+        // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+        startupMap.put("spark.executor.instances", 2);
+        // setting linkis params
+        startupMap.put("wds.linkis.rm.yarnqueue", "dws");
+
+        // 2. build jobSubmitAction
+        JobSubmitAction jobSubmitAction = JobSubmitAction.builder()
+                .addExecuteCode(code)
+                .setStartupParams(startupMap)
+                .setUser(user) //submit user
+                .addExecuteUser(user)  // execute user
+                .setLabels(labels)
+                .build();
+        // 3. to execute
+        return client.submit(jobSubmitAction);
     }
 
-    // 6. Get the Job information of the script
-    val jobInfo = client.getJobInfo(jobExecuteResult)
-    // 7. Get the list of result sets (if the user submits multiple SQLs at a time, multiple result sets will be generated)
-    val resultSetList = jobInfoResult.getResultSetList(client)
-    println("All result set list:")
-    resultSetList.foreach(println)
-    val oneResultSet = jobInfo.getResultSetList(client).head
-    // 8. Get the concrete result set through one result set path
-    val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
-    println("First fileContents: ")
-    println(fileContents)
-  } catch {
-    case e: Exception => {
-      e.printStackTrace()
+    /**
+     * Compatible with 0.X execution mode
+     */
+    private static JobExecuteResult toExecute(String user, String code) {
+        // 1. build params
+        // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+        Map<String, Object> labels = new HashMap<String, Object>();
+        // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
+        // set start up map: engineConn start params
+        Map<String, Object> startupMap = new HashMap<String, Object>(16);
+        // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+        startupMap.put("spark.executor.instances", 2);
+        // setting linkis params
+        startupMap.put("wds.linkis.rm.yarnqueue", "dws");
+
+        // 2. build JobExecuteAction (0.X old way of using)
+        JobExecuteAction executionAction = JobExecuteAction.builder()
+                .setCreator("IDE")  //creator, the system name of the client 
requesting linkis, used for system-level isolation
+                .addExecuteCode(code)   //Execution Code
+                .setEngineTypeStr("spark") // engineConn type
+                .setRunTypeStr("py") // code type
+                .setUser(user)   //execute user
+                .setStartupParams(startupMap) // start up params
+                .build();
+        executionAction.addRequestPayload(TaskConstant.LABELS, labels);
+        String body = executionAction.getRequestPayload();
+        System.out.println(body);
+
+        // 3. to execute
+        return client.execute(executionAction);
     }
-  }
-  IOUtils.closeQuietly(client)
 }
-```
 
-## 3. The new Submit method with Label support added in 1.0
-1.0 adds the client.submit method, which connects to the new task execution interface of 1.0 and supports passing in Label and other parameters
-### 3.1 Java test class
 ```
-package com.webank.wedatasphere.linkis.client.test;
-
-import com.webank.wedatasphere.linkis.common.utils.Utils;
-import com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
-import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
-import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
-import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
-import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
-import org.apache.commons.io.IOUtils;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.TimeUnit;
-
-public class JavaClientTest {
-
-    public static void main(String[] args){
-
-        String user = "hadoop";
-        String executeCode = "show tables";
-
-        // 1. Configure the ClientBuilder and get a ClientConfig
-        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) (DWSClientConfigBuilder.newBuilder()
-                .addServerUrl("http://${ip}:${port}")  // ServerUrl, the address of the linkis gateway, e.g. http://{ip}:{port}
-                .connectionTimeout(30000)   // connectionTimeOut, client connection timeout
-                .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)  // whether to enable registration discovery; if enabled, newly started Gateways are discovered automatically
-                .loadbalancerEnabled(true)  // whether to enable load balancing; meaningless if registration discovery is disabled
-                .maxConnectionSize(5)   // maximum number of connections, i.e. maximum concurrency
-                .retryEnabled(false).readTimeout(30000)   // whether to retry on failure
-                .setAuthenticationStrategy(new StaticAuthenticationStrategy())   // AuthenticationStrategy, Linkis authentication method
-                .setAuthTokenKey("${username}").setAuthTokenValue("${password}")))  // authentication key, usually the user name; authentication value, usually the user's password
-                .setDWSVersion("v1").build();  // version of the linkis backend protocol, currently v1
 
-        // 2. Get a UJESClient from the DWSClientConfig
-        UJESClient client = new UJESClientImpl(clientConfig);
+Running the above code completes task submission/execution and retrieval of logs and result sets.
 
-        try {
-            // 3. Start executing the code
-            System.out.println("user : " + user + ", code : [" + executeCode + "]");
-            Map<String, Object> startupMap = new HashMap<String, Object>();
-            // startupMap can hold a variety of startup parameters, see the linkis console configuration
-            startupMap.put("wds.linkis.yarnqueue", "q02");
-            // specify the Labels
-            Map<String, Object> labels = new HashMap<String, Object>();
-            // add the labels this execution depends on: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel
-            labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1");
-            labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");
-            labels.put(LabelKeyConstant.CODE_TYPE_KEY, "hql");
-            // specify the source
-            Map<String, Object> source = new HashMap<String, Object>();
-            source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test");
-            JobExecuteResult jobExecuteResult = client.submit( JobSubmitAction.builder()
-                    .addExecuteCode(executeCode)
-                    .setStartupParams(startupMap)
-                    .setUser(user)// user submitting the Job
-                    .addExecuteUser(user)// actual execute user
-                    .setLabels(labels)
-                    .setSource(source)
-                    .build()
-            );
-            System.out.println("execId: " + jobExecuteResult.getExecID() + ", taskId: " + jobExecuteResult.taskID());
-
-            // 4. Get the execution status of the script
-            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
-            int sleepTimeMills = 1000;
-            while(!jobInfoResult.isCompleted()) {
-                // 5. Get the execution progress of the script
-                JobProgressResult progress = client.progress(jobExecuteResult);
-                Utils.sleepQuietly(sleepTimeMills);
-                jobInfoResult = client.getJobInfo(jobExecuteResult);
-            }
-
-            // 6. Get the Job information of the script
-            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
-            // 7. Get the list of result sets (if the user submits multiple SQLs at a time, multiple result sets will be generated)
-            String resultSet = jobInfo.getResultSetList(client)[0];
-            // 8. Get the concrete result set through one result set path
-            Object fileContents = client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("fileContents: " + fileContents);
-
-        } catch (Exception e) {
-            e.printStackTrace();
-            IOUtils.closeQuietly(client);
-        }
-        IOUtils.closeQuietly(client);
-    }
-}
-
-```
-### 3.2 Scala test class
-
-```
+### 3. Scala test code:
+```scala
 package com.webank.wedatasphere.linkis.client.test
 
 import java.util
@@ -305,63 +189,59 @@ import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuild
 import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant
 import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant
 import com.webank.wedatasphere.linkis.ujes.client.UJESClient
-import com.webank.wedatasphere.linkis.ujes.client.request.{JobSubmitAction, ResultSetAction}
+import com.webank.wedatasphere.linkis.ujes.client.request.{JobExecuteAction, JobSubmitAction, ResultSetAction}
+import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult
 import org.apache.commons.io.IOUtils
+import org.apache.commons.lang.StringUtils
 
 
 object ScalaClientTest {
 
-  def main(args: Array[String]): Unit = {
-    val executeCode = "show tables"
-    val user = "hadoop"
-
-    // 1. Configure DWSClientBuilder and get a DWSClientConfig from it
-    val clientConfig = DWSClientConfigBuilder.newBuilder()
-      .addServerUrl("http://${ip}:${port}") // ServerUrl, the address of the Linkis gateway, e.g. http://{ip}:{port}
-      .connectionTimeout(30000)  // connectionTimeOut, client connection timeout
-      .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES)  // whether to enable registration discovery; if enabled, newly started Gateways are discovered automatically
-      .loadbalancerEnabled(true)  // whether to enable load balancing; meaningless if registration discovery is disabled
-      .maxConnectionSize(5)   // maximum number of connections, i.e. maximum concurrency
-      .retryEnabled(false).readTimeout(30000)   // whether to retry on failure
-      .setAuthenticationStrategy(new StaticAuthenticationStrategy())  // AuthenticationStrategy, Linkis authentication method
-      .setAuthTokenKey("${username}").setAuthTokenValue("${password}") // authentication key, usually the user name; authentication value, usually the user's password
-      .setDWSVersion("v1").build()  // version of the Linkis backend protocol, currently v1
-
-    // 2. Get a UJESClient from the DWSClientConfig
-    val client = UJESClient(clientConfig)
+  // 1. build config: linkis gateway url
+  val clientConfig = DWSClientConfigBuilder.newBuilder()
+    .addServerUrl("http://127.0.0.1:9001/";)   //set linkis-mg-gateway url: 
http://{ip}:{port}
+    .connectionTimeout(30000)   //connectionTimeOut
+    .discoveryEnabled(false) //disable discovery
+    .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
+    .loadbalancerEnabled(true)  // enable loadbalance
+    .maxConnectionSize(5)   // set max Connection
+    .retryEnabled(false) // set retry
+    .readTimeout(30000)  //set read timeout
+    .setAuthenticationStrategy(new StaticAuthenticationStrategy())   
//AuthenticationStrategy Linkis authen suppory static and Token
+    .setAuthTokenKey("hadoop")  // set submit user
+    .setAuthTokenValue("hadoop")  // set passwd or token 
(setAuthTokenValue("BML-AUTH"))
+    .setDWSVersion("v1") //linkis rest version v1
+    .build();
+
+  // 2. new Client(Linkis Client) by clientConfig
+  val client = UJESClient(clientConfig)
 
+  def main(args: Array[String]): Unit = {
+    val user = "hadoop" // execute user
+    val executeCode = "df=spark.sql(\"show tables\")\n" +
+      "show(df)"; // code support:sql/hql/py/scala
     try {
-      // 3. Start executing the code
+      // 3. build job and execute
       println("user : " + user + ", code : [" + executeCode + "]")
-      val startupMap = new java.util.HashMap[String, Any]()
-      startupMap.put("wds.linkis.yarnqueue", "q02") // startup parameter configuration
-      // specify the Labels
-      val labels: util.Map[String, Any] = new util.HashMap[String, Any]
-      // add the labels this execution depends on, such as the engineLabel
-      labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1")
-      labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE")
-      labels.put(LabelKeyConstant.CODE_TYPE_KEY, "hql")
-      // specify the source
-      val source: util.Map[String, Any] = new util.HashMap[String, Any]
-      source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test")
-      val jobExecuteResult = client.submit(JobSubmitAction.builder
-          .addExecuteCode(executeCode)
-          .setStartupParams(startupMap)
-          .setUser(user) // user submitting the Job
-          .addExecuteUser(user) // actual execute user
-          .setLabels(labels)
-          .setSource(source)
-          .build)  // User, the requesting user, used for user-level multi-tenant isolation
+      val jobExecuteResult = toSubmit(user, executeCode)
+      //0.X: val jobExecuteResult = toExecute(user, executeCode) 
       println("execId: " + jobExecuteResult.getExecID + ", taskId: " + 
jobExecuteResult.taskID)
-
-      // 4. 获取脚本的执行状态
+      // 4. get job jonfo
       var jobInfoResult = client.getJobInfo(jobExecuteResult)
+      var logFromLen = 0
+      val logSize = 100
       val sleepTimeMills : Int = 1000
       while (!jobInfoResult.isCompleted) {
-        // 5. Get the execution progress of the script
+        // 5. get progress and log
         val progress = client.progress(jobExecuteResult)
-        val progressInfo = if (progress.getProgressInfo != null) progress.getProgressInfo.toList else List.empty
-        println("progress: " + progress.getProgress + ", progressInfo: " + progressInfo)
+        println("progress: " + progress.getProgress)
+        val logObj = client.log(jobExecuteResult, logFromLen, logSize)
+        logFromLen = logObj.fromLine
+        val logArray = logObj.getLog
+        // 0: info 1: warn 2: error 3: all
+        if (logArray != null && logArray.size >= 4 && StringUtils.isNotEmpty(logArray.get(3))) {
+          println(s"log: ${logArray.get(3)}")
+        }
         Utils.sleepQuietly(sleepTimeMills)
         jobInfoResult = client.getJobInfo(jobExecuteResult)
       }
@@ -370,14 +250,14 @@ object ScalaClientTest {
         throw new Exception(jobInfoResult.getMessage)
       }
 
-      // 6. Get the Job information of the script
+      // 6. Get the result set list (if the user submits multiple SQLs at a time,
+      // multiple result sets will be generated)
       val jobInfo = client.getJobInfo(jobExecuteResult)
-      // 7. Get the list of result sets (if the user submits multiple SQLs at a time, multiple result sets will be generated)
       val resultSetList = jobInfoResult.getResultSetList(client)
       println("All result set list:")
       resultSetList.foreach(println)
       val oneResultSet = jobInfo.getResultSetList(client).head
-      // 8. Get the concrete result set through one result set path
+      // 7. get resultContent
       val fileContents = client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
       println("First fileContents: ")
       println(fileContents)
@@ -389,6 +269,64 @@ object ScalaClientTest {
     IOUtils.closeQuietly(client)
   }
 
+  /**
+   * Linkis 1.0 recommends using the Submit method
+   */
+  def toSubmit(user: String, code: String): JobExecuteResult = {
+    // 1. build params
+    // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+    val labels: util.Map[String, Any] = new util.HashMap[String, Any]
+    labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required engineType Label
+    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-IDE");// required execute user and creator
+    labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
+
+    val startupMap = new java.util.HashMap[String, Any]()
+    // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+    startupMap.put("spark.executor.instances", 2);
+    // setting linkis params
+    startupMap.put("wds.linkis.rm.yarnqueue", "dws");
+    // 2. build jobSubmitAction
+    val jobSubmitAction = JobSubmitAction.builder
+      .addExecuteCode(code)
+      .setStartupParams(startupMap)
+      .setUser(user) //submit user
+      .addExecuteUser(user) //execute user
+      .setLabels(labels)
+      .build
+    // 3. to execute
+    client.submit(jobSubmitAction)
+  }
+
+
+  /**
+   * Compatible with 0.X execution mode
+   */
+  def toExecute(user: String, code: String): JobExecuteResult = {
+    // 1. build params
+    // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+    val labels = new util.HashMap[String, Any]
+    // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
+
+    val startupMap = new java.util.HashMap[String, Any]()
+    // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+    startupMap.put("spark.executor.instances", 2)
+    // setting linkis params
+    startupMap.put("wds.linkis.rm.yarnqueue", "dws")
+    // 2. build JobExecuteAction (0.X old way of using)
+    val executionAction = JobExecuteAction.builder()
+      .setCreator("IDE")  //creator, the system name of the client requesting linkis, used for system-level isolation
+      .addExecuteCode(code)   //Execution Code
+      .setEngineTypeStr("spark") // engineConn type
+      .setRunTypeStr("py") // code type
+      .setUser(user)   //execute user
+      .setStartupParams(startupMap) // start up params
+      .build();
+    executionAction.addRequestPayload(TaskConstant.LABELS, labels);
+    // 3. to execute
+    client.execute(executionAction)
+  }
+
+
 }
 
 ```
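The progress/log polling loop from the demo above can be pulled out into a small helper. This is only a sketch that reuses calls appearing in the demo (getJobInfo, progress, log, fromLine, the four-element log list, Utils.sleepQuietly); the class and method names here are invented for the example.

```java
import com.webank.wedatasphere.linkis.common.utils.Utils;
import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
import com.webank.wedatasphere.linkis.ujes.client.response.JobLogResult;

public class JobWatcher {

    /** Polls a submitted job until it completes, printing progress and incremental logs. */
    public static JobInfoResult waitForCompletion(UJESClient client, JobExecuteResult job) {
        int logFromLen = 0;
        int logSize = 100;
        JobInfoResult jobInfo = client.getJobInfo(job);
        while (!jobInfo.isCompleted()) {
            System.out.println("progress: " + client.progress(job).getProgress());
            JobLogResult logRes = client.log(job, logFromLen, logSize);
            logFromLen = logRes.fromLine();
            // log list indexes: 0 info, 1 warn, 2 error, 3 all
            if (logRes.log() != null && logRes.log().size() >= 4) {
                System.out.println(logRes.log().get(3));
            }
            Utils.sleepQuietly(1000);
            jobInfo = client.getJobInfo(job);
        }
        return jobInfo;
    }
}
```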
diff --git a/versioned_docs/version-1.0.2/engine_usage/hive.md b/versioned_docs/version-1.0.2/engine_usage/hive.md
index 8ea76b5..c38b0ac 100644
--- a/versioned_docs/version-1.0.2/engine_usage/hive.md
+++ b/versioned_docs/version-1.0.2/engine_usage/hive.md
@@ -1,15 +1,15 @@
 ---
-title:  Hive Engine Usage
+title:  Hive engineConn Usage
 sidebar_position: 2
 ---
 
-# Hive engine usage documentation
+# Hive engineConn usage documentation
 
-This article mainly introduces the configuration, deployment and use of Hive engine in Linkis1.0.
+This article mainly introduces the configuration, deployment and use of Hive engineConn in Linkis1.0.
 
-## 1. Environment configuration before Hive engine use
+## 1. Environment configuration before Hive engineConn use
 
-If you want to use the hive engine on your server, you need to ensure that the following environment variables have been set correctly and that the user who started the engine has these environment variables.
+If you want to use the hive engineConn on your linkis server, you need to ensure that the following environment variables have been set correctly and that the user who started the engineConn has these environment variables.
 
 It is strongly recommended that you check these environment variables of the executing user before executing hive tasks.
 
@@ -22,30 +22,27 @@ It is strongly recommended that you check these environment variables of the exe
 
 Table 1-1 Environmental configuration list
 
-## 2. Hive engine configuration and deployment
+## 2. Hive engineConn configuration and deployment
 
 ### 2.1 Hive version selection and compilation
 
-The version of Hive supports hive1.x and hive2.x, the default is to support hive on MapReduce, if you want to change to Hive
-on Tez, you need to make some changes in accordance with this pr.
+The version of Hive supports hive1.x/hive2.x/hive3.x. The hive version supported by default is 2.3.3. If you want to modify the hive version, for example to 2.3.3, you can find the linkis-engineConnplugin-hive module, change the \<hive.version\> tag to 2.3.3, and then compile this module separately.
+The default is to support hive on MapReduce. If you want to change to Hive on Tez, you need to copy all the jars prefixed with tez-* to the directory: `${LINKIS_HOME}/lib/linkis-engineconn-plugins/hive/dist/version/lib`.
+Other hive operating modes are similar: just copy the corresponding dependencies to the lib directory of the Hive EngineConn.
 
-<https://github.com/apache/incubator-linkis/pull/541>
+### 2.2 hive engineConn deployment and loading
 
-The hive version supported by default is 1.2.1. If you want to modify the hive version, such as 2.3.3, you can find the linkis-engineplugin-hive module and change the \<hive.version\> tag to 2.3.3, then compile this module separately
+If your hive engineConn plug-in has already been compiled, you need to put the new plug-in in the specified location for it to be loaded; you can refer to the following article for details
 
-### 2.2 hive engineConn deployment and loading
+[EngineConnPlugin Installation](deployment/engine_conn_plugin_installation.md) 
 
-If you have already compiled your hive engine plug-in has been compiled, then you need to put the new plug-in in the specified location to load, you can refer to the following article for details
+### 2.3 Linkis adds Hive console parameters (optional)
 
-[EngineConnPlugin Installation](deployment/engine_conn_plugin_installation.md) 
+Linkis can configure the corresponding EngineConn parameters on the management console. If your newly added EngineConn needs this feature, you can refer to the following documents:
 
-### 2.3 Hive engine tags
+[engineConnConnPlugin Installation > 2.2 Configuration modification of 
management console 
(optional)](deployment/engineConn_conn_plugin_installation.md) 
 
-Linkis1.0 is done through tags, so we need to insert data in our database, the way of inserting is shown below.
-
-[EngineConnPlugin Installation > 2.2 Configuration modification of management console (optional)](deployment/engine_conn_plugin_installation.md) 
-
-## 3. Use of hive engine
+## 3. Use of hive engineConn
 
 ### Preparation for operation, queue setting
 
@@ -55,31 +52,40 @@ Hive's MapReduce task requires yarn resources, so you need to set up the queue a
 
 Figure 3-1 Queue settings
 
-### 3.1 How to use Scriptis
+You can also add the queue value in the StartUpMap of the submission parameter: `startupMap.put("wds.linkis.rm.yarnqueue", "dws")`
 
-The use of Scriptis is the simplest. You can directly enter Scriptis, right-click the directory and create a new hive script and write hivesql code.
+### 3.1 How to use Linkis SDK
 
-The implementation of the hive engine is by instantiating the driver instance of hive, and then the driver submits the task, and obtains the result set and displays it.
+Linkis provides a client method to call hive tasks through the SDK provided by LinkisClient. We provide both java and scala ways to call it; for the specific usage you can refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+If you use Hive, you only need to make the following changes:
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-2.3.3"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "hql"); // required codeType
+```
 
-![](/Images/EngineUsage/hive-run.png)
+### 3.2 How to use Linkis-cli
 
-Figure 3-2 Screenshot of the execution effect of hivesql
+After Linkis 1.0, you can submit tasks through the cli. You only need to specify the corresponding EngineConn and CodeType tag types. The use of Hive is as follows:
+```shell
+sh ./bin/linkis-cli -engineType hive-2.3.3 -codeType hql -code "show tables"  -submitUser hadoop -proxyUser hadoop
+```
+The specific usage can refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
-### 3.2 How to use workflow
+### 3.3 How to use Scriptis
 
-DSS workflow also has a hive node, you can drag in the workflow node, then double-click to enter and edit the code, and then execute it in the form of a workflow.
+The use of [Scriptis](https://github.com/WeBankFinTech/Scriptis) is the simplest. You can directly enter Scriptis, right-click the directory and create a new hive script and write hivesql code.
 
-![](/Images/EngineUsage/workflow.png)
+The hive engineConn is implemented by instantiating the driver instance of hive; the driver then submits the task, obtains the result set and displays it.
 
-Figure 3-5 The node where the workflow executes hive
-
-### 3.3 How to use Linkis Client
+![](/Images/EngineUsage/hive-run.png)
 
-Linkis also provides a client method to call hive tasks. The call method is through the SDK provided by LinkisClient. We provide java and scala two ways to call, the specific usage can refer to [JAVA SDK Manual](user_guide/sdk_manual.md).
+Figure 3-2 Screenshot of the execution effect of hql
 
-## 4. Hive engine user settings
+## 4. Hive engineConn user settings
 
-In addition to the above engine configuration, users can also make custom settings, including the memory size of the hive Driver process, etc.
+In addition to the above engineConn configuration, users can also make custom settings, including the memory size of the hive Driver process, etc.
 
 ![](/Images/EngineUsage/hive-config.png)
 
diff --git a/versioned_docs/version-1.0.2/engine_usage/jdbc.md b/versioned_docs/version-1.0.2/engine_usage/jdbc.md
index 2b81728..75d926e 100644
--- a/versioned_docs/version-1.0.2/engine_usage/jdbc.md
+++ b/versioned_docs/version-1.0.2/engine_usage/jdbc.md
@@ -1,32 +1,32 @@
 ---
-title:  JDBC Engine Usage
+title:  JDBC EngineConn Usage
 sidebar_position: 2
 ---
 
 
-# JDBC engine usage documentation
+# JDBC EngineConn usage documentation
 
-This article mainly introduces the configuration, deployment and use of JDBC engine in Linkis1.0.
+This article mainly introduces the configuration, deployment and use of JDBC EngineConn in Linkis1.0.
 
-## 1. Environment configuration before using the JDBC engine
+## 1. Environment configuration before using the JDBC EngineConn
 
-If you want to use the JDBC engine on your server, you need to prepare the JDBC connection information, such as the connection address, user name and password of the MySQL database, etc.
+If you want to use the JDBC EngineConn on your server, you need to prepare the JDBC connection information, such as the connection address, user name and password of the MySQL database, etc.
 
-## 2. JDBC engine configuration and deployment
+## 2. JDBC EngineConn configuration and deployment
 
 ### 2.1 JDBC version selection and compilation
 
-The JDBC engine does not need to be compiled by the user, and the compiled JDBC engine plug-in package can be used directly. Drivers that have been provided include MySQL, PostgreSQL, etc.
+The JDBC EngineConn does not need to be compiled by the user, and the compiled JDBC EngineConn plug-in package can be used directly. Drivers that have been provided include MySQL, PostgreSQL, etc.
 
-### 2.2 JDBC engineConn deployment and loading
+### 2.2 JDBC EngineConn deployment and loading
 
 Here you can use the default loading method to use it normally, just install it according to the standard version.
 
-### 2.3 JDBC engine tags
+### 2.3 JDBC EngineConn Labels
 
 Here you can use the default dml.sql to insert it and it can be used normally.
 
-## 3. The use of JDBC engine
+## 3. The use of JDBC EngineConn
 
 ### Ready to operate
 
@@ -36,24 +36,42 @@ You need to configure JDBC connection information, including connection address
 
 Figure 3-1 JDBC configuration information
 
-### 3.1 How to use Scriptis
+You can also specify these in the RuntimeMap of the submitted task:
+```shell
+wds.linkis.jdbc.connect.url 
+wds.linkis.jdbc.username
+wds.linkis.jdbc.password
+```
 
-The way to use Scriptis is the simplest. You can go directly to Scriptis, right-click the directory and create a new JDBC script, write JDBC code and click Execute.
+### 3.1 How to use Linkis SDK
 
-The execution principle of JDBC is to load the JDBC Driver and submit sql to the SQL server for execution and obtain the result set and return.
+Linkis provides a client method to call jdbc tasks through the SDK provided by LinkisClient. We provide both java and scala ways to call it; for the specific usage you can refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+If you use JDBC, you only need to make the following changes:
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "jdbc-4"); // required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "jdbc"); // required codeType
+```
 
-![](/Images/EngineUsage/jdbc-run.png)
+### 3.2 How to use Linkis-cli
 
-Figure 3-2 Screenshot of the execution effect of JDBC
+After Linkis 1.0, you can submit tasks through the cli. You only need to specify the corresponding EngineConn and CodeType tag types. The use of JDBC is as follows:
+```shell
+sh ./bin/linkis-cli -engineType jdbc-4 -codeType jdbc -code "show tables"  -submitUser hadoop -proxyUser hadoop
+```
+The specific usage can refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
-### 3.2 How to use workflow
+### 3.3 How to use Scriptis
 
-DSS workflow also has a JDBC node, you can drag into the workflow node, then double-click to enter and edit the code, and then execute it in the form of a workflow.
+The way to use [Scriptis](https://github.com/WeBankFinTech/Scriptis) is the simplest. You can go directly to Scriptis, right-click the directory and create a new JDBC script, write JDBC code and click Execute.
 
-### 3.3 How to use Linkis Client
+The execution principle of JDBC is to load the JDBC Driver and submit sql to the SQL server for execution and obtain the result set and return.
 
-Linkis also provides a client way to call JDBC tasks, the way to call is through the SDK provided by LinkisClient. We provide java and scala two ways to call, the specific usage can refer to [JAVA SDK Manual](user_guide/sdk_manual.md).
+![](/Images/EngineUsage/jdbc-run.png)
+
+Figure 3-2 Screenshot of the execution effect of JDBC
 
-## 4. JDBC engine user settings
+## 4. JDBC EngineConn user settings
 
 JDBC user settings are mainly JDBC connection information, but it is recommended that users encrypt and manage this password and other information.
\ No newline at end of file
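For completeness, a small sketch of reading back the first result set of a finished job, using only the calls shown in the sdk_manual.md demo in this commit (the helper class name is made up; getFileContent is left as Object, as in the demo).

```java
import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;

public class ResultFetcher {

    /** Returns the content of the first result set of a completed job, or null if there is none. */
    public static Object firstResult(UJESClient client, JobExecuteResult job) {
        JobInfoResult jobInfo = client.getJobInfo(job);
        String[] resultSets = jobInfo.getResultSetList(client);
        if (resultSets == null || resultSets.length == 0) {
            return null;
        }
        return client.resultSet(ResultSetAction.builder()
                .setPath(resultSets[0])
                .setUser(job.getUser())
                .build()).getFileContent();
    }
}
```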
diff --git a/versioned_docs/version-1.0.2/engine_usage/python.md b/versioned_docs/version-1.0.2/engine_usage/python.md
index fd16ba3..6a13512 100644
--- a/versioned_docs/version-1.0.2/engine_usage/python.md
+++ b/versioned_docs/version-1.0.2/engine_usage/python.md
@@ -1,16 +1,16 @@
 ---
-title:  Python Engine Usage
+title:  Python EngineConn Usage
 sidebar_position: 2
 ---
 
 
-# Python engine usage documentation
+# Python EngineConn usage documentation
 
-This article mainly introduces the configuration, deployment and use of the Python engine in Linkis1.0.
+This article mainly introduces the configuration, deployment and use of the Python EngineConn in Linkis1.0.
 
-## 1. Environment configuration before using Python engine
+## 1. Environment configuration before using Python EngineConn
 
-If you want to use the python engine on your server, you need to ensure that the python execution directory and execution permissions are in the user's PATH.
+If you want to use the python EngineConn on your server, you need to ensure that the python execution directory and execution permissions are in the user's PATH.
 
 | Environment variable name | Environment variable content | Remarks |
 |------------|-----------------|--------------------------------|
@@ -18,50 +18,61 @@ If you want to use the python engine on your server, you need to ensure that the
 
 Table 1-1 Environmental configuration list
 
-## 2. Python engine configuration and deployment
+## 2. Python EngineConn configuration and deployment
 
 ### 2.1 Python version selection and compilation
 
 Python supports python2 and
-For python3, you can simply change the configuration to complete the Python version switch, without recompiling the python engine version.
+For python3, you can simply change the configuration to complete the Python version switch, without recompiling the python EngineConn version.
 
 ### 2.2 python engineConn deployment and loading
 
 Here you can use the default loading method to be used normally.
 
-### 2.3 tags of python engine
+### 2.3 tags of python EngineConn
 
 Here you can use the default dml.sql to insert it and it can be used normally.
 
-## 3. Use of Python engine
+## 3. Use of Python EngineConn
 
 ### Ready to operate
 
 Before submitting python on linkis, you only need to make sure that there is python path in your user's PATH.
 
-### 3.1 How to use Scriptis
+### 3.1 How to use Linkis SDK
 
-The way to use Scriptis is the simplest. You can directly enter Scriptis, 
right-click the directory and create a new python script, write python code and 
click Execute.
+Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For the specific usage, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For Python tasks, you only need to modify the EngineConnType and CodeType parameters in the demo:
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "python-python2"); // 
required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// 
required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "python"); // required 
codeType
+```
 
-The execution logic of python is to start a python through Py4j
-Gateway, and then the Python engine submits the code to the python executor 
for execution.
+### 3.2 How to use Linkis-cli
 
-![](/Images/EngineUsage/python-run.png)
+Linkis 1.0 and later provide a cli way to submit tasks. You only need to specify the corresponding EngineConn and CodeType label types. Python usage is as follows:
+```shell
+sh ./bin/linkis-cli -engineType python-python2 -codeType python -code 
"print(\"hello\")"  -submitUser hadoop -proxyUser hadoop
+```
+For the specific usage, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
-Figure 3-1 Screenshot of the execution effect of python
+### 3.3 How to use Scriptis
 
-### 3.2 How to use workflow
+The way to use [Scriptis](https://github.com/WeBankFinTech/Scriptis) is the simplest: enter Scriptis directly, right-click a directory, create a new python script, write the python code and click Execute.
 
-The DSS workflow also has a python node, you can drag into the workflow node, 
then double-click to enter and edit the code, and then execute it in the form 
of a workflow.
+The execution logic of the Python EngineConn is to start a Python process through a Py4j gateway; the Python EngineConn then submits the code to the Python executor for execution.
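+
+As a rough illustration of the Py4j pattern this relies on (not Linkis's actual implementation), a JVM-side process exposes an entry point through a Py4j `GatewayServer`, and a separate Python process then connects to it to exchange code and results; the entry point class below is hypothetical:
+
+```java
+import py4j.GatewayServer;
+
+public class PythonGatewaySketch {
+    // Hypothetical entry point object exposed to the Python side
+    public String ping() {
+        return "pong";
+    }
+
+    public static void main(String[] args) {
+        // Start a Py4j gateway on the default port (25333); a Python process can now connect to it
+        GatewayServer gatewayServer = new GatewayServer(new PythonGatewaySketch());
+        gatewayServer.start();
+        System.out.println("Py4j gateway started");
+    }
+}
+```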
 
-### 3.3 How to use Linkis Client
+![](/Images/EngineUsage/python-run.png)
 
-Linkis also provides a client method to call spark tasks, the call method is 
through the SDK provided by LinkisClient. We provide java and scala two ways to 
call, the specific usage can refer to [JAVA SDK 
Manual](user_guide/sdk_manual.md).
+Figure 3-1 Screenshot of the execution effect of python
 
-## 4. Python engine user settings
+## 4. Python EngineConn user settings
 
-In addition to the above engine configuration, users can also make custom 
settings, such as the version of python and some modules that python needs to 
load.
+In addition to the above EngineConn configuration, users can also make custom settings, such as the Python version and the modules that Python needs to load.
 
-![](/Images/EngineUsage/jdbc-conf.png)
+![](/Images/EngineUsage/python-config.png)
 
 Figure 4-1 User-defined configuration management console of python
\ No newline at end of file
diff --git a/versioned_docs/version-1.0.2/engine_usage/shell.md 
b/versioned_docs/version-1.0.2/engine_usage/shell.md
index 494c812..82117fa 100644
--- a/versioned_docs/version-1.0.2/engine_usage/shell.md
+++ b/versioned_docs/version-1.0.2/engine_usage/shell.md
@@ -1,14 +1,14 @@
 ---
-title:  Shell Engine Usage
+title:  Shell EngineConn Usage
 sidebar_position: 2
 ---
 
-# Shell engine usage document
+# Shell EngineConn usage document
 
-This article mainly introduces the configuration, deployment and use of Shell 
engine in Linkis1.0
-## 1. The environment configuration before using the Shell engine
+This article mainly introduces the configuration, deployment and use of Shell 
EngineConn in Linkis1.0
+## 1. The environment configuration before using the Shell EngineConn
 
-If you want to use the shell engine on your server, you need to ensure that 
the user's PATH has the bash execution directory and execution permissions.
+If you want to use the shell EngineConn on your server, you need to ensure 
that the user's PATH has the bash execution directory and execution permissions.
 
 | Environment variable name | Environment variable content | Remarks           
  |
 
|---------------------------|------------------------------|---------------------|
@@ -16,45 +16,54 @@ If you want to use the shell engine on your server, you 
need to ensure that the
 
 Table 1-1 Environmental configuration list
 
-## 2. Shell engine configuration and deployment
+## 2. Shell EngineConn configuration and deployment
 
 ### 2.1 Shell version selection and compilation
 
-The shell engine does not need to be compiled by the user, and the compiled 
shell engine plug-in package can be used directly.
+The shell EngineConn does not need to be compiled by the user, and the 
compiled shell EngineConn plug-in package can be used directly.
 ### 2.2 shell engineConn deployment and loading
 
 Here you can use the default loading method to be used normally.
 
-### 2.3 Labels of the shell engine
+### 2.3 Labels of the shell EngineConn
 
 Here you can use the default dml.sql to insert it and it can be used normally.
 
-## 3. Use of Shell Engine
+## 3. Use of Shell EngineConn
 
 ### Ready to operate
 
 Before submitting the shell on linkis, you only need to ensure that there is 
the path of the shell in your user's $PATH.
 
-### 3.1 How to use Scriptis
+### 3.1 How to use Linkis SDK
 
-The use of Scriptis is the simplest. You can directly enter Scriptis, 
right-click the directory and create a new shell script, write shell code and 
click Execute.
+Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For the specific usage, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For Shell tasks, you only need to modify the EngineConnType and CodeType parameters in the demo:
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "shell-1"); // required 
engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// 
required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "shell"); // required 
codeType
+```
 
-The execution principle of the shell is that the shell engine starts a system 
process to execute through the ProcessBuilder that comes with java, and 
redirects the output of the process to the engine and writes it to the log.
+### 3.2 How to use Linkis-cli
 
-![](/Images/EngineUsage/shell-run.png)
-
-Figure 3-1 Screenshot of shell execution effect
+Linkis 1.0 and later provide a cli way to submit tasks. You only need to specify the corresponding EngineConn and CodeType label types. Shell usage is as follows:
+```shell
+sh ./bin/linkis-cli -engineType shell-1 -codeType shell -code "echo \"hello\" 
"  -submitUser hadoop -proxyUser hadoop
+```
+For the specific usage, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
 
-### 3.2 How to use workflow
+### 3.3 How to use Scriptis
 
-The DSS workflow also has a shell node. You can drag in the workflow node, 
then double-click to enter and edit the code, and then execute it in the form 
of a workflow.
+The use of [Scriptis](https://github.com/WeBankFinTech/Scriptis) is the simplest: enter Scriptis directly, right-click a directory, create a new shell script, write the shell code and click Execute.
 
-Shell execution needs to pay attention to one point. If the workflow is 
executed in multiple lines, the success of the workflow node is determined by 
the last command. For example, the first two lines are wrong, but the shell 
return value of the last line is 0, then this node Is successful.
+The execution principle of the shell EngineConn is that it starts a system process through Java's built-in ProcessBuilder, redirects the output of the process back to the EngineConn, and writes it to the log.
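+
+A minimal sketch of that pattern with Java's standard `ProcessBuilder` (for illustration only, not the EngineConn's actual code):
+
+```java
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+
+public class ShellProcessSketch {
+    public static void main(String[] args) throws Exception {
+        // Start the shell code as a system process and merge stderr into stdout
+        ProcessBuilder builder = new ProcessBuilder("bash", "-c", "echo hello; echo done");
+        builder.redirectErrorStream(true);
+        Process process = builder.start();
+
+        // Read the process output line by line; the EngineConn would write this to the task log
+        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
+            String line;
+            while ((line = reader.readLine()) != null) {
+                System.out.println(line);
+            }
+        }
+        System.out.println("exit code: " + process.waitFor());
+    }
+}
+```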
 
-### 3.3 How to use Linkis Client
+![](/Images/EngineUsage/shell-run.png)
 
-Linkis also provides a client method to call the shell task, the calling 
method is through the SDK provided by LinkisClient. We provide java and scala 
two ways to call, the specific usage can refer to [JAVA SDK 
Manual](user_guide/sdk_manual.md).
+Figure 3-1 Screenshot of shell execution effect
 
-## 4. Shell engine user settings
+## 4. Shell EngineConn user settings
 
-The shell engine can generally set the maximum memory of the engine JVM.
+For the shell EngineConn, you can generally set the maximum memory of the EngineConn JVM.
diff --git a/versioned_docs/version-1.0.2/engine_usage/spark.md 
b/versioned_docs/version-1.0.2/engine_usage/spark.md
index 90e6949..517a382 100644
--- a/versioned_docs/version-1.0.2/engine_usage/spark.md
+++ b/versioned_docs/version-1.0.2/engine_usage/spark.md
@@ -1,16 +1,16 @@
 ---
-title:  Spark Engine Usage
+title:  Spark EngineConn Usage
 sidebar_position: 2
 ---
 
 
-# Spark engine usage documentation
+# Spark EngineConn usage documentation
 
-This article mainly introduces the configuration, deployment and use of spark 
engine in Linkis1.0.
+This article mainly introduces the configuration, deployment and use of spark 
EngineConn in Linkis1.0.
 
-## 1. Environment configuration before using Spark engine
+## 1. Environment configuration before using Spark EngineConn
 
-If you want to use the spark engine on your server, you need to ensure that 
the following environment variables have been set correctly and that the user 
who started the engine has these environment variables.
+If you want to use the spark EngineConn on your server, you need to ensure 
that the following environment variables have been set correctly and that the 
user who started the EngineConn has these environment variables.
 
 It is strongly recommended that you check these environment variables of the 
executing user before executing spark tasks.
 
@@ -19,14 +19,14 @@ It is strongly recommended that you check these environment 
variables of the exe
 | JAVA_HOME | JDK installation path | Required |
 | HADOOP_HOME | Hadoop installation path | Required |
 | HADOOP_CONF_DIR | Hadoop configuration path | Required |
-| HIVE\_CONF_DIR | Hive configuration path | Required |
+| HIVE_CONF_DIR | Hive configuration path | Required |
 | SPARK_HOME | Spark installation path | Required |
 | SPARK_CONF_DIR | Spark configuration path | Required |
 | python | python | Anaconda's python is recommended as the default python |
 
 Table 1-1 Environmental configuration list
 
-## 2. Configuration and deployment of Spark engine
+## 2. Configuration and deployment of Spark EngineConn
 
 ### 2.1 Selection and compilation of spark version
 
@@ -34,17 +34,17 @@ In theory, Linkis1.0 supports all versions of spark2.x and 
above. Spark 2.4.3 is
 
 ### 2.2 spark engineConn deployment and loading
 
-If you have already compiled your spark engine plug-in has been compiled, then 
you need to put the new plug-in to the specified location to load, you can 
refer to the following article for details
+If you have already compiled your spark EngineConn plug-in, you need to put the new plug-in in the specified location for it to be loaded. You can refer to the following article for details:
 
 [EngineConnPlugin Installation](deployment/engine_conn_plugin_installation.md) 
 
-### 2.3 tags of spark engine
+### 2.3 tags of spark EngineConn
 
 Linkis1.0 is done through tags, so we need to insert data in our database, the 
way of inserting is shown below.
 
 [EngineConnPlugin Installation > 2.2 Configuration modification of management 
console (optional)](deployment/engine_conn_plugin_installation.md) 
 
-## 3. Use of spark engine
+## 3. Use of spark EngineConn
 
 ### Preparation for operation, queue setting
 
@@ -54,11 +54,34 @@ Because the execution of spark is a resource that requires 
a queue, the user mus
 
 Figure 3-1 Queue settings
 
-### 3.1 How to use Scriptis
+You can also add the queue value to the StartUpMap of the submission parameters: `startupMap.put("wds.linkis.rm.yarnqueue", "dws")`
 
-The use of Scriptis is the simplest. You can directly enter Scriptis and 
create a new sql, scala or pyspark script for execution.
+### 3.1 How to use Linkis SDK
 
-The sql method is the simplest. You can create a new sql script and write and 
execute it. When it is executed, the progress will be displayed. If the user 
does not have a spark engine at the beginning, the execution of sql will start 
a spark session (it may take some time here),
+Linkis provides Java and Scala SDKs for submitting tasks to the Linkis server. For the specific usage, refer to the [JAVA SDK Manual](user_guide/sdk_manual.md).
+For Spark tasks, you only need to modify the EngineConnType and CodeType parameters in the demo:
+```java
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // 
required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");// 
required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "sql"); // required codeType
+```
+
+### 3.2 How to use Linkis-cli
+
+Linkis 1.0 and later provide a cli way to submit tasks. You only need to specify the corresponding EngineConn and CodeType label types. Spark usage is as follows:
+```shell
+## codeType py-->pyspark  sql-->sparkSQL scala-->Spark scala
+sh ./bin/linkis-cli -engineType spark-2.4.3 -codeType sql -code "show tables"  
-submitUser hadoop -proxyUser hadoop
+```
+For the specific usage, refer to the [Linkis CLI Manual](user_guide/linkiscli_manual.md).
+
+
+### 3.3 How to use Scriptis
+
+The use of [Scriptis](https://github.com/WeBankFinTech/Scriptis) is the simplest: enter Scriptis directly and create a new sql, scala or pyspark script for execution.
+
+The sql method is the simplest: you can create a new sql script, then write and execute it. While it is executing, the progress will be displayed. If the user does not yet have a spark EngineConn, executing the sql will first start a spark session (this may take some time).
 After the SparkSession is initialized, you can start to execute sql.
 
 ![](/Images/EngineUsage/sparksql-run.png)
@@ -76,21 +99,9 @@ Similarly, in the way of pyspark, we have also initialized 
the SparkSession, and
 ![](/Images/EngineUsage/pyspakr-run.png)
 Figure 3-4 pyspark execution mode
 
-### 3.2 How to use workflow
-
-DSS workflow also has three spark nodes. You can drag in workflow nodes, such 
as sql, scala or pyspark nodes, and then double-click to enter and edit the 
code, and then execute in the form of workflow.
-
-![](/Images/EngineUsage/workflow.png)
-
-Figure 3-5 The node where the workflow executes spark
-
-### 3.3 How to use Linkis Client
-
-Linkis also provides a client method to call spark tasks, the call method is 
through the SDK provided by LinkisClient. We provide java and scala two ways to 
call, the specific usage can refer to [JAVA SDK 
Manual](user_guide/sdk_manual.md).
-
-## 4. Spark engine user settings
+## 4. Spark EngineConn user settings
 
-In addition to the above engine configuration, users can also make custom 
settings, such as the number of spark session executors and the memory of the 
executors. These parameters are for users to set their own spark parameters 
more freely, and other spark parameters can also be modified, such as the 
python version of pyspark.
+In addition to the above EngineConn configuration, users can also make custom settings, such as the number of spark session executors and the executor memory. These parameters let users set their own spark parameters more freely, and other spark parameters can also be modified, such as the python version used by pyspark.
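+
+When submitting through the SDK, the same kind of settings can also be passed in the StartUpMap. In the sketch below, `spark.executor.memory` and `spark.executor.cores` are standard Spark properties; whether the management console exposes them under exactly these names is an assumption:
+
+```java
+        Map<String, Object> startupMap = new HashMap<String, Object>();
+        startupMap.put("spark.executor.instances", 2);   // number of executors
+        startupMap.put("spark.executor.memory", "4g");   // executor memory (standard Spark property)
+        startupMap.put("spark.executor.cores", 2);       // executor cores (standard Spark property)
+```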
 
 ![](/Images/EngineUsage/spark-conf.png)
 
diff --git a/versioned_docs/version-1.0.2/user_guide/sdk_manual.md 
b/versioned_docs/version-1.0.2/user_guide/sdk_manual.md
index acd7304..6973171 100644
--- a/versioned_docs/version-1.0.2/user_guide/sdk_manual.md
+++ b/versioned_docs/version-1.0.2/user_guide/sdk_manual.md
@@ -8,38 +8,39 @@ sidebar_position: 2
 ## 1. Introduce dependent modules
 ```
 <dependency>
-   <groupId>com.webank.wedatasphere.linkis</groupId>
+   <groupId>org.apache.linkis</groupId>
    <artifactId>linkis-computation-client</artifactId>
    <version>${linkis.version}</version>
 </dependency>
 Such as:
 <dependency>
-   <groupId>com.webank.wedatasphere.linkis</groupId>
+   <groupId>org.apache.linkis</groupId>
    <artifactId>linkis-computation-client</artifactId>
-   <version>1.0.0-RC1</version>
+   <version>1.0.2</version>
 </dependency>
 ```
 
-## 2. Compatible with 0.X Execute method submission
+## 2. Java test code
 
-### 2.1 Java test code
-
-Create the Java test class UJESClientImplTestJ. Refer to the comments to 
understand the purposes of those interfaces:
+Create the Java test class JavaClientTest. Refer to the comments to understand the purposes of these interfaces:
 
 ```java
 package com.webank.wedatasphere.linkis.client.test;
 
 import com.webank.wedatasphere.linkis.common.utils.Utils;
 import 
com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
-import 
com.webank.wedatasphere.linkis.httpclient.dws.authentication.TokenAuthenticationStrategy;
 import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
 import 
com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
+import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
+import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant;
 import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
 import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
 import com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction;
+import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
 import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
 import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
 import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
+import com.webank.wedatasphere.linkis.ujes.client.response.JobLogResult;
 import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
 import org.apache.commons.io.IOUtils;
 
@@ -47,260 +48,138 @@ import java.util.HashMap;
 import java.util.Map;
 import java.util.concurrent.TimeUnit;
 
-public class LinkisClientTest {
-
-    public static void main(String[] args){
-
-        String user = "hadoop";
-        String executeCode = "show databases;";
+public class JavaClientTest {
 
-        // 1. Configure DWSClientBuilder, get a DWSClientConfig through 
DWSClientBuilder
-        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) 
(DWSClientConfigBuilder.newBuilder()
-                .addServerUrl("http://${ip}:${port}";)  //Specify ServerUrl, 
the address of the linkis gateway, such as http://{ip}:{port}
-                .connectionTimeout(30000)   //connectionTimeOut Client 
connection timeout
-                .discoveryEnabled(false).discoveryFrequency(1, 
TimeUnit.MINUTES)  //Whether to enable registration discovery, if enabled, the 
newly launched Gateway will be automatically discovered
-                .loadbalancerEnabled(true)  // Whether to enable load 
balancing, if registration discovery is not enabled, load balancing is 
meaningless
-                .maxConnectionSize(5)   //Specify the maximum number of 
connections, that is, the maximum number of concurrent
-                .retryEnabled(false).readTimeout(30000)   //Execution failed, 
whether to allow retry
-                .setAuthenticationStrategy(new StaticAuthenticationStrategy()) 
  //AuthenticationStrategy Linkis login authentication method
-                
.setAuthTokenKey("${username}").setAuthTokenValue("${password}")))  
//Authentication key, generally the user name; authentication value, generally 
the password corresponding to the user name
-                .setDWSVersion("v1").build();  //The version of the linkis 
backend protocol, the current version is v1
+    // 1. build config: linkis gateway url
+    private static DWSClientConfig clientConfig = ((DWSClientConfigBuilder) 
(DWSClientConfigBuilder.newBuilder()
+            .addServerUrl("http://10.107.118.104:9001/";)   //set 
linkis-mg-gateway url: http://{ip}:{port}
+            .connectionTimeout(30000)   //connectionTimeOut
+            .discoveryEnabled(false) //disable discovery
+            .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
+            .loadbalancerEnabled(true)  // enable loadbalance
+            .maxConnectionSize(5)   // set max Connection
+            .retryEnabled(false) // set retry
+            .readTimeout(30000)  //set read timeout
+            .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy: Linkis authentication supports Static and Token
+            .setAuthTokenKey("hadoop")  // set submit user
+            .setAuthTokenValue("hadoop")))  // set passwd or token 
(setAuthTokenValue("BML-AUTH"))
+            .setDWSVersion("v1") //linkis rest version v1
+            .build();
+
+    // 2. new Client(Linkis Client) by clientConfig
+    private static UJESClient client = new UJESClientImpl(clientConfig);
 
-        // 2. Obtain a UJESClient through DWSClientConfig
-        UJESClient client = new UJESClientImpl(clientConfig);
+    public static void main(String[] args){
 
+        String user = "hadoop"; // execute user
+        String executeCode = "df=spark.sql(\"show tables\")\n" +
+                "show(df)"; // code support:sql/hql/py/scala
         try {
-            // 3. Start code execution
+
             System.out.println("user : " + user + ", code : [" + executeCode + 
"]");
-            Map<String, Object> startupMap = new HashMap<String, Object>();
-            startupMap.put("wds.linkis.yarnqueue", "default"); // A variety of 
startup parameters can be stored in startupMap, see linkis management console 
configuration
-            JobExecuteResult jobExecuteResult = 
client.execute(JobExecuteAction.builder()
-                    .setCreator("linkisClient-Test")  //creator,the system 
name of the client requesting linkis, used for system-level isolation
-                    .addExecuteCode(executeCode)   //ExecutionCode Requested 
code
-                    .setEngineType((JobExecuteAction.EngineType) 
JobExecuteAction.EngineType$.MODULE$.HIVE()) // The execution engine type of 
the linkis that you want to request, such as Spark hive, etc.
-                    .setUser(user)   //User,Requesting users; used for 
user-level multi-tenant isolation
-                    .setStartupParams(startupMap)
-                    .build());
+            // 3. build job and execute
+            JobExecuteResult jobExecuteResult = toSubmit(user, executeCode);
+            //0.x:JobExecuteResult jobExecuteResult = toExecute(user, 
executeCode);
             System.out.println("execId: " + jobExecuteResult.getExecID() + ", 
taskId: " + jobExecuteResult.taskID());
-
-            // 4. Get the execution status of the script
+            // 4. get job info
             JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
             int sleepTimeMills = 1000;
+            int logFromLen = 0;
+            int logSize = 100;
             while(!jobInfoResult.isCompleted()) {
-                // 5. Get the execution progress of the script
+                // 5. get progress and log
                 JobProgressResult progress = client.progress(jobExecuteResult);
+                System.out.println("progress: " + progress.getProgress());
+                JobLogResult logRes = client.log(jobExecuteResult, logFromLen, 
logSize);
+                logFromLen = logRes.fromLine();
+                // 0: info 1: warn 2: error 3: all
+                System.out.println(logRes.log().get(3));
                 Utils.sleepQuietly(sleepTimeMills);
                 jobInfoResult = client.getJobInfo(jobExecuteResult);
             }
 
-            // 6. Get the job information of the script
             JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
-            // 7. Get a list of result sets (if the user submits multiple SQL 
at a time, multiple result sets will be generated)
+            // 6. Get the result set list (if the user submits multiple SQLs 
at a time,
+            // multiple result sets will be generated)
             String resultSet = jobInfo.getResultSetList(client)[0];
-            // 8. Get a specific result set through a result set information
+            // 7. get resultContent
             Object fileContents = 
client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("fileContents: " + fileContents);
-
+            System.out.println("res: " + fileContents);
         } catch (Exception e) {
             e.printStackTrace();
             IOUtils.closeQuietly(client);
         }
         IOUtils.closeQuietly(client);
     }
-}
-```
-
-Run the above code to interact with Linkis
-
-### 3. Scala test code:
-
-```scala
-package com.webank.wedatasphere.linkis.client.test
-
-import java.util.concurrent.TimeUnit
-
-import com.webank.wedatasphere.linkis.common.utils.Utils
-import 
com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy
-import 
com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient
-import 
com.webank.wedatasphere.linkis.ujes.client.request.JobExecuteAction.EngineType
-import com.webank.wedatasphere.linkis.ujes.client.request.{JobExecuteAction, 
ResultSetAction}
-import org.apache.commons.io.IOUtils
-
-object LinkisClientImplTest extends App {
-
-  var executeCode = "show databases;"
-  var user = "hadoop"
-
-  // 1. Configure DWSClientBuilder, get a DWSClientConfig through 
DWSClientBuilder
-  val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://${ip}:${port}";) //Specify ServerUrl, the address of 
the Linkis server-side gateway, such as http://{ip}:{port}
-    .connectionTimeout(30000) //connectionTimeOut client connection timeout
-    .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) //Whether 
to enable registration discovery, if enabled, the newly launched Gateway will 
be automatically discovered
-    .loadbalancerEnabled(true) // Whether to enable load balancing, if 
registration discovery is not enabled, load balancing is meaningless
-    .maxConnectionSize(5) //Specify the maximum number of connections, that 
is, the maximum number of concurrent
-    .retryEnabled(false).readTimeout(30000) //execution failed, whether to 
allow retry
-    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) 
//AuthenticationStrategy Linkis authentication method
-    .setAuthTokenKey("${username}").setAuthTokenValue("${password}") 
//Authentication key, generally the user name; authentication value, generally 
the password corresponding to the user name
-    .setDWSVersion("v1").build() //Linkis backend protocol version, the 
current version is v1
-
-  // 2. Get a UJESClient through DWSClientConfig
-  val client = UJESClient(clientConfig)
-  
-  try {
-    // 3. Start code execution
-    println("user: "+ user + ", code: [" + executeCode + "]")
-    val startupMap = new java.util.HashMap[String, Any]()
-    startupMap.put("wds.linkis.yarnqueue", "default") //Startup parameter 
configuration
-    val jobExecuteResult = client.execute(JobExecuteAction.builder()
-      .setCreator("LinkisClient-Test") //creator, requesting the system name 
of the Linkis client, used for system-level isolation
-      .addExecuteCode(executeCode) //ExecutionCode The code to be executed
-      .setEngineType(EngineType.SPARK) // The execution engine type of Linkis 
that you want to request, such as Spark hive, etc.
-      .setStartupParams(startupMap)
-      .setUser(user).build()) //User, request user; used for user-level 
multi-tenant isolation
-    println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + 
jobExecuteResult.taskID)
-    
-    // 4. Get the execution status of the script
-    var jobInfoResult = client.getJobInfo(jobExecuteResult)
-    val sleepTimeMills: Int = 1000
-    while (!jobInfoResult.isCompleted) {
-      // 5. Get the execution progress of the script
-      val progress = client.progress(jobExecuteResult)
-      val progressInfo = if (progress.getProgressInfo != null) 
progress.getProgressInfo.toList else List.empty
-      println("progress: "+ progress.getProgress + ", progressInfo:" + 
progressInfo)
-      Utils.sleepQuietly(sleepTimeMills)
-      jobInfoResult = client.getJobInfo(jobExecuteResult)
-    }
-    if (!jobInfoResult.isSucceed) {
-      println("Failed to execute job: "+ jobInfoResult.getMessage)
-      throw new Exception(jobInfoResult.getMessage)
-    }
 
-    // 6. Get the job information of the script
-    val jobInfo = client.getJobInfo(jobExecuteResult)
-    // 7. Get the list of result sets (if the user submits multiple SQL at a 
time, multiple result sets will be generated)
-    val resultSetList = jobInfoResult.getResultSetList(client)
-    println("All result set list:")
-    resultSetList.foreach(println)
-    val oneResultSet = jobInfo.getResultSetList(client).head
-    // 8. Get a specific result set through a result set information
-    val fileContents = 
client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
-    println("First fileContents: ")
-    println(fileContents)
-  } catch {
-    case e: Exception => {
-      e.printStackTrace()
+    /**
+     * Linkis 1.0 recommends using the submit method.
+     */
+    private static JobExecuteResult toSubmit(String user, String code) {
+        // 1. build  params
+        // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+        Map<String, Object> labels = new HashMap<String, Object>();
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // 
required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-IDE");// 
required execute user and creator
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
+        // set startup map: engineConn start params
+        Map<String, Object> startupMap = new HashMap<String, Object>(16);
+        // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+        startupMap.put("spark.executor.instances", 2);
+        // setting linkis params
+        startupMap.put("wds.linkis.rm.yarnqueue", "dws");
+
+        // 2. build jobSubmitAction
+        JobSubmitAction jobSubmitAction = JobSubmitAction.builder()
+                .addExecuteCode(code)
+                .setStartupParams(startupMap)
+                .setUser(user) //submit user
+                .addExecuteUser(user)  // execute user
+                .setLabels(labels)
+                .build();
+        // 3. to execute
+        return client.submit(jobSubmitAction);
     }
-  }
-  IOUtils.closeQuietly(client)
-}
-```
-
-## 3. Linkis1.0 new submit interface with Label support
-
-Linkis1.0 adds the client.submit method, which is used to adapt with the new 
task execution interface of 1.0, and supports the input of Label and other 
parameters
-
-### 3.1 Java Test Class
-
-```java
-package com.webank.wedatasphere.linkis.client.test;
-
-import com.webank.wedatasphere.linkis.common.utils.Utils;
-import 
com.webank.wedatasphere.linkis.httpclient.dws.authentication.StaticAuthenticationStrategy;
-import com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfig;
-import 
com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuilder;
-import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant;
-import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClient;
-import com.webank.wedatasphere.linkis.ujes.client.UJESClientImpl;
-import com.webank.wedatasphere.linkis.ujes.client.request.JobSubmitAction;
-import com.webank.wedatasphere.linkis.ujes.client.request.ResultSetAction;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobInfoResult;
-import com.webank.wedatasphere.linkis.ujes.client.response.JobProgressResult;
-import org.apache.commons.io.IOUtils;
-
-import java.util.HashMap;
-import java.util.Map;
-import java.util.concurrent.TimeUnit;
-
-public class JavaClientTest {
-
-    public static void main(String[] args){
-
-        String user = "hadoop";
-        String executeCode = "show tables";
-
-        // 1. Configure ClientBuilder and get ClientConfig
-        DWSClientConfig clientConfig = ((DWSClientConfigBuilder) 
(DWSClientConfigBuilder.newBuilder()
-                .addServerUrl("http://${ip}:${port}";) //Specify ServerUrl, the 
address of the linkis server-side gateway, such as http://{ip}:{port}
-                .connectionTimeout(30000) //connectionTimeOut client 
connection timeout
-                .discoveryEnabled(false).discoveryFrequency(1, 
TimeUnit.MINUTES) //Whether to enable registration discovery, if enabled, the 
newly launched Gateway will be automatically discovered
-                .loadbalancerEnabled(true) // Whether to enable load 
balancing, if registration discovery is not enabled, load balancing is 
meaningless
-                .maxConnectionSize(5) //Specify the maximum number of 
connections, that is, the maximum number of concurrent
-                .retryEnabled(false).readTimeout(30000) //execution failed, 
whether to allow retry
-                .setAuthenticationStrategy(new StaticAuthenticationStrategy()) 
//AuthenticationStrategy Linkis authentication method
-                
.setAuthTokenKey("${username}").setAuthTokenValue("${password}"))) 
//Authentication key, generally the user name; authentication value, generally 
the password corresponding to the user name
-                .setDWSVersion("v1").build(); //Linkis background protocol 
version, the current version is v1
-
-        // 2. Get a UJESClient through DWSClientConfig
-        UJESClient client = new UJESClientImpl(clientConfig);
-
-        try {
-            // 3. Start code execution
-            System.out.println("user: "+ user + ", code: [" + executeCode + 
"]");
-            Map<String, Object> startupMap = new HashMap<String, Object>();
-            // A variety of startup parameters can be stored in startupMap, 
see linkis management console configuration
-            startupMap.put("wds.linkis.yarnqueue", "q02");
-            //Specify Label
-            Map<String, Object> labels = new HashMap<String, Object>();
-            //Add the label that this execution depends on: 
EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel
-            labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1");
-            labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE");
-            labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql");
-            //Specify source
-            Map<String, Object> source = new HashMap<String, Object>();
-            source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test");
-            JobExecuteResult jobExecuteResult = client.submit( 
JobSubmitAction.builder()
-                    .addExecuteCode(executeCode)
-                    .setStartupParams(startupMap)
-                    .setUser(user)//Job submit user
-                    .addExecuteUser(user)//The actual execution user
-                    .setLabels(labels)
-                    .setSource(source)
-                    .build()
-            );
-            System.out.println("execId: "+ jobExecuteResult.getExecID() + ", 
taskId:" + jobExecuteResult.taskID());
-
-            // 4. Get the execution status of the script
-            JobInfoResult jobInfoResult = client.getJobInfo(jobExecuteResult);
-            int sleepTimeMills = 1000;
-            while(!jobInfoResult.isCompleted()) {
-                // 5. Get the execution progress of the script
-                JobProgressResult progress = client.progress(jobExecuteResult);
-                Utils.sleepQuietly(sleepTimeMills);
-                jobInfoResult = client.getJobInfo(jobExecuteResult);
-            }
-
-            // 6. Get the job information of the script
-            JobInfoResult jobInfo = client.getJobInfo(jobExecuteResult);
-            // 7. Get the list of result sets (if the user submits multiple 
SQL at a time, multiple result sets will be generated)
-            String resultSet = jobInfo.getResultSetList(client)[0];
-            // 8. Get a specific result set through a result set information
-            Object fileContents = 
client.resultSet(ResultSetAction.builder().setPath(resultSet).setUser(jobExecuteResult.getUser()).build()).getFileContent();
-            System.out.println("fileContents: "+ fileContents);
 
-        } catch (Exception e) {
-            e.printStackTrace();
-            IOUtils.closeQuietly(client);
-        }
-        IOUtils.closeQuietly(client);
+    /**
+     * Compatible with 0.X execution mode
+     */
+    private static JobExecuteResult toExecute(String user, String code) {
+        // 1. build  params
+        // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+        Map<String, Object> labels = new HashMap<String, Object>();
+        // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
+        // set startup map: engineConn start params
+        Map<String, Object> startupMap = new HashMap<String, Object>(16);
+        // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+        startupMap.put("spark.executor.instances", 2);
+        // setting linkis params
+        startupMap.put("wds.linkis.rm.yarnqueue", "dws");
+
+        // 2. build JobExecuteAction (0.X old way of using)
+        JobExecuteAction executionAction = JobExecuteAction.builder()
+                .setCreator("IDE")  //creator, the system name of the client 
requesting linkis, used for system-level isolation
+                .addExecuteCode(code)   //Execution Code
+                .setEngineTypeStr("spark") // engineConn type
+                .setRunTypeStr("py") // code type
+                .setUser(user)   //execute user
+                .setStartupParams(startupMap) // start up params
+                .build();
+        executionAction.addRequestPayload(TaskConstant.LABELS, labels);
+        String body = executionAction.getRequestPayload();
+        System.out.println(body);
+
+        // 3. to execute
+        return client.execute(executionAction);
     }
 }
 
 ```
 
-### 3.2 Scala Test Class
+Run the above code to interact with Linkis.
+
+## 3. Scala test code
+Create the Scala test class ScalaClientTest. Refer to the comments to understand the purposes of these interfaces:
 
 ```scala
 package com.webank.wedatasphere.linkis.client.test
@@ -314,79 +193,75 @@ import 
com.webank.wedatasphere.linkis.httpclient.dws.config.DWSClientConfigBuild
 import com.webank.wedatasphere.linkis.manager.label.constant.LabelKeyConstant
 import com.webank.wedatasphere.linkis.protocol.constants.TaskConstant
 import com.webank.wedatasphere.linkis.ujes.client.UJESClient
-import com.webank.wedatasphere.linkis.ujes.client.request.{JobSubmitAction, 
ResultSetAction}
+import com.webank.wedatasphere.linkis.ujes.client.request.{JobExecuteAction, 
JobSubmitAction, ResultSetAction}
+import com.webank.wedatasphere.linkis.ujes.client.response.JobExecuteResult
 import org.apache.commons.io.IOUtils
+import org.apache.commons.lang.StringUtils
 
 
 object ScalaClientTest {
 
-  def main(args: Array[String]): Unit = {
-    val executeCode = "show tables"
-    val user = "hadoop"
-
-    // 1. Configure DWSClientBuilder, get a DWSClientConfig through 
DWSClientBuilder
-    val clientConfig = DWSClientConfigBuilder.newBuilder()
-      .addServerUrl("http://${ip}:${port}";) //Specify ServerUrl, the address 
of the Linkis server-side gateway, such as http://{ip}:{port}
-      .connectionTimeout(30000) //connectionTimeOut client connection timeout
-      .discoveryEnabled(false).discoveryFrequency(1, TimeUnit.MINUTES) 
//Whether to enable registration discovery, if enabled, the newly launched 
Gateway will be automatically discovered
-      .loadbalancerEnabled(true) // Whether to enable load balancing, if 
registration discovery is not enabled, load balancing is meaningless
-      .maxConnectionSize(5) //Specify the maximum number of connections, that 
is, the maximum number of concurrent
-      .retryEnabled(false).readTimeout(30000) //execution failed, whether to 
allow retry
-      .setAuthenticationStrategy(new StaticAuthenticationStrategy()) 
//AuthenticationStrategy Linkis authentication method
-      .setAuthTokenKey("${username}").setAuthTokenValue("${password}") 
//Authentication key, generally the user name; authentication value, generally 
the password corresponding to the user name
-      .setDWSVersion("v1").build() //Linkis backend protocol version, the 
current version is v1
-
-    // 2. Get a UJESClient through DWSClientConfig
-    val client = UJESClient(clientConfig)
+  // 1. build config: linkis gateway url
+  val clientConfig = DWSClientConfigBuilder.newBuilder()
+    .addServerUrl("http://10.107.118.104:9001/";)   //set linkis-mg-gateway 
url: http://{ip}:{port}
+    .connectionTimeout(30000)   //connectionTimeOut
+    .discoveryEnabled(false) //disable discovery
+    .discoveryFrequency(1, TimeUnit.MINUTES)  // discovery frequency
+    .loadbalancerEnabled(true)  // enable loadbalance
+    .maxConnectionSize(5)   // set max Connection
+    .retryEnabled(false) // set retry
+    .readTimeout(30000)  //set read timeout
+    .setAuthenticationStrategy(new StaticAuthenticationStrategy())   //AuthenticationStrategy: Linkis authentication supports Static and Token
+    .setAuthTokenKey("hadoop")  // set submit user
+    .setAuthTokenValue("hadoop")  // set passwd or token 
(setAuthTokenValue("BML-AUTH"))
+    .setDWSVersion("v1") //linkis rest version v1
+    .build();
+
+  // 2. new Client(Linkis Client) by clientConfig
+  val client = UJESClient(clientConfig)
 
+  def main(args: Array[String]): Unit = {
+    val user = "hadoop" // execute user
+    val executeCode = "df=spark.sql(\"show tables\")\n" +
+      "show(df)"; // code support:sql/hql/py/scala
     try {
-      // 3. Start code execution
-      println("user: "+ user + ", code: [" + executeCode + "]")
-      val startupMap = new java.util.HashMap[String, Any]()
-      startupMap.put("wds.linkis.yarnqueue", "q02") //Startup parameter 
configuration
-      //Specify Label
-      val labels: util.Map[String, Any] = new util.HashMap[String, Any]
-      //Add the label that this execution depends on, such as engineLabel
-      labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "hive-1.2.1")
-      labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE")
-      labels.put(LabelKeyConstant.ENGINE_RUN_TYPE_KEY, "hql")
-      //Specify source
-      val source: util.Map[String, Any] = new util.HashMap[String, Any]
-      source.put(TaskConstant.SCRIPTPATH, "LinkisClient-test")
-      val jobExecuteResult = client.submit(JobSubmitAction.builder
-          .addExecuteCode(executeCode)
-          .setStartupParams(startupMap)
-          .setUser(user) //Job submit user
-          .addExecuteUser(user) //The actual execution user
-          .setLabels(labels)
-          .setSource(source)
-          .build) //User, requesting user; used for user-level multi-tenant 
isolation
-      println("execId: "+ jobExecuteResult.getExecID + ", taskId:" + 
jobExecuteResult.taskID)
-
-      // 4. Get the execution status of the script
+      // 3. build job and execute
+      println("user : " + user + ", code : [" + executeCode + "]")
+      val jobExecuteResult = toSubmit(user, executeCode)
+      //0.X: val jobExecuteResult = toExecute(user, executeCode) 
+      println("execId: " + jobExecuteResult.getExecID + ", taskId: " + 
jobExecuteResult.taskID)
+      // 4. get job info
       var jobInfoResult = client.getJobInfo(jobExecuteResult)
-      val sleepTimeMills: Int = 1000
+      var logFromLen = 0
+      val logSize = 100
+      val sleepTimeMills : Int = 1000
       while (!jobInfoResult.isCompleted) {
-        // 5. Get the execution progress of the script
+        // 5. get progress and log
         val progress = client.progress(jobExecuteResult)
-        val progressInfo = if (progress.getProgressInfo != null) 
progress.getProgressInfo.toList else List.empty
-        println("progress: "+ progress.getProgress + ", progressInfo:" + 
progressInfo)
+        println("progress: " + progress.getProgress)
+        val logObj = client.log(jobExecuteResult, logFromLen, logSize)
+        logFromLen = logObj.fromLine
+        val logArray = logObj.getLog
+        // 0: info 1: warn 2: error 3: all
+        if (logArray != null && logArray.size >= 4 && 
StringUtils.isNotEmpty(logArray.get(3))) {
+          println(s"log: ${logArray.get(3)}")
+        }
         Utils.sleepQuietly(sleepTimeMills)
         jobInfoResult = client.getJobInfo(jobExecuteResult)
       }
       if (!jobInfoResult.isSucceed) {
-        println("Failed to execute job: "+ jobInfoResult.getMessage)
+        println("Failed to execute job: " + jobInfoResult.getMessage)
         throw new Exception(jobInfoResult.getMessage)
       }
 
-      // 6. Get the job information of the script
+      // 6. Get the result set list (if the user submits multiple SQLs at a 
time,
+      // multiple result sets will be generated)
       val jobInfo = client.getJobInfo(jobExecuteResult)
-      // 7. Get the list of result sets (if the user submits multiple SQL at a 
time, multiple result sets will be generated)
       val resultSetList = jobInfoResult.getResultSetList(client)
       println("All result set list:")
       resultSetList.foreach(println)
       val oneResultSet = jobInfo.getResultSetList(client).head
-      // 8. Get a specific result set through a result set information
+      // 7. get resultContent
       val fileContents = 
client.resultSet(ResultSetAction.builder().setPath(oneResultSet).setUser(jobExecuteResult.getUser).build()).getFileContent
       println("First fileContents: ")
       println(fileContents)
@@ -398,6 +273,65 @@ object ScalaClientTest {
     IOUtils.closeQuietly(client)
   }
 
+  /**
+   * Linkis 1.0 recommends using the submit method.
+   */
+  def toSubmit(user: String, code: String): JobExecuteResult = {
+    // 1. build  params
+    // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+    val labels: util.Map[String, Any] = new util.HashMap[String, Any]
+    labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required 
engineType Label
+    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-IDE");// 
required execute user and creator
+    labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // required codeType
+
+    val startupMap = new java.util.HashMap[String, Any]()
+    // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+    startupMap.put("spark.executor.instances", 2);
+    // setting linkis params
+    startupMap.put("wds.linkis.rm.yarnqueue", "dws");
+    // 2. build jobSubmitAction
+    val jobSubmitAction = JobSubmitAction.builder
+      .addExecuteCode(code)
+      .setStartupParams(startupMap)
+      .setUser(user) //submit user
+      .addExecuteUser(user) //execute user
+      .setLabels(labels)
+      .build
+    // 3. to execute
+    client.submit(jobSubmitAction)
+  }
+
+
+  /**
+   * Compatible with 0.X execution mode
+   */
+  def toExecute(user: String, code: String): JobExecuteResult = {
+    // 1. build  params
+    // set label map: EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+    val labels = new util.HashMap[String, Any]
+    // labels.put(LabelKeyConstant.TENANT_KEY, "fate");
+
+    val startupMap = new java.util.HashMap[String, Any]()
+    // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+    startupMap.put("spark.executor.instances", 2)
+    // setting linkis params
+    startupMap.put("wds.linkis.rm.yarnqueue", "dws")
+    // 2. build JobExecuteAction (0.X old way of using)
+    val  executionAction = JobExecuteAction.builder()
+      .setCreator("IDE")  //creator, the system name of the client requesting 
linkis, used for system-level isolation
+      .addExecuteCode(code)   //Execution Code
+      .setEngineTypeStr("spark") // engineConn type
+      .setRunTypeStr("py") // code type
+      .setUser(user)   //execute user
+      .setStartupParams(startupMap) // start up params
+      .build();
+    executionAction.addRequestPayload(TaskConstant.LABELS, labels);
+    // 3. to execute
+    client.execute(executionAction)
+  }
+
+
 }
 
-```
+
+```
\ No newline at end of file

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
