This is an automated email from the ASF dual-hosted git repository.

peacewong pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new 41f10cf66d9 add pes doc (#729)
41f10cf66d9 is described below

commit 41f10cf66d9854d5553528baa26e89f9b3b963b0
Author: peacewong <[email protected]>
AuthorDate: Wed Jul 12 17:30:30 2023 +0800

    add pes doc (#729)
---
 .../public-enhancement-services/public-service.md  |  32 +-
 docs/engine-usage/python.md                        |  25 ++
 docs/engine-usage/spark.md                         |  16 +
 docs/user-guide/sdk-manual.md                      | 328 +++++++++++++++------
 .../public-enhancement-services/public-service.md  |  33 +--
 .../current/engine-usage/python.md                 |  25 ++
 .../current/engine-usage/spark.md                  |  15 +
 .../current/user-guide/sdk-manual.md               | 180 ++++++++++-
 .../version-1.3.2/user-guide/sdk-manual.md         |  14 +-
 .../Public_Enhancement_Service/pes_arc.png         | Bin 0 -> 78579 bytes
 .../Public_Enhancement_Service/pes_arc_demo.png    | Bin 0 -> 140623 bytes
 .../version-1.3.2/user-guide/sdk-manual.md         | 172 +++++------
 12 files changed, 615 insertions(+), 225 deletions(-)

diff --git 
a/docs/architecture/feature/public-enhancement-services/public-service.md 
b/docs/architecture/feature/public-enhancement-services/public-service.md
index 2438cb8ca76..fc635e01fa5 100644
--- a/docs/architecture/feature/public-enhancement-services/public-service.md
+++ b/docs/architecture/feature/public-enhancement-services/public-service.md
@@ -3,10 +3,9 @@ title: Public Service
 sidebar_position: 2
 ---
 ## **Background**
+Why do we need public enhancement capabilities once Linkis is used as a unified gateway or JobServer? After actually developing multiple upper-layer application tools, we found that a UDF or variable defined and debugged in the IDE tool had to be redefined after publishing to the scheduling tool, and whenever a dependent jar package or configuration file changed, both places had to be modified.
+To address these issues of common context across upper-layer application tools, and since Linkis already serves as the unified entry point for tasks, we asked whether Linkis could provide this public enhancement capability and offer common features that multiple application tools can reuse. Therefore, a layer of Public Enhancement Services (PES) is designed at the Linkis layer.
 
-PublicService is a comprehensive service composed of multiple sub-modules such 
as "configuration", "jobhistory", "udf", "variable", etc. Linkis 
-1.0 added label management based on version 0.9. Linkis doesn't need to set 
the parameters every time during the execution of different jobs.
-Many variables, functions and configurations can be reused after the user 
completes the settings once, and of course that they can also be shared with 
other users.
 
 ## **Architecture diagram**
 
@@ -14,24 +13,11 @@ Many variables, functions and configurations can be reused 
after the user comple
 
 ## **Architecture Introduction**
 
-1. linkis-configuration:Provides query and save operations for global settings 
and general settings, especially engine configuration parameters.
-
-2. linkis-jobhistory:Specially used for storage and query of historical 
execution task, users can obtain the historical tasks through the interface 
provided by "jobhistory", include logs, status and execution content.
-At the same time, the historical task also support the paging query 
operation.The administrator can view all the historical tasks, but the ordinary 
users can only view their own tasks.
-
-3. Linkis-udf:Provides the user function management capability in Linkis, 
which can be divided into shared functions, personal functions, system 
functions and the functions used by engine.
-Once the user selects one, it will be automatically loaded for users to 
directly quote in the code and reuse between different scripts when the engine 
starting. 
-
-4. Linkis-variable:Provides the global variable management capability in 
Linkis, store and query the user-defined global variables。
-
-5. linkis-instance-label:Provides two modules named "label server" and "label 
client" for labeling Engine and EM. It also provides node-based label addition, 
deletion, modification and query capabilities.
-The main functions are as follows:
-
--   Provides resource management capabilities for some specific labels to 
assist RM in more refined resource management.
-
--   Provides labeling capabilities for users. The user label will be 
automatically added for judgment when applying for the engine. 
-
--   Provides the label analysis module, which can parse the users' request 
into a bunch of labels。
-
--   With the ability of node label management, it is mainly used to provide 
the label  CRUD capability of the node and the label resource management to 
manage the resources of certain labels, marking the maximum resource, minimum 
resource and used resource of a Label.
+The following capabilities are now provided:
 
+- Unified data source capability: data sources are defined and managed uniformly at the Linkis layer, so application tools only need to refer to a data source by name and no longer maintain its connection information. A data source has the same meaning across different tools, and the metadata of the corresponding data source can be queried.
+- Public UDF capability: the definition specification and semantics of UDFs and small functions are unified, so that a function defined in one place can be used by multiple tools.
+- Unified context capability: information can be passed between tasks, including variables, result sets, and resource files, providing cross-task context transfer.
+- Unified material capability: materials are managed in one place, can be shared among multiple tools, support storage of various file types, and support version control.
+- Unified configuration and variable capability: templated configuration of different engine parameter templates is supported, along with custom variables and built-in system and time-format variables.
+- Public error code capability: the common errors of widely used compute and storage engines and knowledge bases are classified and coded, and a convenient SDK is provided for calling them.
\ No newline at end of file
diff --git a/docs/engine-usage/python.md b/docs/engine-usage/python.md
index 066c0305219..905ad56ad67 100644
--- a/docs/engine-usage/python.md
+++ b/docs/engine-usage/python.md
@@ -154,4 +154,29 @@ INNER JOIN linkis_cg_manager_label label ON 
config.engine_conn_type = 'python' a
 insert into `linkis_ps_configuration_config_value` (`config_key_id`, 
`config_value`, `config_label_id`)
 (select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, 
`relation`.`engine_type_label_id` AS `config_label_id` FROM 
linkis_ps_configuration_key_engine_relation relation
 INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = 
label.id AND label.label_value = @PYTHON_ALL);
+```
+
+
+### 4.4 Other Python code examples
+
+```python
+import pandas as pd
+ 
+data = {'name': ['aaaaaa', 'bbbbbb', 'cccccc'], 'pay': [4000, 5000, 6000]}
+frame = pd.DataFrame(data)
+show.show(frame)
+
+
+print('new result')
+
+from matplotlib import pyplot as plt
+
+x=[4,8,10]
+y=[12,16,6]
+x2=[6,9,11]
+y2=[6,15,7]
+plt.bar(x,y,color='r',align='center')
+plt.bar(x2,y2,color='g',align='center')
+plt.show()
+
 ```
\ No newline at end of file
diff --git a/docs/engine-usage/spark.md b/docs/engine-usage/spark.md
index afea35669fb..d047ec2aa7d 100644
--- a/docs/engine-usage/spark.md
+++ b/docs/engine-usage/spark.md
@@ -72,6 +72,22 @@ labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, 
"hadoop-IDE");// required exe
 labels.put(LabelKeyConstant.CODE_TYPE_KEY, "sql"); // required codeType 
py,sql,scala
 ```
 
+You can also submit Scala and Python (PySpark) code:
+````java
+
+// scala
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "scala");
+// code:
+val df=spark.sql("show tables")
+show(df)
+// pyspark
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py");
+// code:
+df=spark.sql("show tables")
+show(df)
+````
+
+
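+For context, here is a minimal sketch of how such a `CODE_TYPE_KEY` label pairs with the code string when a task is actually submitted, using the `UJESClient`/`JobSubmitAction` API shown in the SDK manual; the engine version, user, and code values are placeholders:
+
+```scala
+// Sketch only: builds a label map and a JobSubmitAction as in the SDK manual.
+import java.util
+import org.apache.linkis.manager.label.constant.LabelKeyConstant
+import org.apache.linkis.ujes.client.request.JobSubmitAction
+
+val labels: util.Map[String, AnyRef] = new util.HashMap[String, AnyRef]
+labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3")      // required engineType label
+labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, "hadoop-IDE") // required user-creator label
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "scala")              // "scala", "py" or "sql"
+
+val jobSubmitAction = JobSubmitAction.builder
+  .addExecuteCode("val df = spark.sql(\"show tables\")\nshow(df)")
+  .setLabels(labels)
+  .setUser("hadoop")        // submit user
+  .addExecuteUser("hadoop") // execute user
+  .build
+// client.submit(jobSubmitAction)  // `client` is a UJESClient built as shown in the SDK manual
+```
+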
 ### 3.3 Submitting tasks by submitting the jar package
 
 Tasks can also be submitted through `OnceEngineConn` (submitting a jar package via spark-submit); for reference, see `org.apache.linkis.computation.client.SparkOnceJobTest`.
diff --git a/docs/user-guide/sdk-manual.md b/docs/user-guide/sdk-manual.md
index 765a27fe944..d7313438418 100644
--- a/docs/user-guide/sdk-manual.md
+++ b/docs/user-guide/sdk-manual.md
@@ -254,46 +254,47 @@ import 
org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder
 import org.apache.linkis.manager.label.constant.LabelKeyConstant
 import org.apache.linkis.ujes.client.request._
 import org.apache.linkis.ujes.client.response._
-
 import java.util
 import java.util.concurrent.TimeUnit
 
+import org.apache.linkis.ujes.client.UJESClient
+
 object LinkisClientTest {
-  // 1. build config: linkis gateway url
-  val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://127.0.0.1:9001/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
-    .connectionTimeout(30000) //connectionTimeOut
-    .discoveryEnabled(false) //disable discovery
-    .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
-    .loadbalancerEnabled(true) // enable loadbalance
-    .maxConnectionSize(5) // set max Connection
-    .retryEnabled(false) // set retry
-    .readTimeout(30000) //set read timeout
-    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) 
//AuthenticationStrategy Linkis authen suppory static and Token
-    .setAuthTokenKey("hadoop") // set submit user
-    .setAuthTokenValue("hadoop") // set passwd or token 
(setAuthTokenValue("BML-AUTH"))
-    .setDWSVersion("v1") //link rest version v1
-    .build();
-
-  // 2. new Client(Linkis Client) by clientConfig
-  val client = UJESClient(clientConfig)
-
-  def main(args: Array[String]): Unit = {
-    val user = "hadoop" // execute user user needs to be consistent with the 
value of AuthTokenKey
-    val executeCode = "df=spark.sql(\"show tables\")\n" +
-      "show(df)"; // code support:sql/hql/py/scala
-    try {
-      // 3. build job and execute
-      println("user : " + user + ", code : [" + executeCode + "]")
-      // It is recommended to use submit, which supports the transfer of task 
labels
-      val jobExecuteResult = toSubmit(user, executeCode)
-      println("execId: " + jobExecuteResult.getExecID + ", taskId: " + 
jobExecuteResult.taskID)
-      // 4. get job info
-      var jobInfoResult = client.getJobInfo(jobExecuteResult)
-      where logFromLen = 0
-      val logSize = 100
-      val sleepTimeMills: Int = 1000
-      while (!jobInfoResult.isCompleted) {
+        // 1. build config: linkis gateway url
+        val clientConfig = DWSClientConfigBuilder.newBuilder()
+        .addServerUrl("http://127.0.0.1:9001/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
+        .connectionTimeout(30000) //connectionTimeOut
+        .discoveryEnabled(false) //disable discovery
+        .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
+        .loadbalancerEnabled(true) // enable loadbalance
+        .maxConnectionSize(5) // set max Connection
+        .retryEnabled(false) // set retry
+        .readTimeout(30000) //set read timeout
+        .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy: Linkis authentication supports static and token
+        .setAuthTokenKey("hadoop") // set submit user
+        .setAuthTokenValue("hadoop") // set passwd or token 
(setAuthTokenValue("BML-AUTH"))
+        .setDWSVersion("v1") //link rest version v1
+        .build();
+
+        // 2. new Client(Linkis Client) by clientConfig
+        val client = UJESClient(clientConfig)
+
+        def main(args: Array[String]): Unit = {
+        val user = "hadoop" // execute user; needs to be consistent with the value of AuthTokenKey
+        val executeCode = "df=spark.sql(\"show tables\")\n" +
+        "show(df)"; // code support:sql/hql/py/scala
+        try {
+        // 3. build job and execute
+        println("user : " + user + ", code : [" + executeCode + "]")
+        // It is recommended to use submit, which supports the transfer of 
task labels
+        val jobExecuteResult = toSubmit(user, executeCode)
+        println("execId: " + jobExecuteResult.getExecID + ", taskId: " + 
jobExecuteResult.taskID)
+        // 4. get job info
+        var jobInfoResult = client.getJobInfo(jobExecuteResult)
+        var logFromLen = 0
+        val logSize = 100
+        val sleepTimeMills: Int = 1000
+        while (!jobInfoResult.isCompleted) {
         // 5. get progress and log
         val progress = client.progress(jobExecuteResult)
         println("progress: " + progress.getProgress)
@@ -302,59 +303,216 @@ object LinkisClientTest {
         val logArray = logObj.getLog
         // 0: info 1: warn 2: error 3: all
         if (logArray != null && logArray.size >= 4 && 
StringUtils.isNotEmpty(logArray.get(3))) {
-          println(s"log: ${logArray.get(3)}")
+        println(s"log: ${logArray.get(3)}")
         }
         Utils.sleepQuietly(sleepTimeMills)
         jobInfoResult = client.getJobInfo(jobExecuteResult)
-      }
-      if (!jobInfoResult.isSucceed) {
+        }
+        if (!jobInfoResult.isSucceed) {
         println("Failed to execute job: " + jobInfoResult.getMessage)
         throw new Exception(jobInfoResult.getMessage)
-      }
-
-      // 6. Get the result set list (if the user submits multiple SQLs at a 
time,
-      // multiple result sets will be generated)
-      val jobInfo = client.getJobInfo(jobExecuteResult)
-      val resultSetList = jobInfoResult.getResultSetList(client)
-      println("All result set list:")
-      resultSetList.foreach(println)
-      val oneResultSet = jobInfo.getResultSetList(client).head
-      // 7. get resultContent
-      val resultSetResult: ResultSetResult = 
client.resultSet(ResultSetAction.builder.setPath(oneResultSet).setUser(jobExecuteResult.getUser).build)
-      println("metadata: " + resultSetResult.getMetadata) // column name type
-      println("res: " + resultSetResult.getFileContent) //row data
-    } catch {
-      case e: Exception => {
+        }
+
+        // 6. Get the result set list (if the user submits multiple SQLs at a 
time,
+        // multiple result sets will be generated)
+        val jobInfo = client.getJobInfo(jobExecuteResult)
+        val resultSetList = jobInfoResult.getResultSetList(client)
+        println("All result set list:")
+        resultSetList.foreach(println)
+        val oneResultSet = jobInfo.getResultSetList(client).head
+        // 7. get resultContent
+        val resultSetResult: ResultSetResult = 
client.resultSet(ResultSetAction.builder.setPath(oneResultSet).setUser(jobExecuteResult.getUser).build)
+        println("metadata: " + resultSetResult.getMetadata) // column name type
+        println("res: " + resultSetResult.getFileContent) //row data
+        } catch {
+        case e: Exception => {
         e.printStackTrace() //please use log
-      }
-    }
-    IOUtils.closeQuietly(client)
-  }
-
-
-  def toSubmit(user: String, code: String): JobExecuteResult = {
-    // 1. build  params
-    // set label map 
:EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-    val labels: util.Map[String, Any] = new util.HashMap[String, Any]
-    labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required 
engineType Label
-    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // 
The requested user and application name, both parameters must be missing, where 
APPName cannot contain "-", it is recommended to replace it with "_"
-    labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // specify the script 
type
-
-    val startupMap = new java.util.HashMap[String, Any]()
-    // Support setting engine native parameters,For example: parameters of 
engines such as spark/hive
-    startupMap.put("spark.executor.instances", 2);
-    // setting linkis params
-    startupMap.put("wds.linkis.rm.yarnqueue", "default");
-    // 2. build jobSubmitAction
-    val jobSubmitAction = JobSubmitAction.builder
-      .addExecuteCode(code)
-      .setStartupParams(startupMap)
-      .setUser(user) //submit user
-      .addExecuteUser(user) //execute user
-      .setLabels(labels) .
-      .build
-    // 3. to execute
-    client.submit(jobSubmitAction)
-  }
-}
+        }
+        }
+        IOUtils.closeQuietly(client)
+        }
+
+
+        def toSubmit(user: String, code: String): JobExecuteResult = {
+        // 1. build  params
+        // set label map 
:EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+        val labels: util.Map[String, AnyRef] = new util.HashMap[String, AnyRef]
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // 
required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // the requesting user and application name; neither parameter can be omitted, and APPName cannot contain "-" (it is recommended to replace it with "_")
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // specify the 
script type
+
+        val startupMap = new java.util.HashMap[String, AnyRef]()
+        // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+        val instances: Integer = 2
+        startupMap.put("spark.executor.instances", instances)
+        // setting linkis params
+        startupMap.put("wds.linkis.rm.yarnqueue", "default");
+        // 2. build jobSubmitAction
+        val jobSubmitAction = JobSubmitAction.builder
+        .addExecuteCode(code)
+        .setStartupParams(startupMap)
+        .setUser(user) //submit user
+        .addExecuteUser(user) //execute user
+        .setLabels(labels)
+        .build
+        // 3. to execute
+        client.submit(jobSubmitAction)
+        }
+        }
 ```
+
+
+## 4. Once SDK Usage
+The Linkis-cli client supports submitting tasks of the Once type: the engine process is started, the task runs only once, and the engine is automatically destroyed when the task ends.
+
+OnceEngineConn calls LinkisManager's createEngineConn interface through LinkisManagerClient, sends the code to the engine created for the user, and the engine then starts executing it.
+
+Write a test class.
+Prerequisites for using the client:
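+
+First, as listed in the Chinese version of this guide, the client is provided by the `linkis-computation-client` module; assuming the same Maven coordinates apply here, the dependency looks like:
+
+```xml
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-computation-client</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```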
+
+```java
+1. Configure the correct and available gateway address:
+LinkisJobClient.config().setDefaultServerUrl("http://ip:9001";);
+2. Write the engine parameters, configuration items, and execution code in the 
code:
+  String code = "env {\n"
+                           + " spark.app.name = \"SeaTunnel\"\n"
+                           + "spark.executor.instances = 2\n"
+                           + "spark.executor.cores = 1\n"
+                           + " spark.executor.memory = \"1g\"\n"
+                           + "}\n"
+                           + "\n"
+                           + "source {\n"
+                           + "Fake {\n"
+                           + " result_table_name = \"my_dataset\"\n"
+                           + " }\n"
+                           + "\n"
+                           + "}\n"
+                           + "\n"
+                           + "transform {\n"
+                           + "}\n"
+                           + "\n"
+                           + "sink {\n"
+                           + " Console {}\n"
+                           + "}";
+3. Create an Once mode object: SubmittableSimpleOnceJob:
+SubmittableSimpleOnceJob onceJob = LinkisJobClient.once()
+                 .simple()
+                 .builder()
+                 .setCreateService("seatunnel-Test")
+                 .setMaxSubmitTime(300000)                                                // timeout
+                 .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY(), "seatunnel-2.1.2")      // engine label
+                 .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY(), "hadoop-seatunnel")    // user label
+                 .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY(), "once")            // engine mode label
+                 .addStartupParam(Configuration.IS_TEST_MODE().key(), true)               // whether to enable test mode
+                 .addExecuteUser("hadoop")             // execute user
+                 .addJobContent("runType", "spark")    // execution engine
+                 .addJobContent("code", code)          // execute code
+                 .addJobContent("master", "local[4]")
+                 .addJobContent("deploy-mode", "client")
+                 .addSource("jobName", "OnceJobTest")  // job name
+                 .build();
+
+```
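+
+Once the `SubmittableSimpleOnceJob` above is built, submitting it and waiting for the result follows the same pattern as the test class below; a minimal sketch (method names taken from that test class):
+
+```scala
+// Sketch: submit the once job built above and wait until the once engine finishes.
+onceJob.submit()            // create the engine and send the job content
+println(onceJob.getId)      // job id assigned by Linkis
+onceJob.waitForCompleted()  // block until the task ends and the engine is destroyed
+println(onceJob.getStatus)  // final status
+```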
+
+Test class sample code:
+
+```scala
+package org.apache.linkis.ujes.client
+
+import org.apache.linkis.common.utils.Utils
+import java.util.concurrent.TimeUnit
+import java.util
+import org.apache.linkis.computation.client.LinkisJobBuilder
+import org.apache.linkis.computation.client.once.simple.{SimpleOnceJob, SimpleOnceJobBuilder, SubmittableSimpleOnceJob}
+import org.apache.linkis.computation.client.operator.impl.{EngineConnLogOperator, EngineConnMetricsOperator, EngineConnProgressOperator}
+import org.apache.linkis.computation.client.utils.LabelKeyUtils
+import scala.collection.JavaConverters._
+
+object SqoopOnceJobTest extends App {
+  LinkisJobBuilder.setDefaultServerUrl("http://gateway address:9001")
+  val logPath = "C:\\Users\\resources\\log4j.properties"
+  System.setProperty("log4j.configurationFile", logPath)
+  val startUpMap = new util.HashMap[String, AnyRef]
+  startUpMap.put("wds.linkis.engineconn.java.driver.memory", "1g")
+  val builder = SimpleOnceJob.builder().setCreateService("Linkis-Client")
+    .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY, "sqoop-1.4.6")
+    .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY, "hadoop-Client")
+    .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY, "once")
+    .setStartupParams(startUpMap)
+    .setMaxSubmitTime(30000)
+    .addExecuteUser("hadoop")
+  val onceJob = importJob(builder)
+  val time = System.currentTimeMillis()
+  onceJob.submit()
+  println(onceJob.getId)
+  val logOperator = onceJob.getOperator(EngineConnLogOperator.OPERATOR_NAME).asInstanceOf[EngineConnLogOperator]
+  println(onceJob.getECMServiceInstance)
+  logOperator.setFromLine(0)
+  logOperator.setECMServiceInstance(onceJob.getECMServiceInstance)
+  logOperator.setEngineConnType("sqoop")
+  logOperator.setIgnoreKeywords("[main],[SpringContextShutdownHook]")
+  var progressOperator = onceJob.getOperator(EngineConnProgressOperator.OPERATOR_NAME).asInstanceOf[EngineConnProgressOperator]
+  var metricOperator = onceJob.getOperator(EngineConnMetricsOperator.OPERATOR_NAME).asInstanceOf[EngineConnMetricsOperator]
+  var end = false
+  var rowBefore = 1
+  while (!end || rowBefore > 0) {
+    if (onceJob.isCompleted) {
+      end = true
+      metricOperator = null
+    }
+    logOperator.setPageSize(100)
+    Utils.tryQuietly {
+      val logs = logOperator.apply()
+      logs.logs.asScala.foreach(log => {
+        println(log)
+      })
+      rowBefore = logs.logs.size
+    }
+    Thread.sleep(3000)
+    Option(metricOperator).foreach(operator => {
+      if (!onceJob.isCompleted) {
+        println(s"Metric monitoring: ${operator.apply()}")
+        println(s"Progress: ${progressOperator.apply()}")
+      }
+    })
+  }
+  onceJob.isCompleted
+  onceJob.waitForCompleted()
+  println(onceJob.getStatus)
+  println(TimeUnit.SECONDS.convert(System.currentTimeMillis() - time, TimeUnit.MILLISECONDS) + "s")
+  System.exit(0)
+
+  def importJob(jobBuilder: SimpleOnceJobBuilder): SubmittableSimpleOnceJob = {
+    jobBuilder
+      .addJobContent("sqoop.env.mapreduce.job.queuename", "queue_1003_01")
+      .addJobContent("sqoop.mode", "import")
+      .addJobContent("sqoop.args.connect", "jdbc:mysql://database address/library name")
+      .addJobContent("sqoop.args.username", "database account")
+      .addJobContent("sqoop.args.password", "database password")
+      .addJobContent("sqoop.args.query", "select * from linkis_ps_udf_manager where 1=1 and $CONDITIONS")
+      // The table must exist; $CONDITIONS is indispensable
+      .addJobContent("sqoop.args.hcatalog.database", "janicegong_ind")
+      .addJobContent("sqoop.args.hcatalog.table", "linkis_ps_udf_manager_sync2")
+      .addJobContent("sqoop.args.hcatalog.partition.keys", "ds")
+      .addJobContent("sqoop.args.hcatalog.partition.values", "20220708")
+      .addJobContent("sqoop.args.num.mappers", "1")
+      .build()
+  }
+  def exportJob(jobBuilder: SimpleOnceJobBuilder): SubmittableSimpleOnceJob = {
+    jobBuilder
+      .addJobContent("sqoop.env.mapreduce.job.queuename", "queue_1003_01")
+      .addJobContent("sqoop.mode", "import")
+      .addJobContent("sqoop.args.connect", "jdbc:mysql://database address/library name")
+      .addJobContent("sqoop.args.username", "database account")
+      .addJobContent("sqoop.args.password", "database password")
+      .addJobContent("sqoop.args.query", "select * from linkis_ps_udf_manager where 1=1 and $CONDITIONS")
+      // The table must exist; $CONDITIONS is indispensable
+      .addJobContent("sqoop.args.hcatalog.database", "janicegong_ind")
+      .addJobContent("sqoop.args.hcatalog.table", "linkis_ps_udf_manager_sync2")
+      .addJobContent("sqoop.args.hcatalog.partition.keys", "ds")
+      .addJobContent("sqoop.args.hcatalog.partition.values", "20220708")
+      .addJobContent("sqoop.args.num.mappers", "1")
+      .build
+  }
+}
+```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/feature/public-enhancement-services/public-service.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/feature/public-enhancement-services/public-service.md
index 1eb80e687ad..2ca0ea2e92e 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/feature/public-enhancement-services/public-service.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/architecture/feature/public-enhancement-services/public-service.md
@@ -3,32 +3,25 @@ title: 公共服务架构
 sidebar_position: 2
 ---
 ## **背景**
+为什么在我们将Linkis作为统一网关或JobServer后,还要为其增加公共增强的能力呢?这个是在我们实际去开发了多个上层应用工具后,发现如在IDE工具里面定义了一个UDF、变量调试通过后,在发布到调度工具的时候,这些UDF和变量又需要重新定义一遍。当依赖的一些jar包、配置文件等发生变化时,也需要修改两个地方。
+针对这些类似跨上层应用工具的公共上下文的问题,在我们实现任务统一入口为Linkis后,我们就在想是不是可以由Linkis去提供这个公共增强的能力,提供一些公共可以被多个应用工具去复用的能力。所以在Linkis层设计了一层公共增强服务PES
 
-PublicService公共服务是由configuration、jobhistory、udf、variable等多个子模块组成的综合性服务。Linkis
-1.0在0.9版本的基础上还新增了标签管理。Linkis在用户不同作业执行过程中,不是每次执行都需要去设置一遍参数,很多可以复用的变量,函数,配置都是用户在完成一次设置后,能够被复用起来,当然还可以共享给别的用户使用。
 
 ## **架构图**
 
-![](/Images/Architecture/linkis-publicService-01.png)
+![](/Images/Architecture/Public_Enhancement_Service/pes_arc.png)
 
 ## **架构说明**
 
-1. linkis-configuration:对外提供了全局设置和通用设置的查询和保存操作,特别是引擎配置参数
+现在已经提供了以下能力:
+- 
提供统一的数据源能力:数据源在Linkis层进行统一定义和管理,应用工具只需要通过数据源名字来进行使用,不再需要去维护对应数据源的连接信息。而且在不同的工具间数据源的含义都是一样的。并提供了相应的数据源的元数据的查询能力。
+- 提供公共的UDF能力:统一UDF、小函数的定义规范和语义,做到一处定义多个工具都可使用。
+- 提供统一上下文的能力:支持任务间传递信息,包括变量、结果集、资源文件的多任务间传递,提供任务间传递上下文的能力。
+- 提供统一物料的能力:提供统一的物料,在多个工具间支持共享访问这些物料,并且物料支持存储多种的文件类型,并支持版本控制。
+- 提供统一配置和变量的能力:提供了统一的配置能力支持模板化的配置不同的引擎参数模版,支持自定义变量、内置常用的系统变量和时间格式变量等。
+- 提供公共错误码的能力:提供统一的错误码能力,对常用计算存储引擎和知识库的常见错误进行分类编码,并提供了方便的SDK进行调用。
 
-2. linkis-jobhistory:专门用于历史执行任务的存储和查询,用户可以通过jobhistory提供的接口获取历史任务
-    的执行情况。包括日志、状态、执行内容等。同时历史任务还支持了分页查询操作,对于管理员可以查看所有的历史任务,普通用户只能查看自己的历史任务。
-3. 
Linkis-udf:提供linkis的用户函数管理功能,具体可分为共享函数、个人函数、系统函数,以及函数使用的引擎,用户勾选后会在引擎启动的时候被自动加载。供用户在代码中直接引用和不同的脚本间进行函数复用。
-
-4. Linkis-variable:提供linkis的全局变量管理能力,存储用户定义的全局变量,查询用户定义的全局变量。
-
-5. linkis-instance-label:提供了label server 和label
-    client两个模块,为Engine和EM打标签,提供基于节点的标签增删改查能力。主要功能如下:
-
--   为一些特定的标签,提供资源管理能力,协助RM在资源管理层面更加精细化
-
--   为用户提供标签能力。为一些用户打上标签,这样在引擎申请时,会自动加上这些标签判断
-
--   提供标签解析模块,能将用户的请求,解析成一堆标签。
-
--   
具备节点标签管理的能力,主要用于提供节点的标签CRUD能力,还有标签资源管理用于管理某些标签的资源,标记一个Label的最大资源、最小资源和已使用资源。
+通过Linkis的公共增强服务,可以打破上层应用工具间的孤岛,做到变量、函数、文件、结果集等上下文的共享,就像下图所展示的一样,并且大大减少应用工具间的重复开发工作。
+![](/Images/Architecture/Public_Enhancement_Service/pes_arc_demo.png)
 
+[详细介绍可以参考](https://mp.weixin.qq.com/s/UfUB8AGZtusbFmmtiZfK1A)
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/python.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/python.md
index 310e44d0d08..86e4a9847e2 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/python.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/python.md
@@ -155,3 +155,28 @@ insert into `linkis_ps_configuration_config_value` 
(`config_key_id`, `config_val
 (select `relation`.`config_key_id` AS `config_key_id`, '' AS `config_value`, 
`relation`.`engine_type_label_id` AS `config_label_id` FROM 
linkis_ps_configuration_key_engine_relation relation
 INNER JOIN linkis_cg_manager_label label ON relation.engine_type_label_id = 
label.id AND label.label_value = @PYTHON_ALL);
 ```
+
+
+### 4.4 其他python样例代码
+
+```python
+import pandas as pd
+ 
+data = {'name': ['aaaaaa', 'bbbbbb', 'cccccc'], 'pay': [4000, 5000, 6000]}
+frame = pd.DataFrame(data)
+show.show(frame)
+
+
+print('new result')
+
+from matplotlib import pyplot as plt
+
+x=[4,8,10]
+y=[12,16,6]
+x2=[6,9,11]
+y2=[6,15,7]
+plt.bar(x,y,color='r',align='center')
+plt.bar(x2,y2,color='g',align='center')
+plt.show()
+
+```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md
index c3024597156..5b39faf4b40 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/spark.md
@@ -71,6 +71,21 @@ labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, 
"hadoop-IDE");// required exe
 labels.put(LabelKeyConstant.CODE_TYPE_KEY, "sql"); // required codeType 
py,sql,scala
 ```
 
+Spark还支持提交Scala代码和Pyspark代码:
+````java
+
+//scala 
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "scala");
+code:
+val df=spark.sql("show tables")
+show(df)        
+//pyspark
+labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py");
+code:
+df=spark.sql("show tables")
+show(df)
+````
+
 ### 3.3 通过提交jar包执行任务
 
 通过 `OnceEngineConn` 提交任务(通过 spark-submit 提交 jar 包执行任务),提交方式参考 
`org.apache.linkis.computation.client.SparkOnceJobTest`
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
index f3136770eea..808fbea0aab 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/user-guide/sdk-manual.md
@@ -260,14 +260,15 @@ import 
org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder
 import org.apache.linkis.manager.label.constant.LabelKeyConstant
 import org.apache.linkis.ujes.client.request._
 import org.apache.linkis.ujes.client.response._
-
 import java.util
 import java.util.concurrent.TimeUnit
 
+import org.apache.linkis.ujes.client.UJESClient
+
 object LinkisClientTest {
   // 1. build config: linkis gateway url
   val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://127.0.0.1:9001/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
+    .addServerUrl("http://127.0.0.1:8088/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
     .connectionTimeout(30000) //connectionTimeOut
     .discoveryEnabled(false) //disable discovery
     .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
@@ -341,16 +342,17 @@ object LinkisClientTest {
   def toSubmit(user: String, code: String): JobExecuteResult = {
     // 1. build  params
     // set label map 
:EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-    val labels: util.Map[String, Any] = new util.HashMap[String, Any]
+    val labels: util.Map[String, AnyRef] = new util.HashMap[String, AnyRef]
     labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required 
engineType Label
     labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // 
请求的用户和应用名,两个参数都不能少,其中APPName不能带有"-"建议替换为"_"
     labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // 指定脚本类型
 
-    val startupMap = new java.util.HashMap[String, Any]()
+    val startupMap = new java.util.HashMap[String, AnyRef]()
     // Support setting engine native parameters,For example: parameters of 
engines such as spark/hive
-    startupMap.put("spark.executor.instances", 2);
+    val instances: Integer = 2
+    startupMap.put("spark.executor.instances", instances)
     // setting linkis params
-    startupMap.put("wds.linkis.rm.yarnqueue", "default");
+    startupMap.put("wds.linkis.rm.yarnqueue", "default")
     // 2. build jobSubmitAction
     val jobSubmitAction = JobSubmitAction.builder
       .addExecuteCode(code)
@@ -364,3 +366,169 @@ object LinkisClientTest {
   }
 }
 ```
+
+## 4. Once SDK 使用
+Linkis-cli客户端支持提交Once类型的任务,引擎进程启动后只运行一次任务,任务结束后自动销毁
+
+OnceEngineConn 通过 LinkisManagerClient 调用 LinkisManager 的 createEngineConn 
接口,并将代码发送到用户创建的引擎,然后引擎开始执行
+
+
+## Once模式使用:
+
+1.首先创建一个新的 maven 项目或者在项目中引入以下依赖项
+
+```plain
+<dependency>
+    <groupId>org.apache.linkis</groupId>
+    <artifactId>linkis-computation-client</artifactId>
+    <version>${linkis.version}</version>
+</dependency>
+```
+2.编写一个测试类
+使用clien条件
+
+```plain
+1.配置正确可用的gateway地址:
+LinkisJobClient.config().setDefaultServerUrl("http://ip:9001";);
+2.将引擎参数,配置项,执行code写在code里面:
+ String code = "env {
+                          + "  spark.app.name = \"SeaTunnel\"\n"
+                          + "  spark.executor.instances = 2\n"
+                          + "  spark.executor.cores = 1\n"
+                          + "  spark.executor.memory = \"1g\"\n"
+                          + "}\n"
+                          + "\n"
+                          + "source {\n"
+                          + "  Fake {\n"
+                          + "    result_table_name = \"my_dataset\"\n"
+                          + "  }\n"
+                          + "\n"
+                          + "}\n"
+                          + "\n"
+                          + "transform {\n"
+                          + "}\n"
+                          + "\n"
+                          + "sink {\n"
+                          + "  Console {}\n"
+                          + "}";
+3.创建Once模式对象:SubmittableSimpleOnceJob :
+SubmittableSimpleOnceJob = LinkisJobClient.once()
+                .simple()
+                .builder()
+                .setCreateService("seatunnel-Test")
+                .setMaxSubmitTime(300000)   超时时间
+                .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY(), 
"seatunnel-2.1.2")    引擎标签
+                .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY(), 
"hadoop-seatunnel")   用户标签
+                .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY(), "once")  
            引擎模式标签
+                .addStartupParam(Configuration.IS_TEST_MODE().key(), true)     
           是否开启测试模式
+                .addExecuteUser("hadoop")      执行用户
+                .addJobContent("runType", "spark")  执行引擎
+                .addJobContent("code", code)    执行代码  
+                .addJobContent("master", "local[4]")
+                .addJobContent("deploy-mode", "client")
+                .addSource("jobName", "OnceJobTest")  名称
+                .build();
+```
+## 测试类示例代码:
+
+```plain
+package org.apache.linkis.ujes.client
+
+import org.apache.linkis.common.utils.Utils
+import java.util.concurrent.TimeUnit
+import java.util
+import org.apache.linkis.computation.client.LinkisJobBuilder
+import org.apache.linkis.computation.client.once.simple.{SimpleOnceJob, 
SimpleOnceJobBuilder, SubmittableSimpleOnceJob}
+import 
org.apache.linkis.computation.client.operator.impl.{EngineConnLogOperator, 
EngineConnMetricsOperator, EngineConnProgressOperator}
+import org.apache.linkis.computation.client.utils.LabelKeyUtils
+import scala.collection.JavaConverters._
+@Deprecated
+object SqoopOnceJobTest extends App {
+  LinkisJobBuilder.setDefaultServerUrl("http://gateway地址:9001";)
+  val logPath = "C:\\Users\\resources\\log4j.properties"
+  System.setProperty("log4j.configurationFile", logPath)
+  val startUpMap = new util.HashMap[String, AnyRef]
+  startUpMap.put("wds.linkis.engineconn.java.driver.memory", "1g")
+  val builder = SimpleOnceJob.builder().setCreateService("Linkis-Client")
+    .addLabel(LabelKeyUtils.ENGINE_TYPE_LABEL_KEY, "sqoop-1.4.6")
+    .addLabel(LabelKeyUtils.USER_CREATOR_LABEL_KEY, "hadoop-Client")
+    .addLabel(LabelKeyUtils.ENGINE_CONN_MODE_LABEL_KEY, "once")
+    .setStartupParams(startUpMap)
+    .setMaxSubmitTime(30000)
+    .addExecuteUser("hadoop")
+  val onceJob = importJob(builder)
+  val time = System.currentTimeMillis()
+  onceJob.submit()
+  println(onceJob.getId)
+  val logOperator = 
onceJob.getOperator(EngineConnLogOperator.OPERATOR_NAME).asInstanceOf[EngineConnLogOperator]
+  println(onceJob.getECMServiceInstance)
+  logOperator.setFromLine(0)
+  logOperator.setECMServiceInstance(onceJob.getECMServiceInstance)
+  logOperator.setEngineConnType("sqoop")
+  logOperator.setIgnoreKeywords("[main],[SpringContextShutdownHook]")
+  var progressOperator = 
onceJob.getOperator(EngineConnProgressOperator.OPERATOR_NAME).asInstanceOf[EngineConnProgressOperator]
+  var metricOperator = 
onceJob.getOperator(EngineConnMetricsOperator.OPERATOR_NAME).asInstanceOf[EngineConnMetricsOperator]
+  var end = false
+  var rowBefore = 1
+  while (!end || rowBefore > 0) {
+    if (onceJob.isCompleted) {
+      end = true
+      metricOperator = null
+    }
+    logOperator.setPageSize(100)
+    Utils.tryQuietly {
+      val logs = logOperator.apply()
+      logs.logs.asScala.foreach(log => {
+        println(log)
+      })
+      rowBefore = logs.logs.size
+    }
+    Thread.sleep(3000)
+    Option(metricOperator).foreach(operator => {
+      if (!onceJob.isCompleted) {
+        println(s"Metric监控: ${operator.apply()}")
+        println(s"进度: ${progressOperator.apply()}")
+      }
+    })
+  }
+  onceJob.isCompleted
+  onceJob.waitForCompleted()
+  println(onceJob.getStatus)
+  println(TimeUnit.SECONDS.convert(System.currentTimeMillis() - time, 
TimeUnit.MILLISECONDS) + "s")
+  System.exit(0)
+
+  def importJob(jobBuilder: SimpleOnceJobBuilder): SubmittableSimpleOnceJob = {
+    jobBuilder
+      .addJobContent("sqoop.env.mapreduce.job.queuename", "queue_1003_01")
+      .addJobContent("sqoop.mode", "import")
+      .addJobContent("sqoop.args.connect", "jdbc:mysql://数据库地址/库名")
+      .addJobContent("sqoop.args.username", "数据库账户")
+      .addJobContent("sqoop.args.password", "数据库密码")
+      .addJobContent("sqoop.args.query", "select * from linkis_ps_udf_manager 
where 1=1 and  $CONDITIONS") 
+       #表一定要存在 $CONDITIONS不可缺少
+      .addJobContent("sqoop.args.hcatalog.database", "janicegong_ind")
+      .addJobContent("sqoop.args.hcatalog.table", 
"linkis_ps_udf_manager_sync2")
+      .addJobContent("sqoop.args.hcatalog.partition.keys", "ds")
+      .addJobContent("sqoop.args.hcatalog.partition.values", "20220708")
+      .addJobContent("sqoop.args.num.mappers", "1")
+      .build()
+  }
+  def exportJob(jobBuilder: SimpleOnceJobBuilder): SubmittableSimpleOnceJob = {
+      jobBuilder
+      .addJobContent("sqoop.env.mapreduce.job.queuename", "queue_1003_01")
+      .addJobContent("sqoop.mode", "import")
+      .addJobContent("sqoop.args.connect", "jdbc:mysql://数据库地址/库名")
+      .addJobContent("sqoop.args.username", "数据库账户")
+      .addJobContent("sqoop.args.password", "数据库密码")
+      .addJobContent("sqoop.args.query", "select * from linkis_ps_udf_manager 
where 1=1 and  $CONDITIONS") 
+       #表一定要存在 $CONDITIONS不可缺少
+      .addJobContent("sqoop.args.hcatalog.database", "janicegong_ind")
+      .addJobContent("sqoop.args.hcatalog.table", 
"linkis_ps_udf_manager_sync2")
+      .addJobContent("sqoop.args.hcatalog.partition.keys", "ds")
+      .addJobContent("sqoop.args.hcatalog.partition.values", "20220708")
+      .addJobContent("sqoop.args.num.mappers", "1")
+      .build
+  }
+}
+```
+3.测试程序完成,引擎会自动销毁,不用手动清除
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/sdk-manual.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/sdk-manual.md
index f3136770eea..0fc0e6b8f1c 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/sdk-manual.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/version-1.3.2/user-guide/sdk-manual.md
@@ -260,14 +260,15 @@ import 
org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder
 import org.apache.linkis.manager.label.constant.LabelKeyConstant
 import org.apache.linkis.ujes.client.request._
 import org.apache.linkis.ujes.client.response._
-
 import java.util
 import java.util.concurrent.TimeUnit
 
+import org.apache.linkis.ujes.client.UJESClient
+
 object LinkisClientTest {
   // 1. build config: linkis gateway url
   val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://127.0.0.1:9001/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
+    .addServerUrl("http://127.0.0.1:8088/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
     .connectionTimeout(30000) //connectionTimeOut
     .discoveryEnabled(false) //disable discovery
     .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
@@ -341,16 +342,17 @@ object LinkisClientTest {
   def toSubmit(user: String, code: String): JobExecuteResult = {
     // 1. build  params
     // set label map 
:EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-    val labels: util.Map[String, Any] = new util.HashMap[String, Any]
+    val labels: util.Map[String, AnyRef] = new util.HashMap[String, AnyRef]
     labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required 
engineType Label
     labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // 
请求的用户和应用名,两个参数都不能少,其中APPName不能带有"-"建议替换为"_"
     labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // 指定脚本类型
 
-    val startupMap = new java.util.HashMap[String, Any]()
+    val startupMap = new java.util.HashMap[String, AnyRef]()
     // Support setting engine native parameters,For example: parameters of 
engines such as spark/hive
-    startupMap.put("spark.executor.instances", 2);
+    val instances: Integer = 2
+    startupMap.put("spark.executor.instances", instances)
     // setting linkis params
-    startupMap.put("wds.linkis.rm.yarnqueue", "default");
+    startupMap.put("wds.linkis.rm.yarnqueue", "default")
     // 2. build jobSubmitAction
     val jobSubmitAction = JobSubmitAction.builder
       .addExecuteCode(code)
diff --git a/static/Images/Architecture/Public_Enhancement_Service/pes_arc.png 
b/static/Images/Architecture/Public_Enhancement_Service/pes_arc.png
new file mode 100644
index 00000000000..f01fa25d43d
Binary files /dev/null and 
b/static/Images/Architecture/Public_Enhancement_Service/pes_arc.png differ
diff --git 
a/static/Images/Architecture/Public_Enhancement_Service/pes_arc_demo.png 
b/static/Images/Architecture/Public_Enhancement_Service/pes_arc_demo.png
new file mode 100644
index 00000000000..aa4cab17c48
Binary files /dev/null and 
b/static/Images/Architecture/Public_Enhancement_Service/pes_arc_demo.png differ
diff --git a/versioned_docs/version-1.3.2/user-guide/sdk-manual.md 
b/versioned_docs/version-1.3.2/user-guide/sdk-manual.md
index 765a27fe944..cdc34501055 100644
--- a/versioned_docs/version-1.3.2/user-guide/sdk-manual.md
+++ b/versioned_docs/version-1.3.2/user-guide/sdk-manual.md
@@ -254,46 +254,47 @@ import 
org.apache.linkis.httpclient.dws.config.DWSClientConfigBuilder
 import org.apache.linkis.manager.label.constant.LabelKeyConstant
 import org.apache.linkis.ujes.client.request._
 import org.apache.linkis.ujes.client.response._
-
 import java.util
 import java.util.concurrent.TimeUnit
 
+import org.apache.linkis.ujes.client.UJESClient
+
 object LinkisClientTest {
-  // 1. build config: linkis gateway url
-  val clientConfig = DWSClientConfigBuilder.newBuilder()
-    .addServerUrl("http://127.0.0.1:9001/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
-    .connectionTimeout(30000) //connectionTimeOut
-    .discoveryEnabled(false) //disable discovery
-    .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
-    .loadbalancerEnabled(true) // enable loadbalance
-    .maxConnectionSize(5) // set max Connection
-    .retryEnabled(false) // set retry
-    .readTimeout(30000) //set read timeout
-    .setAuthenticationStrategy(new StaticAuthenticationStrategy()) 
//AuthenticationStrategy Linkis authen suppory static and Token
-    .setAuthTokenKey("hadoop") // set submit user
-    .setAuthTokenValue("hadoop") // set passwd or token 
(setAuthTokenValue("BML-AUTH"))
-    .setDWSVersion("v1") //link rest version v1
-    .build();
-
-  // 2. new Client(Linkis Client) by clientConfig
-  val client = UJESClient(clientConfig)
-
-  def main(args: Array[String]): Unit = {
-    val user = "hadoop" // execute user user needs to be consistent with the 
value of AuthTokenKey
-    val executeCode = "df=spark.sql(\"show tables\")\n" +
-      "show(df)"; // code support:sql/hql/py/scala
-    try {
-      // 3. build job and execute
-      println("user : " + user + ", code : [" + executeCode + "]")
-      // It is recommended to use submit, which supports the transfer of task 
labels
-      val jobExecuteResult = toSubmit(user, executeCode)
-      println("execId: " + jobExecuteResult.getExecID + ", taskId: " + 
jobExecuteResult.taskID)
-      // 4. get job info
-      var jobInfoResult = client.getJobInfo(jobExecuteResult)
-      where logFromLen = 0
-      val logSize = 100
-      val sleepTimeMills: Int = 1000
-      while (!jobInfoResult.isCompleted) {
+        // 1. build config: linkis gateway url
+        val clientConfig = DWSClientConfigBuilder.newBuilder()
+        .addServerUrl("http://127.0.0.1:9001/";) //set linkis-mg-gateway url: 
http://{ip}:{port}
+        .connectionTimeout(30000) //connectionTimeOut
+        .discoveryEnabled(false) //disable discovery
+        .discoveryFrequency(1, TimeUnit.MINUTES) // discovery frequency
+        .loadbalancerEnabled(true) // enable loadbalance
+        .maxConnectionSize(5) // set max Connection
+        .retryEnabled(false) // set retry
+        .readTimeout(30000) //set read timeout
+        .setAuthenticationStrategy(new StaticAuthenticationStrategy()) //AuthenticationStrategy: Linkis authentication supports static and token
+        .setAuthTokenKey("hadoop") // set submit user
+        .setAuthTokenValue("hadoop") // set passwd or token 
(setAuthTokenValue("BML-AUTH"))
+        .setDWSVersion("v1") //link rest version v1
+        .build();
+
+        // 2. new Client(Linkis Client) by clientConfig
+        val client = UJESClient(clientConfig)
+
+        def main(args: Array[String]): Unit = {
+        val user = "hadoop" // execute user; needs to be consistent with the value of AuthTokenKey
+        val executeCode = "df=spark.sql(\"show tables\")\n" +
+        "show(df)"; // code support:sql/hql/py/scala
+        try {
+        // 3. build job and execute
+        println("user : " + user + ", code : [" + executeCode + "]")
+        // It is recommended to use submit, which supports the transfer of 
task labels
+        val jobExecuteResult = toSubmit(user, executeCode)
+        println("execId: " + jobExecuteResult.getExecID + ", taskId: " + 
jobExecuteResult.taskID)
+        // 4. get job info
+        var jobInfoResult = client.getJobInfo(jobExecuteResult)
+        var logFromLen = 0
+        val logSize = 100
+        val sleepTimeMills: Int = 1000
+        while (!jobInfoResult.isCompleted) {
         // 5. get progress and log
         val progress = client.progress(jobExecuteResult)
         println("progress: " + progress.getProgress)
@@ -302,59 +303,60 @@ object LinkisClientTest {
         val logArray = logObj.getLog
         // 0: info 1: warn 2: error 3: all
         if (logArray != null && logArray.size >= 4 && 
StringUtils.isNotEmpty(logArray.get(3))) {
-          println(s"log: ${logArray.get(3)}")
+        println(s"log: ${logArray.get(3)}")
         }
         Utils.sleepQuietly(sleepTimeMills)
         jobInfoResult = client.getJobInfo(jobExecuteResult)
-      }
-      if (!jobInfoResult.isSucceed) {
+        }
+        if (!jobInfoResult.isSucceed) {
         println("Failed to execute job: " + jobInfoResult.getMessage)
         throw new Exception(jobInfoResult.getMessage)
-      }
-
-      // 6. Get the result set list (if the user submits multiple SQLs at a 
time,
-      // multiple result sets will be generated)
-      val jobInfo = client.getJobInfo(jobExecuteResult)
-      val resultSetList = jobInfoResult.getResultSetList(client)
-      println("All result set list:")
-      resultSetList.foreach(println)
-      val oneResultSet = jobInfo.getResultSetList(client).head
-      // 7. get resultContent
-      val resultSetResult: ResultSetResult = 
client.resultSet(ResultSetAction.builder.setPath(oneResultSet).setUser(jobExecuteResult.getUser).build)
-      println("metadata: " + resultSetResult.getMetadata) // column name type
-      println("res: " + resultSetResult.getFileContent) //row data
-    } catch {
-      case e: Exception => {
+        }
+
+        // 6. Get the result set list (if the user submits multiple SQLs at a 
time,
+        // multiple result sets will be generated)
+        val jobInfo = client.getJobInfo(jobExecuteResult)
+        val resultSetList = jobInfoResult.getResultSetList(client)
+        println("All result set list:")
+        resultSetList.foreach(println)
+        val oneResultSet = jobInfo.getResultSetList(client).head
+        // 7. get resultContent
+        val resultSetResult: ResultSetResult = 
client.resultSet(ResultSetAction.builder.setPath(oneResultSet).setUser(jobExecuteResult.getUser).build)
+        println("metadata: " + resultSetResult.getMetadata) // column name type
+        println("res: " + resultSetResult.getFileContent) //row data
+        } catch {
+        case e: Exception => {
         e.printStackTrace() //please use log
-      }
-    }
-    IOUtils.closeQuietly(client)
-  }
-
-
-  def toSubmit(user: String, code: String): JobExecuteResult = {
-    // 1. build  params
-    // set label map 
:EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
-    val labels: util.Map[String, Any] = new util.HashMap[String, Any]
-    labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // required 
engineType Label
-    labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // 
The requested user and application name, both parameters must be missing, where 
APPName cannot contain "-", it is recommended to replace it with "_"
-    labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // specify the script 
type
-
-    val startupMap = new java.util.HashMap[String, Any]()
-    // Support setting engine native parameters,For example: parameters of 
engines such as spark/hive
-    startupMap.put("spark.executor.instances", 2);
-    // setting linkis params
-    startupMap.put("wds.linkis.rm.yarnqueue", "default");
-    // 2. build jobSubmitAction
-    val jobSubmitAction = JobSubmitAction.builder
-      .addExecuteCode(code)
-      .setStartupParams(startupMap)
-      .setUser(user) //submit user
-      .addExecuteUser(user) //execute user
-      .setLabels(labels) .
-      .build
-    // 3. to execute
-    client.submit(jobSubmitAction)
-  }
-}
+        }
+        }
+        IOUtils.closeQuietly(client)
+        }
+
+
+        def toSubmit(user: String, code: String): JobExecuteResult = {
+        // 1. build  params
+        // set label map 
:EngineTypeLabel/UserCreatorLabel/EngineRunTypeLabel/Tenant
+        val labels: util.Map[String, AnyRef] = new util.HashMap[String, AnyRef]
+        labels.put(LabelKeyConstant.ENGINE_TYPE_KEY, "spark-2.4.3"); // 
required engineType Label
+        labels.put(LabelKeyConstant.USER_CREATOR_TYPE_KEY, user + "-APPName"); // the requesting user and application name; neither parameter can be omitted, and APPName cannot contain "-" (it is recommended to replace it with "_")
+        labels.put(LabelKeyConstant.CODE_TYPE_KEY, "py"); // specify the 
script type
+
+        val startupMap = new java.util.HashMap[String, AnyRef]()
+        // Supports setting engine native parameters, for example: parameters of engines such as spark/hive
+        val instances: Integer = 2
+        startupMap.put("spark.executor.instances", instances)
+        // setting linkis params
+        startupMap.put("wds.linkis.rm.yarnqueue", "default");
+        // 2. build jobSubmitAction
+        val jobSubmitAction = JobSubmitAction.builder
+        .addExecuteCode(code)
+        .setStartupParams(startupMap)
+        .setUser(user) //submit user
+        .addExecuteUser(user) //execute user
+        .setLabels(labels)
+        .build
+        // 3. to execute
+        client.submit(jobSubmitAction)
+        }
+        }
 ```

