This is an automated email from the ASF dual-hosted git repository.
peacewong pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/incubator-linkis-website.git
The following commit(s) were added to refs/heads/dev by this push:
new a0b70d7 fix typo
new c1b142c Merge pull request #71 from legendtkl/dev
a0b70d7 is described below
commit a0b70d762e8f0d3e053364e15561715ec60bbf3b
Author: taokelu <[email protected]>
AuthorDate: Wed Dec 22 14:04:28 2021 +0800
fix typo
---
community/how-to-release.md                             | 4 ++--
community/how-to-verify.md                              | 2 +-
docs/architecture/commons/message_scheduler.md          | 2 +-
docs/architecture/commons/rpc.md                        | 2 +-
docs/architecture/difference_between_1.0_and_0.x.md     | 6 +++---
.../job_submission_preparation_and_execution_process.md | 6 +++---
docs/architecture/overview.md                           | 8 ++++----
7 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/community/how-to-release.md b/community/how-to-release.md
index 0a7a7e8..204378a 100644
--- a/community/how-to-release.md
+++ b/community/how-to-release.md
@@ -16,7 +16,7 @@ Please refer to the following link to find more ASF release guidelines:
- [Apache Release Policy](http://www.apache.org/dev/release.html)
- [Maven Release Info](http://www.apache.org/dev/publishing-maven-artifacts.html)
-Both apache's maven and SVN repositorys use GPG signatures to verify the legitimacy of material files
+Both apache's maven and SVN repositories use GPG signatures to verify the legitimacy of material files
## 1 Tool preparation
(Required when this publisher is releasing for the first time)
@@ -672,7 +672,7 @@ Edit the version number and version description, and click `Publish release`
### 7.6 Update download page
-The linkis official website download address should point to the official apache address
+The Linkis official website download address should point to the official apache address
After waiting and confirming that the new release version is synchronized to the Apache mirror, update the following page:
diff --git a/community/how-to-verify.md b/community/how-to-verify.md
index 5043fbb..cf0c287 100644
--- a/community/how-to-verify.md
+++ b/community/how-to-verify.md
@@ -80,7 +80,7 @@ gpg --import KEYS # Import KEYS to local
### 2.3 Check sha512 hash
> After calculating the sha512 hash locally, verify that it is consistent with
> the dist. If you upload a binary package, you also need to check the sha512
> hash of the binary package
-> Mac OS/Linux
+> macOS/Linux
```shell
for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i; done
diff --git a/docs/architecture/commons/message_scheduler.md b/docs/architecture/commons/message_scheduler.md
index 39ff980..b6ebca3 100644
--- a/docs/architecture/commons/message_scheduler.md
+++ b/docs/architecture/commons/message_scheduler.md
@@ -14,5 +14,5 @@ sidebar_position: 1
* ImplicitRegistry: Register the corresponding Implicit module, and store the resolved ImplicitMethod in a Map container.
* Converter: Start to scan the non-interface non-abstract subclass of RequestMethod and store it in the Map, parse the Restful and match the related RequestProtocol.
* Publisher: Realize the publishing scheduling function, find the ServiceMethod matching the RequestProtocol in the Registry, and encapsulate it as a Job for submission scheduling.
-* Scheduler: Scheduling implementation, using Linkis-Sceduler to execute the job and return the MessageJob object.
+* Scheduler: Scheduling implementation, using Linkis-Scheduler to execute the job and return the MessageJob object.
* TxManager: Complete transaction management, perform transaction management on job execution, and judge whether to commit or rollback after the job execution ends.
\ No newline at end of file
diff --git a/docs/architecture/commons/rpc.md b/docs/architecture/commons/rpc.md
index e2d8b59..c59f750 100644
--- a/docs/architecture/commons/rpc.md
+++ b/docs/architecture/commons/rpc.md
@@ -15,7 +15,7 @@ The functions of the main modules are introduced as follows:
* Eureka: service registration center, user management service, service discovery.
* Sender: Service request interface, the sender uses Sender to request service from the receiver.
* Receiver: The service request receives the corresponding interface, and the receiver responds to the service through this interface.
-* Interceptor: Sender sender will pass the user's request to the interceptor. The interceptor intercepts the request and performs additional functional processing on the request. The broadcast interceptor is used to broadcast operations on the request, the retry interceptor is used to retry the processing of failed requests, and the cache interceptor is used to read and cache simple and unchanged requests. , And the default interceptor that provides the default implementation.
+* Interceptor: Sender will pass the user's request to the interceptor. The interceptor intercepts the request and performs additional functional processing on the request. The broadcast interceptor is used to broadcast operations on the request, the retry interceptor is used to retry the processing of failed requests, and the cache interceptor is used to read and cache simple and unchanged requests. , And the default interceptor that provides the default implementation.
* Decoder, Encoder: used for request encoding and decoding.
* Feign: is a lightweight framework for http request calls, a declarative WebService client program, used for Linkis-RPC bottom communication.
* Listener: monitor module, mainly used to monitor broadcast requests.
\ No newline at end of file
diff --git a/docs/architecture/difference_between_1.0_and_0.x.md b/docs/architecture/difference_between_1.0_and_0.x.md
index f8051e1..286a991 100644
--- a/docs/architecture/difference_between_1.0_and_0.x.md
+++ b/docs/architecture/difference_between_1.0_and_0.x.md
@@ -7,7 +7,7 @@ sidebar_position: 1
## 1. Brief Description
First of all, the Entrance and EngineConnManager (formerly EngineManager) services under the Linkis1.0 architecture are completely unrelated to the engine. That is, under the Linkis1.0 architecture, each engine does not need to be implemented and started the corresponding Entrance and EngineConnManager, and Linkis1.0’s Each Entrance and EngineConnManager can be shared by all engines.
Secondly, Linkis1.0 added the Linkis-Manager service to provide external AppManager (application management), ResourceManager (resource management, the original ResourceManager service) and LabelManager (label management) capabilities.
- Then, in order to reduce the difficulty of implementing and deploying a new engine, Linkis 1.0 re-architects a module called EngineConnPlugin. Each new engine only needs to implement the EngineConnPlugin interface.Linkis EngineConnPluginServer supports dynamic loading of EngineConnPlugin (new engine) in the form of a plug-in. Once EngineConnPluginServer is successfully loaded, EngineConnManager can quickly start an instance of the engine fo [...]
+ Then, in order to reduce the difficulty of implementing and deploying a new engine, Linkis 1.0 re-architects a module called EngineConnPlugin. Each new engine only needs to implement the EngineConnPlugin interface. Linkis EngineConnPluginServer supports dynamic loading of EngineConnPlugin (new engine) in the form of a plug-in. Once EngineConnPluginServer is successfully loaded, EngineConnManager can quickly start an instance of the engine f [...]
Finally, all the microservices of Linkis are summarized and classified, which are generally divided into three major levels: public enhancement services, computing governance services and microservice governance services, from the code hierarchy, microservice naming and installation directory structure, etc. To standardize the microservice system of Linkis1.0.
## 2. Main Feature
1. **Strengthen computing governance**, Linkis 1.0 mainly strengthens the comprehensive management and control capabilities of computing governance from engine management, label management, ECM management, and resource management. It is based on the powerful management and control design concept of labeling. This makes Linkis 1.0 a solid step towards multi-IDC, multi-cluster, and multi-container.
@@ -30,9 +30,9 @@ sidebar_position: 1
AppManager will coordinate and manage all EngineConnManager and EngineConn, and the life cycle of EngineConn application, reuse, creation, switching, and destruction will be handed over to AppManager for management.
The LabelManager will provide cross-IDC and cross-cluster EngineConn and EngineConnManager routing and management capabilities based on multi-level combined tags.
## 5. Introduction To Linkis EngineConnPlugin
- EngineConnPlugin is mainly used to reduce the cost of access and deployment of new computing storage. It truly enables users to “just need to implement a class to connect to a new computing storage engine; just execute a script to quickly deploy a new engine ".
+ EngineConnPlugin is mainly used to reduce the cost of access and deployment of new computing storage. It truly enables users to "just need to implement a class to connect to a new computing storage engine; just execute a script to quickly deploy a new engine".
### 5.1 New Engine Implementation Comparison
- The following are the relevant interfaces and classes that the user Linkis0.X needs to implement to implement a new engine:
+ The following are the relevant interfaces and classes that the user Linkis 0.X needs to implement a new engine:

The following is Linkis 1.0.0, which implements a new engine, the interfaces and classes that users need to implement:

diff --git a/docs/architecture/job_submission_preparation_and_execution_process.md b/docs/architecture/job_submission_preparation_and_execution_process.md
index 6b91f0b..cd7f483 100644
--- a/docs/architecture/job_submission_preparation_and_execution_process.md
+++ b/docs/architecture/job_submission_preparation_and_execution_process.md
@@ -15,7 +15,7 @@ Among them:
- Entrance, as the entrance to the submission stage, provides task reception, scheduling and job information forwarding capabilities. It is the unified entrance for all computing tasks. It will forward computing tasks to Orchestrator for scheduling and execution.
- Orchestrator, as the entrance to the preparation phase, mainly provides job analysis, orchestration and execution capabilities.
-- Linkis Manager: The management center of computing governance capabilities. Its main responsibilities are as follow:
+- Linkis Manager: The management center of computing governance capabilities. Its main responsibilities are as follows:
1. ResourceManager:Not only has the resource management capabilities of Yarn and Linkis EngineConnManager, but also provides tag-based multi-level resource allocation and recovery capabilities, allowing ResourceManager to have full resource management capabilities across clusters and across computing resource types;
2. AppManager: Coordinate and manage all EngineConnManager and EngineConn, including the life cycle of EngineConn application, reuse, creation, switching, and destruction to AppManager for management;
@@ -78,7 +78,7 @@ The main process is as follows:
- Parser: Complete the analysis of ASTJob. Split ASTJob into an AST tree composed of ASTJob and ASTStage.
- Validator: Complete the inspection and information supplement of ASTJob and ASTStage, such as code inspection, necessary Label information supplement, etc.
- Planner: Convert an AST tree into a Logical tree. The Logical tree at this time has been composed of LogicalTask, which contains all the execution logic of the entire computing task.
-- Optimizer: Convert a Logical tree to a Physica tree and optimize the Physical tree.
+- Optimizer: Convert a Logical tree to a Physical tree and optimize the Physical tree.
In a physical tree, the majority of nodes are computing strategy logic. Only the middle ExecTask truly encapsulates the execution logic which will be further submitted to and executed at EngineConn. As shown below:
@@ -122,7 +122,7 @@ Let us go back to the Execution stage, where we focus on the execution logic of
1. As mentioned earlier, the first step in the preparation phase is to obtain a usable EngineConn from LinkisManager. After ExecTask gets this EngineConn, it will submit the user's computing task to EngineConn through an RPC request.
2. After EngineConn receives the computing task, it will asynchronously submit it to the underlying computing storage engine through the thread pool, and then immediately return an execution ID.
-3. After ExecTask gets this execution ID, it can then use the this ID to asynchronously pull the execution status of the computing task (such as: status, progress, log, result set, etc.).
+3. After ExecTask gets this execution ID, it can then use the ID to asynchronously pull the execution status of the computing task (such as: status, progress, log, result set, etc.).
4. At the same time, EngineConn will monitor the execution of the underlying computing storage engine in real time through multiple registered Listeners. If the computing storage engine does not support registering Listeners, EngineConn will start a daemon thread for the computing task and periodically pull the execution status from the computing storage engine.
5. EngineConn will pull the execution status back to the microservice where Orchestrator is located in real time through RCP request.
6. After the Receiver of the microservice receives the execution status, it will broadcast it through the ListenerBus, and the Orchestrator Execution will consume the event and dynamically update the execution status of the Physical tree.
diff --git a/docs/architecture/overview.md b/docs/architecture/overview.md
index 0379f7b..c6f4273 100644
--- a/docs/architecture/overview.md
+++ b/docs/architecture/overview.md
@@ -13,11 +13,11 @@ The specific responsibilities of each category are as follows:
1. Public enhancement services are the material library services, context services, data source services and public services that Linkis 0.X has provided.
2. The microservice governance services are Spring Cloud Gateway, Eureka and Open Feign already provided by Linkis 0.X, and Linkis 1.0 will also provide support for Nacos
-3. Computing governance services are the core focus of Linkis 1.0, from submission, preparation to execution, overall three stages to comprehensively upgrade Linkis's ability to perform control over user tasks.
+3. Computing governance services are the core focus of Linkis 1.0, from submission, preparation to execution, overall three stages to comprehensively upgrade Linkis' ability to perform control over user tasks.
The following is a directory listing of Linkis1.0 architecture documents:
1. The characteristics of Linkis1.0's architecture , please read [The difference between Linkis1.0 and Linkis0.x](difference_between_1.0_and_0.x.md).
-2. Linkis1.0 public enhancement service related documents, please read [Public Enhancement Service](public_enhancement_services/overview.md).
-3. Linkis1.0 microservice governance related documents, please read [Microservice Governance](microservice_governance_services/overview.md).
-4. Linkis1.0 computing governance service related documents, please read [Computation Governance Service](computation_governance_services/overview.md).
\ No newline at end of file
+2. Linkis 1.0 public enhancement service related documents, please read [Public Enhancement Service](public_enhancement_services/overview.md).
+3. Linkis 1.0 microservice governance related documents, please read [Microservice Governance](microservice_governance_services/overview.md).
+4. Linkis 1.0 computing governance service related documents, please read [Computation Governance Service](computation_governance_services/overview.md).
\ No newline at end of file
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]