This is an automated email from the ASF dual-hosted git repository.

jin pushed a commit to branch fix-docker
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git


The following commit(s) were added to refs/heads/fix-docker by this push:
     new d1003c3f enhance typo lint / grammar
d1003c3f is described below

commit d1003c3f962e3b7a8a62523bf5bd125297db8006
Author: imbajin <j...@apache.org>
AuthorDate: Thu Dec 12 18:46:22 2024 +0800

    enhance typo lint / grammar
---
 content/en/docs/introduction/README.md           |  6 ++--
 content/en/docs/quickstart/hugegraph-ai.md       |  6 ++--
 content/en/docs/quickstart/hugegraph-client.md   |  8 ++---
 content/en/docs/quickstart/hugegraph-computer.md |  8 ++---
 content/en/docs/quickstart/hugegraph-hubble.md   | 18 +++++------
 content/en/docs/quickstart/hugegraph-loader.md   | 40 ++++++++++++------------
 content/en/docs/quickstart/hugegraph-server.md   |  4 +--
 7 files changed, 45 insertions(+), 45 deletions(-)

diff --git a/content/en/docs/introduction/README.md 
b/content/en/docs/introduction/README.md
index d9d71b23..8b4bf869 100644
--- a/content/en/docs/introduction/README.md
+++ b/content/en/docs/introduction/README.md
@@ -31,7 +31,7 @@ The functions of this system include but are not limited to:
 - Supports batch import of data from multiple data sources (including local 
files, HDFS files, MySQL databases, and other data sources), and supports 
import of multiple file formats (including TXT, CSV, JSON, and other formats)
 - With a visual operation interface, it can be used for operation, analysis, 
and display diagrams, reducing the threshold for users to use
 - Optimized graph interface: shortest path (Shortest Path), K-step connected 
subgraph (K-neighbor), K-step to reach the adjacent point (K-out), personalized 
recommendation algorithm PersonalRank, etc.
-- Implemented based on Apache TinkerPop3 framework, supports Gremlin graph 
query language
+- Implemented based on the Apache TinkerPop3 framework, supports the Gremlin graph query language
 - Support attribute graph, attributes can be added to vertices and edges, and 
support rich attribute types
 - Has independent schema metadata information, has powerful graph modeling 
capabilities, and facilitates third-party system integration
 - Support multi-vertex ID strategy: support primary key ID, support automatic 
ID generation, support user-defined string ID, support user-defined digital ID
@@ -44,8 +44,8 @@ The functions of this system include but are not limited to:
 
 - [HugeGraph-Server](/docs/quickstart/hugegraph-server): HugeGraph-Server is 
the core part of the HugeGraph project, containing Core, Backend, API and other 
submodules;
   - Core: Implements the graph engine, connects to the Backend module 
downwards, and supports the API module upwards;
-  - Backend: Implements the storage of graph data to the backend, supports 
backends including: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL and 
PostgreSQL, users can choose one according to the actual situation;
-  - API: Built-in REST Server, provides RESTful API to users, and is fully 
compatible with Gremlin queries. (Supports distributed storage and computation 
pushdown)
+  - Backend: Implements the storage of graph data to the backend, supports 
backends including Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL and 
PostgreSQL, users can choose one according to the actual situation;
+  - API: Built-in REST Server provides RESTful API to users and is fully 
compatible with Gremlin queries. (Supports distributed storage and computation 
pushdown)
 - [HugeGraph-Toolchain](https://github.com/apache/hugegraph-toolchain): 
(Toolchain)
   - [HugeGraph-Client](/docs/quickstart/hugegraph-client): HugeGraph-Client 
provides a RESTful API client for connecting to HugeGraph-Server, currently 
only the Java version is implemented, users of other languages can implement it 
themselves;
   - [HugeGraph-Loader](/docs/quickstart/hugegraph-loader): HugeGraph-Loader is 
a data import tool based on HugeGraph-Client, which transforms ordinary text 
data into vertices and edges of the graph and inserts them into the graph 
database;
diff --git a/content/en/docs/quickstart/hugegraph-ai.md 
b/content/en/docs/quickstart/hugegraph-ai.md
index 03bec9b8..f6fbcc97 100644
--- a/content/en/docs/quickstart/hugegraph-ai.md
+++ b/content/en/docs/quickstart/hugegraph-ai.md
@@ -50,7 +50,7 @@ with large models, integration with graph machine learning 
components, etc., to
 
 7. After running the web demo, the config file `.env` will be automatically 
generated at the path `hugegraph-llm/.env`.    Additionally, a prompt-related 
configuration file `config_prompt.yaml` will also be generated at the path 
`hugegraph-llm/src/hugegraph_llm/resources/demo/config_prompt.yaml`.
 
-    You can modify the content on the web page, and it will be automatically 
saved to the configuration file after the corresponding feature is triggered.  
You can also modify the file directly without restarting the web application;  
simply refresh the page to load your latest changes.
+    You can modify the content on the web page, and it will be automatically 
saved to the configuration file after the corresponding feature is triggered.  
You can also modify the file directly without restarting the web application; 
refresh the page to load your latest changes.
 
     (Optional)To regenerate the config file, you can use `config.generate` 
with `-u` or `--update`.
     ```bash
@@ -77,13 +77,13 @@ with large models, integration with graph machine learning 
components, etc., to
 - Docs:
   - text: Build rag index from plain text
   - file: Upload file(s) which should be <u>TXT</u> or <u>.docx</u> (Multiple 
files can be selected together)
-- [Schema](https://hugegraph.apache.org/docs/clients/restful-api/schema/): 
(Accept **2 types**)
+- [Schema](https://hugegraph.apache.org/docs/clients/restful-api/schema/): (Accepts **2 types**)
   - User-defined Schema (JSON format, follow the 
[template](https://github.com/apache/incubator-hugegraph-ai/blob/aff3bbe25fa91c3414947a196131be812c20ef11/hugegraph-llm/src/hugegraph_llm/config/config_data.py#L125)
 
   to modify it)
   - Specify the name of the HugeGraph graph instance, it will automatically 
get the schema from it (like 
   **"hugegraph"**)
 - Graph extract head: The user-defined prompt of graph extracting
-- If already exist the graph data, you should click "**Rebuild vid Index**" to 
update the index
+- If the graph data already exists, you should click "**Rebuild vid Index**" to update the index
 
 ![gradio-config](/docs/images/gradio-kg.png)
 
diff --git a/content/en/docs/quickstart/hugegraph-client.md 
b/content/en/docs/quickstart/hugegraph-client.md
index f81000a3..5506e742 100644
--- a/content/en/docs/quickstart/hugegraph-client.md
+++ b/content/en/docs/quickstart/hugegraph-client.md
@@ -6,7 +6,7 @@ weight: 4
 
 ### 1 Overview Of Hugegraph
 
-[HugeGraph-Client](https://github.com/apache/hugegraph-toolchain) sends HTTP 
request to HugeGraph-Server to obtain and parse the execution result of Server. 
+[HugeGraph-Client](https://github.com/apache/hugegraph-toolchain) sends HTTP requests to HugeGraph-Server to get and parse the execution results of the Server. 
 We support HugeGraph-Client for 
Java/Go/[Python](https://github.com/apache/incubator-hugegraph-ai/tree/main/hugegraph-python-client)
 language.
 You can use [Client-API](/cn/docs/clients/hugegraph-client) to write code to 
operate HugeGraph, such as adding, deleting, modifying, and querying schema and 
graph data, or executing gremlin statements.
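
For a concrete feel of what the client does over the wire, here is a minimal curl sketch of the kind of HTTP request it sends; the endpoint path and the default port 8080 are assumptions based on the server's default REST API layout:

```bash
# List all property keys through the REST API that HugeGraph-Client wraps
# (assumes a local HugeGraph-Server on the default port 8080 and a graph named "hugegraph")
curl -s "http://localhost:8080/apis/graphs/hugegraph/schema/propertykeys"
```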
 
@@ -14,7 +14,7 @@ You can use [Client-API](/cn/docs/clients/hugegraph-client) 
to write code to ope
 
 ### 2 What You Need
 
-- Java 11 (also support Java 8)
+- Java 11 (also supports Java 8)
 - Maven 3.5+
 
 ### 3 How To Use
@@ -22,7 +22,7 @@ You can use [Client-API](/cn/docs/clients/hugegraph-client) 
to write code to ope
 The basic steps to use HugeGraph-Client are as follows:
 
 - Build a new Maven project by IDEA or Eclipse
-- Add HugeGraph-Client dependency in pom file;
+- Add the HugeGraph-Client dependency in the pom file;
 - Create an object to invoke the interface of HugeGraph-Client
 
 See the complete example in the following section for the detail.
@@ -34,7 +34,7 @@ See the complete example in the following section for the 
detail.
 Using IDEA or Eclipse to create the project:
 
 - [Build by 
Eclipse](http://www.vogella.com/tutorials/EclipseMaven/article.html)
-- [Build by Intellij 
Idea](https://vaadin.com/docs/-/part/framework/getting-started/getting-started-idea.html)
+- [Build by IntelliJ 
IDEA](https://vaadin.com/docs/-/part/framework/getting-started/getting-started-idea.html)
 
 #### 4.2 Add Hugegraph-Client Dependency In POM
 
diff --git a/content/en/docs/quickstart/hugegraph-computer.md 
b/content/en/docs/quickstart/hugegraph-computer.md
index 1be7fc33..f2a77d4e 100644
--- a/content/en/docs/quickstart/hugegraph-computer.md
+++ b/content/en/docs/quickstart/hugegraph-computer.md
@@ -6,12 +6,12 @@ weight: 6
 
 ## 1 HugeGraph-Computer Overview
 
-The 
[`HugeGraph-Computer`](https://github.com/apache/incubator-hugegraph-computer) 
is a distributed graph processing system for HugeGraph (OLAP). It is an 
implementation of [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf). 
It runs on Kubernetes framework.
+The [`HugeGraph-Computer`](https://github.com/apache/incubator-hugegraph-computer) is a distributed graph processing system for HugeGraph (OLAP). It is an implementation of [Pregel](https://kowshik.github.io/JPregel/pregel_paper.pdf). It runs on the Kubernetes framework.
 
 ### Features
 
 - Support distributed MPP graph computing, and integrates with HugeGraph as 
graph input/output storage.
-- Based on BSP(Bulk Synchronous Parallel) model, an algorithm performs 
computing through multiple parallel iterations, every iteration is a superstep.
+- Based on the BSP (Bulk Synchronous Parallel) model, an algorithm performs computing through multiple parallel iterations; every iteration is a superstep.
 - Auto memory management. The framework will never be OOM(Out of Memory) since 
it will split some data to disk if it doesn't have enough memory to hold all 
the data.
 - The part of edges or the messages of super node can be in memory, so you 
will never lose it.
 - You can load the data from HDFS or HugeGraph, or any other system.
@@ -82,7 +82,7 @@ bin/start-computer.sh -d local -r worker
 
 3.1.5.1 Enable `OLAP` index query for server
 
-If OLAP index is not enabled, it needs to enable, more reference: 
[modify-graphs-read-mode](/docs/clients/restful-api/graphs/#634-modify-graphs-read-mode-this-operation-requires-administrator-privileges)
+If the OLAP index is not enabled, it needs to be enabled first. For more details, see: [modify-graphs-read-mode](/docs/clients/restful-api/graphs/#634-modify-graphs-read-mode-this-operation-requires-administrator-privileges)
 
 ```http
 PUT http://localhost:8080/graphs/hugegraph/graph_read_mode
@@ -98,7 +98,7 @@ curl 
"http://localhost:8080/graphs/hugegraph/graph/vertices?page&limit=3"; | gunz
 
 ### 3.2 Run PageRank algorithm in Kubernetes
 
-> To run algorithm with HugeGraph-Computer you need to deploy HugeGraph-Server 
first
+> To run an algorithm with HugeGraph-Computer, you need to deploy HugeGraph-Server first
 
 #### 3.2.1 Install HugeGraph-Computer CRD
 
diff --git a/content/en/docs/quickstart/hugegraph-hubble.md 
b/content/en/docs/quickstart/hugegraph-hubble.md
index 499b8152..e491eaa5 100644
--- a/content/en/docs/quickstart/hugegraph-hubble.md
+++ b/content/en/docs/quickstart/hugegraph-hubble.md
@@ -28,7 +28,7 @@ The metadata modeling module realizes the construction and 
management of graph m
 
 ##### Graph Analysis
 
-By inputting the graph traversal language Gremlin, high-performance general 
analysis of graph data can be realized, and functions such as customized 
multidimensional path query of vertices can be provided, and three kinds of 
graph result display methods are provided, including: graph form, table form, 
Json form, and multidimensional display. The data form meets the needs of 
various scenarios used by users. It provides functions such as running records 
and collection of common statements, [...]
+By inputting the graph traversal language Gremlin, high-performance general 
analysis of graph data can be realized, and functions such as customized 
multidimensional path query of vertices can be provided, and three kinds of 
graph result display methods are provided, including: graph form, table form, 
Json form, and multidimensional display. The data form meets the needs of 
various scenarios used by users. It provides functions such as running records 
and collection of common statements, [...]
 
 ##### Task Management
 
@@ -90,7 +90,7 @@ services:
 >
 > 1. The docker image of hugegraph-hubble is a convenience release to start 
 > hugegraph-hubble quickly, but not **official distribution** artifacts. You 
 > can find more details from [ASF Release Distribution 
 > Policy](https://infra.apache.org/release-distribution.html#dockerhub).
 > 
-> 2. Recommand to use `release tag`(like `1.5.0`) for the stable version. Use 
`latest` tag to experience the newest functions in development.
+> 2. We recommend using a `release tag` (like `1.5.0`) for the stable version. Use the `latest` tag to experience the newest functions in development.
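
For example, a sketch of pinning a release tag instead of the moving `latest` tag (the image name `hugegraph/hubble` is an assumption; check Docker Hub for the exact repository):

```bash
# Pull a fixed release tag for a stable deployment rather than `latest`
docker pull hugegraph/hubble:1.5.0
```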
 
 #### 2.2 Download the Toolchain binary package
 
@@ -148,7 +148,7 @@ Run `hubble`
 bin/start-hubble.sh -d
 ```
 
-### 3  Platform Workflow
+### 3  Platform Workflows
 
 The module usage process of the platform is as follows:
 
@@ -176,7 +176,7 @@ Create graph by filling in the content as follows:
 > **Special Note**: If you are starting `hubble` with Docker, and `hubble` and 
 > the server are on the same host. When configuring the hostname for the graph 
 > on the Hubble web page, please do not directly set it to 
 > `localhost/127.0.0.1`. If `hubble` and `server` is in the same docker 
 > network, we **recommend** using the `container_name` (in our example, it is 
 > `graph`) as the hostname, and `8080` as the port. Or you can use the **host 
 > IP** as the hostname, and the port is configured by the h [...]
 
 ##### 4.1.2    Graph Access
-Realize the information access of the graph space. After entering, you can 
perform operations such as multidimensional query analysis, metadata 
management, data import, and algorithm analysis of the graph.
+Provides access to the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
 
 <center>
   <img src="/docs/images/images-hubble/312图访问.png" alt="image">
@@ -401,7 +401,7 @@ By switching the entrance on the left, flexibly switch the 
operation space of mu
 
 
 ##### 4.4.3 Graph Analysis and Processing
-HugeGraph supports Gremlin, a graph traversal query language of Apache 
TinkerPop3. Gremlin is a general graph database query language. By entering 
Gremlin statements and clicking execute, you can perform query and analysis 
operations on graph data, and create and delete vertices/edges. , vertex/edge 
attribute modification, etc.
+HugeGraph supports Gremlin, a graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, create and delete vertices/edges, modify vertex/edge attributes, etc.
 
 After Gremlin query, below is the graph result display area, which provides 3 
kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].
 
@@ -426,11 +426,11 @@ Support zoom, center, full screen, export and other 
operations.
 
 
 ##### 4.4.4 Data Details
-Click the vertex/edge entity to view the data details of the vertex/edge, 
including: vertex/edge type, vertex ID, attribute and corresponding value, 
expand the information display dimension of the graph, and improve the 
usability.
+Click the vertex/edge entity to view the data details of the vertex/edge, including the vertex/edge type, vertex ID, and attributes with their corresponding values. This expands the information display dimension of the graph and improves usability.
 
 
 ##### 4.4.5 Multidimensional Path Query of Graph Results
-In addition to the global query, in-depth customized query and hidden 
operations can be performed for the vertices in the query result to realize 
customized mining of graph results.
+In addition to the global query, in-depth customized queries and hide operations can be performed on the vertices in the query result to realize customized mining of graph results.
 
 Right-click a vertex, and the menu entry of the vertex appears, which can be 
displayed, inquired, hidden, etc.
 - Expand: Click to display the vertices associated with the selected point.
@@ -493,7 +493,7 @@ Left navigation:
 - algorithm: OLAP algorithm task
 - remove_schema: remove metadata
 - rebuild_index: rebuild the index
-2. The list displays the asynchronous task information of the current graph, 
including: task ID, task name, task type, creation time, time-consuming, 
status, operation, and realizes the management of asynchronous tasks.
+2. The list displays the asynchronous task information of the current graph, including task ID, task name, task type, creation time, time consumed, status, and operation, and realizes the management of asynchronous tasks.
 3. Support filtering by task type and status
 4. Support searching for task ID and task name
 5. Asynchronous tasks can be deleted or deleted in batches
@@ -525,7 +525,7 @@ Click to view the entry to jump to the task management 
list, as follows:
 
 
 4. View the results
-- The results are displayed in the form of json
+- The results are displayed in the form of JSON
 
 
 ##### 4.5.4 OLAP algorithm tasks
diff --git a/content/en/docs/quickstart/hugegraph-loader.md 
b/content/en/docs/quickstart/hugegraph-loader.md
index 0d747bee..a39e3b5e 100644
--- a/content/en/docs/quickstart/hugegraph-loader.md
+++ b/content/en/docs/quickstart/hugegraph-loader.md
@@ -10,7 +10,7 @@ HugeGraph-Loader is the data import component of HugeGraph, 
which can convert da
 
 Currently supported data sources include:
 - Local disk file or directory, supports TEXT, CSV and JSON format files, 
supports compressed files
-- HDFS file or directory, supports compressed files
+- HDFS file or directory, supports compressed files
 - Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL 
Server
 
 Local disk files and HDFS files support resumable uploads.
@@ -159,7 +159,7 @@ The data sources currently supported by HugeGraph-Loader 
include:
 
 The user can specify a local disk file as the data source. If the data is 
scattered in multiple files, a certain directory is also supported as the data 
source, but multiple directories are not supported as the data source for the 
time being.
 
-For example: my data is scattered in multiple files, part-0, part-1 ... 
part-n. To perform the import, it must be ensured that they are placed in one 
directory. Then in the loader's mapping file, specify `path` as the directory.
+For example, my data is scattered in multiple files, part-0, part-1 ... 
part-n. To perform the import, it must be ensured that they are placed in one 
directory. Then in the loader's mapping file, specify `path` as the directory.
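
A minimal sketch of gathering such parts into one directory before pointing the mapping file's `path` at it (file and directory names are illustrative):

```bash
# Collect the scattered part files into a single directory; the loader mapping
# file's `path` field then points at ./input-data (names are illustrative)
mkdir -p ./input-data
mv part-* ./input-data/
```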
 
 Supported file formats include:
 
@@ -199,11 +199,11 @@ Currently supported compressed file types include: GZIP, 
BZ2, XZ, LZMA, SNAPPY_R
 
 ###### 3.2.1.3 Mainstream relational database
 
-The loader also supports some relational databases as data sources, and 
currently supports MySQL, PostgreSQL, Oracle and SQL Server.
+The loader also supports some relational databases as data sources, and 
currently supports MySQL, PostgreSQL, Oracle, and SQL Server.
 
 However, the requirements for the table structure are relatively strict at 
present. If **association query** needs to be done during the import process, 
such a table structure is not allowed. The associated query means: after 
reading a row of the table, it is found that the value of a certain column 
cannot be used directly (such as a foreign key), and you need to do another 
query to determine the true value of the column.
 
-For example: Suppose there are three tables, person, software and created
+For example, suppose there are three tables: person, software, and created
 
 ```
 // person schema
@@ -274,9 +274,9 @@ The mapping file of the input source is used to describe 
how to establish the ma
 
 Specifically, each mapping block contains **an input source** and multiple 
**vertex mapping** and **edge mapping** blocks, and the input source block 
corresponds to the `local disk file or directory`, ` HDFS file or directory` 
and `relational database` are responsible for describing the basic information 
of the data source, such as where the data is, what format, what is the 
delimiter, etc. The vertex map/edge map is bound to the input source, which 
columns of the input source can be sel [...]
 
-In the simplest terms, each mapping block describes: where is the file to be 
imported, which type of vertices/edges each line of the file is to be used as, 
which columns of the file need to be imported, and the corresponding 
vertices/edges of these columns. what properties etc.
+In the simplest terms, each mapping block describes: where the file to be imported is, which type of vertices/edges each line of the file should be used as, which columns of the file need to be imported, and which properties of the corresponding vertices/edges these columns map to, etc.
 
-> Note: The format of the mapping file before version 0.11.0 and the format 
after 0.11.0 has changed greatly. For the convenience of expression, the 
mapping file (format) before 0.11.0 is called version 1.0, and the version 
after 0.11.0 is version 2.0 . And unless otherwise specified, the "map file" 
refers to version 2.0.
+> Note: The format of the mapping file before version 0.11.0 and the format 
after 0.11.0 has changed greatly. For the convenience of expression, the 
mapping file (format) before 0.11.0 is called version 1.0, and the version 
after 0.11.0 is version 2.0. And unless otherwise specified, the "map file" 
refers to version 2.0.
 
 
 
@@ -310,7 +310,7 @@ In the simplest terms, each mapping block describes: where 
is the file to be imp
 Two versions of the mapping file are given directly here (the above graph 
model and data file are described)
 
 <details>
-<summary>Click to expand/collapse mapping file for version 2.0</summary>
+<summary>Click to expand/collapse the mapping file for version 2.0</summary>
 
 ```json
 {
@@ -518,7 +518,7 @@ Two versions of the mapping file are given directly here 
(the above graph model
 <br/>
 
 <details>
-<summary>Click to expand/collapse mapping file for version 1.0</summary>
+<summary>Click to expand/collapse the mapping file for version 1.0</summary>
 
 ```json
 {
@@ -578,7 +578,7 @@ Two versions of the mapping file are given directly here 
(the above graph model
 </details>
 <br/>
 
-The 1.0 version of the mapping file is centered on the vertex and edge, and 
sets the input source; while the 2.0 version is centered on the input source, 
and sets the vertex and edge mapping. Some input sources (such as a file) can 
generate both vertices and edges. If you write in the 1.0 format, you need to 
write an input block in each of the vertex and edge mapping blocks. The two 
input blocks are exactly the same ; and the 2.0 version only needs to write 
input once. Therefore, compare [...]
+The 1.0 version of the mapping file is centered on the vertex and edge, and 
sets the input source; while the 2.0 version is centered on the input source, 
and sets the vertex and edge mapping. Some input sources (such as a file) can 
generate both vertices and edges. If you write in the 1.0 format, you need to 
write an input block in each of the vertex and edge mapping blocks. The two 
input blocks are exactly the same; and the 2.0 version only needs to write 
input once. Therefore, compared [...]
 
 In the bin directory of hugegraph-loader-{version}, there is a script tool 
`mapping-convert.sh` that can directly convert the mapping file of version 1.0 
to version 2.0. The usage is as follows:
 
@@ -597,7 +597,7 @@ Input sources are currently divided into four categories: 
FILE, HDFS, JDBC and K
 - id: The id of the input source. This field is used to support some internal 
functions. It is not required (it will be automatically generated if it is not 
filled in). It is strongly recommended to write it, which is very helpful for 
debugging;
 - skip: whether to skip the input source, because the JSON file cannot add 
comments, if you do not want to import an input source during a certain import, 
but do not want to delete the configuration of the input source, you can set it 
to true to skip it, the default is false, not required;
 - input: input source map block, composite structure
-    - type: input source type, file or FILE must be filled;
+    - type: the input source type, file or FILE must be filled;
     - path: the path of the local file or directory, the absolute path or the 
relative path relative to the mapping file, it is recommended to use the 
absolute path, required;
     - file_filter: filter files with compound conditions from `path`, compound 
structure, currently only supports configuration extensions, represented by 
child node `extensions`, the default is "*", which means to keep all files;
     - format: the format of the local file, the optional values ​​are CSV, 
TEXT and JSON, which must be uppercase and required;               
@@ -689,7 +689,7 @@ schema: required
 - delimiter: delimiter of the file line, default is comma "," as delimiter, 
JSON files do not need to specify, optional;
 - charset: encoding charset of the file, default is UTF-8, optional;
 - date_format: customized date format, default value is yyyy-MM-dd HH:mm:ss, 
optional; if the date is presented in the form of timestamp, this item must be 
written as timestamp (fixed);
-- extra_date_formats: a customized list of other date formats, empty by 
default, optional; each item in the list is an alternate date format to the 
date_format specified date format;
+- extra_date_formats: a customized list of other date formats, empty by default, optional; each item in the list is an alternate date format to the date_format specified date format;
 - time_zone: set which time zone the date data is in, default is GMT+8, 
optional;
 - skipped_line: the line you want to skip, composite structure, currently can 
only configure the regular expression of the line to be skipped, described by 
the child node regex, the default is not to skip any line, optional;
 - early_stop: the record pulled from Kafka broker at a certain time is empty, 
stop the task, default is false, only for debugging, optional;
@@ -819,7 +819,7 @@ Sibling `struct-example/load-progress 2019-10-10 12:30:30`.
 
 > Note: The generation of progress files is independent of whether 
 > --incremental-mode is turned on or not, and a progress file is generated at 
 > the end of each import.
 
-If the data file formats are all legal and the import task is stopped by the 
user (CTRL + C or kill, kill -9 is not supported), that is to say, if there is 
no error record, the next import only needs to be set
+If the data file formats are all legal and the import task is stopped by the user (CTRL + C or kill, kill -9 is not supported), that is to say, if there is no error record, the next import only needs to be set to 
 continue from the breakpoint.
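
As a hedged sketch, a re-run that picks up from the recorded progress; the paths reuse the example files from section 4, and treating `--incremental-mode` as a boolean option is an assumption based on the note above:

```bash
# Re-run the same load with incremental mode enabled so it resumes from the
# breakpoint recorded in the progress file (option value is an assumption)
bin/hugegraph-loader.sh -g hugegraph -f example/file/struct.json \
    -s example/file/schema.groovy --incremental-mode true
```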
 
 But if the limit of --max-parse-errors or --max-insert-errors is reached 
because too much data is invalid or network abnormality is reached, Loader will 
record these original rows that failed to insert into
@@ -827,7 +827,7 @@ In the failed file, after the user modifies the data lines 
in the failed file, s
 Of course, if there is still a problem with the modified data line, it will be 
logged again to the failure file (don't worry about duplicate lines).
 
 Each vertex map or edge map will generate its own failure file when data 
insertion fails. The failure file is divided into a parsing failure file 
(suffix .parse-error) and an insertion failure file (suffix .insert-error).
-They are stored in the `${struct}/current` directory. For example, there is a 
vertex mapping person and an edge mapping knows in the mapping file, each of 
which has some error lines. When the Loader exits, you will see the following 
files in the `${struct}/current` directory:
+They are stored in the `${struct}/current` directory. For example, if there are a vertex mapping person and an edge mapping knows in the mapping file, each of which has some error lines, you will see the following files in the `${struct}/current` directory when the Loader exits:
 
 - person-b4cd32ab.parse-error: Vertex map person parses wrong data
 - person-b4cd32ab.insert-error: Vertex map person inserts wrong data
@@ -838,7 +838,7 @@ They are stored in the `${struct}/current` directory. For 
example, there is a ve
 
 ##### 3.4.3 logs directory file description
 
-The log and error data during program execution will be written into 
hugegraph-loader.log file.
+The log and error data during program execution will be written into the 
hugegraph-loader.log file.
 
 ##### 3.4.4 Execute command
 
@@ -892,7 +892,7 @@ Edge file: `example/file/edge_created.json`
 #### 4.2 Write schema
 
 <details>
-<summary>Click to expand/collapse schema file: 
example/file/schema.groovy</summary>
+<summary>Click to expand/collapse the schema file: 
example/file/schema.groovy</summary>
 
 ```groovy
 schema.propertyKey("name").asText().ifNotExist().create();
@@ -1026,7 +1026,7 @@ If you just want to try out the loader, you can import 
the built-in example data
 
 If using custom data, before importing data with the loader, we need to copy 
the data into the container.
 
-First, following the steps in [4.1-4.3](#41-prepare-data), we can prepare the 
data and then use `docker cp` to copy the prepared data into the loader 
container.
+First, following the steps in [4.1–4.3](#41-prepare-data), we can prepare the 
data and then use `docker cp` to copy the prepared data into the loader 
container.
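
A minimal sketch of that copy step, assuming the dataset sits in a local `hugegraph-dataset` folder (as in the next paragraph), the container is named `loader`, and `/loader/dataset` is the target path used by the commands further below:

```bash
# Copy the locally prepared dataset into the running loader container
docker cp ./hugegraph-dataset loader:/loader/dataset
```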
 
 Suppose we've prepared the corresponding dataset following the above steps, 
stored in the `hugegraph-dataset` folder with the following file structure:
 
@@ -1055,9 +1055,9 @@ edge_created.json  edge_knows.json  schema.groovy  
struct.json  vertex_person.cs
 
 Taking the built-in example dataset as an example, we can use the following 
command to load the data.
 
-If you need to import your custom dataset, you just need to modify the paths 
for `-f` (data script) and `-s` (schema) configurations.
+If you need to import your custom dataset, you need to modify the paths for 
`-f` (data script) and `-s` (schema) configurations.
 
-"You can refer to [3.4.1 Parameter description](#341-parameter-description) 
for the rest of the parameters.
+You can refer to [3.4.1 Parameter description](#341-parameter-description) for the rest of the parameters.
 
 ```bash
 docker exec -it loader bin/hugegraph-loader.sh -g hugegraph -f 
example/file/struct.json -s example/file/schema.groovy -h server -p 8080
@@ -1071,7 +1071,7 @@ docker exec -it loader bin/hugegraph-loader.sh -g 
hugegraph -f /loader/dataset/s
 
 > If `loader` and `server` are in the same Docker network, you can specify `-h 
 > {server_container_name}`; otherwise, you need to specify the IP of the 
 > `server` host (in our example, `server_container_name` is `server`).
 
-Then we can obverse the result:
+Then we can see the result:
 
 ```bash
 HugeGraphLoader worked in NORMAL MODE
@@ -1125,7 +1125,7 @@ The results of the execution will be similar to those 
shown in [4.5.1](#451-use-
 > HugeGraph Toolchain version: toolchain-1.0.0
 > 
 The parameters of `spark-loader` are divided into two parts. Note: Because the 
abbreviations of 
-these two parameter names have overlapping parts, please use the full name of 
the parameter. 
+these two parameter names have overlapping parts, please use the full name of the parameter. 
 And there is no need to guarantee the order between the two parameters.
 - hugegraph parameters (Reference: [hugegraph-loader parameter 
description](https://hugegraph.apache.org/docs/quickstart/hugegraph-loader/#341-parameter-description)
 )
 - Spark task submission parameters (Reference: [Submitting 
Applications](https://spark.apache.org/docs/3.3.0/submitting-applications.html#content))
diff --git a/content/en/docs/quickstart/hugegraph-server.md 
b/content/en/docs/quickstart/hugegraph-server.md
index 439b6247..af585c60 100644
--- a/content/en/docs/quickstart/hugegraph-server.md
+++ b/content/en/docs/quickstart/hugegraph-server.md
@@ -522,7 +522,7 @@ volumes:
   hugegraph-data:
 ```
 
-In this yaml file, configuration parameters related to Cassandra need to be 
passed as environment variables in the format of `hugegraph.<parameter_name>`.
+In this YAML file, configuration parameters related to Cassandra need to be 
passed as environment variables in the format of `hugegraph.<parameter_name>`.
 
 Specifically, in the configuration file `hugegraph.properties` , there are 
settings like `backend=xxx` and `cassandra.host=xxx`. To configure these 
settings during the process of passing environment variables, we need to 
prepend `hugegraph.` to these configurations, like `hugegraph.backend` and 
`hugegraph.cassandra.host`.
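
As an illustrative sketch only (the image name `hugegraph/hugegraph` and the Cassandra hostname are assumptions, not taken from this file), the same `hugegraph.<parameter_name>` convention also works with plain `docker run`:

```bash
# Pass backend settings as environment variables with the `hugegraph.` prefix
docker run -itd --name=server -p 8080:8080 \
    -e hugegraph.backend=cassandra \
    -e hugegraph.cassandra.host=cassandra \
    hugegraph/hugegraph
```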
 
@@ -532,7 +532,7 @@ The rest of the configurations can be referenced under [4 
config](#4-config)
 
 ##### 5.2.2 Create example graph when starting server
 
-Set the environment variable `PRELOAD=true` when starting Docker in order to 
load data during the execution of the startup script.
+Set the environment variable `PRELOAD=true` when starting Docker to load data 
during the execution of the startup script.
 
 1. Use `docker run`
 
