This is an automated email from the ASF dual-hosted git repository.

achao pushed a commit to branch dev
in repository 
https://gitbox.apache.org/repos/asf/incubator-streampark-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new d0c6921  Polish connector conetnt (#349)
d0c6921 is described below

commit d0c69215ff81aed7b71b87c259406d5a18d55e0d
Author: tison <[email protected]>
AuthorDate: Thu Apr 25 13:37:33 2024 +0800

    Polish connector conetnt (#349)
    
    Signed-off-by: tison <[email protected]>
---
 docs/connector/1-kafka.md                          |  6 +--
 docs/connector/2-jdbc.md                           |  4 +-
 docs/connector/3-clickhouse.md                     |  6 +--
 docs/connector/4-doris.md                          | 25 +++++------
 docs/connector/5-es.md                             | 11 ++---
 docs/connector/6-hbase.md                          | 45 ++++++++++----------
 docs/connector/7-http.md                           | 11 +++--
 docs/connector/8-redis.md                          | 16 ++++---
 .../current/connector/1-kafka.md                   |  8 ++--
 .../current/connector/2-jdbc.md                    |  6 +--
 .../current/connector/3-clickhouse.md              | 10 ++---
 .../current/connector/4-doris.md                   | 24 ++++++-----
 .../current/connector/5-es.md                      | 12 +++---
 .../current/connector/6-hbase.md                   | 49 +++++++++++-----------
 .../current/connector/7-http.md                    | 15 ++++---
 .../current/connector/8-redis.md                   | 25 +++++------
 .../current/flinksql/connector/7-hbase.md          |  2 +-
 17 files changed, 132 insertions(+), 143 deletions(-)

diff --git a/docs/connector/1-kafka.md b/docs/connector/1-kafka.md
index 8fc5b6f..d5f8a6a 100644
--- a/docs/connector/1-kafka.md
+++ b/docs/connector/1-kafka.md
@@ -6,9 +6,9 @@ sidebar_position: 1
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[Flink 
officially](https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/connectors/kafka.html)
 provides a connector to [Apache Kafka](https://kafka.apache.org) connector for 
reading from or writing to a Kafka topic, providing **exactly once** processing 
semantics
+[Apache Flink officially](https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/connectors/kafka.html) provides an [Apache Kafka](https://kafka.apache.org) connector for reading from or writing to a Kafka topic, providing **exactly once** processing semantics.
 
-`KafkaSource` and `KafkaSink` in `StreamPark` are further encapsulated based 
on `kafka connector` from the official website, simplifying the development 
steps, making it easier to read and write data
+`KafkaSource` and `KafkaSink` in StreamPark further encapsulate the official Kafka connector, simplifying the development steps and making it easier to read and write data.
 
 ## Dependencies
 
@@ -17,7 +17,7 @@ import TabItem from '@theme/TabItem';
 
 ```xml
     <dependency>
-        <groupId>org.apache.streampark/groupId>
+        <groupId>org.apache.streampark</groupId>
         <artifactId>streampark-flink-core</artifactId>
         <version>${project.version}</version>
     </dependency>
diff --git a/docs/connector/2-jdbc.md b/docs/connector/2-jdbc.md
index 02992a0..6326633 100755
--- a/docs/connector/2-jdbc.md
+++ b/docs/connector/2-jdbc.md
@@ -7,9 +7,9 @@ sidebar_position: 2
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-Flink officially provides the 
[JDBC](https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/connectors/jdbc.html)
 connector for reading from or writing to JDBC, which can provides 
**AT_LEAST_ONCE** (at least once) processing semantics
+Apache Flink officially provides the [JDBC](https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/connectors/jdbc.html) connector for reading from or writing to JDBC, which can provide **AT_LEAST_ONCE** (at least once) processing semantics.
 
-`StreamPark` implements **EXACTLY_ONCE** (Exactly Once) semantics of 
`JdbcSink` based on two-stage commit, and uses 
[`HikariCP`](https://github.com/brettwooldridge/HikariCP) as connection pool to 
make data reading and write data more easily and accurately
+Apache StreamPark implements **EXACTLY_ONCE** (exactly once) semantics in `JdbcSink` based on a two-phase commit, and uses [`HikariCP`](https://github.com/brettwooldridge/HikariCP) as the connection pool to make reading and writing data simpler and more accurate.
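The two-phase commit idea behind an EXACTLY_ONCE `JdbcSink` can be sketched in plain Java. This is an illustrative buffer-based model, not the actual StreamPark API: writes are buffered during a checkpoint interval, frozen as a pending transaction on pre-commit, and made durable only after the checkpoint completes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the two-phase commit pattern an EXACTLY_ONCE sink relies on.
// All names here are hypothetical; StreamPark's real JdbcSink wraps this idea
// around Flink checkpoints and a HikariCP-pooled JDBC connection.
public class TwoPhaseCommitSketch {
    private final List<String> buffer = new ArrayList<>();              // writes since last checkpoint
    private final Map<Integer, List<String>> pending = new HashMap<>(); // pre-committed batches
    private final List<String> database = new ArrayList<>();            // stands in for the JDBC target

    // Phase 0: buffer the write instead of applying it immediately.
    public void invoke(String record) { buffer.add(record); }

    // Phase 1: on checkpoint, freeze the buffer as a pending transaction.
    public void preCommit(int checkpointId) {
        pending.put(checkpointId, new ArrayList<>(buffer));
        buffer.clear();
    }

    // Phase 2: once the checkpoint is globally complete, make the batch durable.
    public void commit(int checkpointId) {
        List<String> batch = pending.remove(checkpointId);
        if (batch != null) database.addAll(batch);
    }

    // On failure, the pending batch is dropped and replayed from state.
    public void abort(int checkpointId) { pending.remove(checkpointId); }

    public List<String> committed() { return database; }

    public static void main(String[] args) {
        TwoPhaseCommitSketch sink = new TwoPhaseCommitSketch();
        sink.invoke("row-1");
        sink.invoke("row-2");
        sink.preCommit(1);
        sink.invoke("row-3"); // belongs to the next checkpoint
        sink.commit(1);
        System.out.println(sink.committed()); // [row-1, row-2]
    }
}
```

Records become visible to readers only after `commit`, which is what upgrades the at-least-once guarantee of plain JDBC writes to exactly once.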
 
 ## JDBC Configuration
 
diff --git a/docs/connector/3-clickhouse.md b/docs/connector/3-clickhouse.md
index 579081f..56b8c58 100755
--- a/docs/connector/3-clickhouse.md
+++ b/docs/connector/3-clickhouse.md
@@ -9,13 +9,11 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
 [ClickHouse](https://clickhouse.com/) is a columnar database management system 
(DBMS) for online analytics (OLAP).
-Currently, Flink does not officially provide a connector for writing to 
ClickHouse and reading from ClickHouse.
+Currently, Apache Flink does not officially provide a connector for writing to 
ClickHouse and reading from ClickHouse.
 Based on the access form supported by [ClickHouse - HTTP 
client](https://clickhouse.com/docs/zh/interfaces/http/)
 and [JDBC driver](https://clickhouse.com/docs/zh/interfaces/jdbc), StreamPark 
encapsulates ClickHouseSink for writing data to ClickHouse in real-time.
 
-`ClickHouse` writes do not support transactions, using JDBC write data to it 
could provide (AT_LEAST_ONCE) semanteme. Using the HTTP client to write 
asynchronously,
-it will retry the asynchronous write multiple times. The failed data will be 
written to external components (Kafka, MySQL, HDFS, HBase),
-the data will be restored manually to achieve final data consistency.
+ClickHouse writes do not support transactions; writing data to it via JDBC can provide **AT_LEAST_ONCE** semantics. When the HTTP client is used to write asynchronously, the asynchronous write is retried multiple times; data that still fails is written to external components (Apache Kafka, MySQL, Apache Hadoop HDFS, Apache HBase) and restored manually to achieve eventual data consistency.
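The retry-then-fallback policy described above can be sketched as follows. This is an assumption-laden illustration, not StreamPark's actual ClickHouseSink code: an asynchronous write is retried a fixed number of times, and records that still fail are handed to a dead-letter store that would stand in for Kafka, MySQL, HDFS, or HBase.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of the retry-with-dead-letter policy described above. The writer
// predicate, retry count, and failure store are illustrative stand-ins.
public class RetryWithFallback {
    private final Predicate<String> writer;                    // returns true when the write succeeds
    private final int maxRetries;
    private final List<String> deadLetter = new ArrayList<>(); // would be Kafka/MySQL/HDFS/HBase

    public RetryWithFallback(Predicate<String> writer, int maxRetries) {
        this.writer = writer;
        this.maxRetries = maxRetries;
    }

    public void write(String record) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            if (writer.test(record)) return; // success: AT_LEAST_ONCE is preserved
        }
        deadLetter.add(record);              // retries exhausted: park for manual recovery
    }

    public List<String> deadLetter() { return deadLetter; }

    public static void main(String[] args) {
        // Simulated ClickHouse endpoint that rejects one specific record.
        RetryWithFallback sink = new RetryWithFallback(r -> !r.equals("bad"), 2);
        sink.write("ok-1");
        sink.write("bad");
        System.out.println(sink.deadLetter()); // [bad]
    }
}
```

Replaying the dead-letter store after manual intervention is what yields the eventual consistency the paragraph describes.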
 
 ## JDBC synchronous write
 
diff --git a/docs/connector/4-doris.md b/docs/connector/4-doris.md
index 2413f9a..6232fd9 100644
--- a/docs/connector/4-doris.md
+++ b/docs/connector/4-doris.md
@@ -10,34 +10,31 @@ import TabItem from '@theme/TabItem';
 
 ## Apache Doris Connector
 
-[Apache Doris](https://doris.apache.org/) is a high-performance, and real-time 
analytical database,
-which could support high-concurrent point query scenarios.
-StreamPark encapsulates DoirsSink for writing data to Doris in real-time, 
based on  [Doris' stream 
loads](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)
+[Apache Doris](https://doris.apache.org/) is a high-performance, real-time analytical database that supports high-concurrency point query scenarios. Apache StreamPark encapsulates `DorisSink` for writing data to Doris in real time, based on [its stream load](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html).
 
 ### Write with Apache StreamPark™
 
-Use `StreamPark` to write data to `Doris`.  DorisSink only supports JSON 
format (single-layer) writing currently,
-such as: {"id":1,"name":"streampark"} The example of the running program is 
java, as follows:
+`DorisSink` currently only supports JSON format (single-layer) writing, such as: `{"id":1,"name":"streampark"}`. The example program is in Java, as follows:
 
-#### configuration list
+#### Configuration list
 
 ```yaml
 doris.sink:
-  fenodes:  127.0.0.1:8030    //doris fe http url
-  database: test            //doris database
-  table: test_tbl           //doris table
+  fenodes:  127.0.0.1:8030    # doris fe http url
+  database: test              # doris database
+  table: test_tbl             # doris table
   user: root
   password: 123456
-  batchSize: 100         //doris sink batch size per streamload
-  intervalMs: 3000      //doris sink the time interval of each streamload
-  maxRetries: 1          //stream load retries
-  streamLoad:              //doris streamload own parameters
+  batchSize: 100          # doris sink batch size per streamload
+  intervalMs: 3000        # doris sink the time interval of each streamload
+  maxRetries: 1           # stream load retries
+  streamLoad:             # doris streamload own parameters
     format: json
     strip_outer_array: true
     max_filter_ratio: 1
 ```
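The `batchSize` and `intervalMs` settings above describe a buffer that flushes on whichever threshold is hit first. A minimal sketch of that flush policy (hypothetical; the real DorisSink drives this from a timer and issues one stream load per batch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the batch/interval flush policy implied by batchSize and intervalMs.
// Time is injected explicitly so the policy is easy to test; the real sink uses
// a timer and Doris stream load calls instead of the loader callback.
public class BatchIntervalBuffer {
    private final int batchSize;
    private final long intervalMs;
    private final Consumer<List<String>> loader; // one call per stream load batch
    private final List<String> buffer = new ArrayList<>();
    private long lastFlush;

    public BatchIntervalBuffer(int batchSize, long intervalMs, Consumer<List<String>> loader) {
        this.batchSize = batchSize;
        this.intervalMs = intervalMs;
        this.loader = loader;
    }

    public void add(String jsonRow, long nowMs) {
        buffer.add(jsonRow);
        // Flush when the batch is full or the interval has elapsed, whichever comes first.
        if (buffer.size() >= batchSize || nowMs - lastFlush >= intervalMs) {
            loader.accept(new ArrayList<>(buffer));
            buffer.clear();
            lastFlush = nowMs;
        }
    }

    public static void main(String[] args) {
        List<List<String>> loads = new ArrayList<>();
        BatchIntervalBuffer sink = new BatchIntervalBuffer(2, 3000, loads::add);
        sink.add("{\"id\":1}", 100);  // buffered
        sink.add("{\"id\":2}", 200);  // batchSize reached -> flush
        sink.add("{\"id\":3}", 4000); // intervalMs elapsed  -> flush
        System.out.println(loads.size()); // 2
    }
}
```

A larger `batchSize` improves throughput at the cost of latency; `intervalMs` caps how long a partial batch can wait.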
 
-#### write data to Doris
+#### Write data to Doris
 
 <Tabs>
 <TabItem value="Java" label="Java">
diff --git a/docs/connector/5-es.md b/docs/connector/5-es.md
index 08bdaf3..4ebc560 100755
--- a/docs/connector/5-es.md
+++ b/docs/connector/5-es.md
@@ -8,14 +8,9 @@ sidebar_position: 5
 import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
 
 [Elasticsearch](https://www.elastic.co/cn/elasticsearch/) is a distributed, 
RESTful style search and data analysis
-engine.
-[Flink 
officially](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/)
 provides a connector
-for Elasticsearch, which is used to write data to Elasticsearch, which can 
provide ** at least once** Semantics.
-
-ElasticsearchSink uses TransportClient (before 6.x) or RestHighLevelClient 
(starting with 6.x) to communicate with the
-Elasticsearch cluster.
-`StreamPark` further encapsulates Flink-connector-elasticsearch6, shields 
development details, and simplifies write
-operations for Elasticsearch6 and above.
+engine. [Apache Flink officially](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/) provides a connector for Elasticsearch that is used to write data to Elasticsearch and can provide **at least once** semantics.
+
+ElasticsearchSink uses TransportClient (before 6.x) or RestHighLevelClient (starting with 6.x) to communicate with the Elasticsearch cluster. Apache StreamPark further encapsulates `flink-connector-elasticsearch6`, hiding development details and simplifying write operations for Elasticsearch 6 and above.
 
 :::tip hint
 
diff --git a/docs/connector/6-hbase.md b/docs/connector/6-hbase.md
index 00318bc..32f87b5 100755
--- a/docs/connector/6-hbase.md
+++ b/docs/connector/6-hbase.md
@@ -1,5 +1,5 @@
 ---
-id: 'Hbase-Connector'
+id: 'HBase-Connector'
 title: 'Apache HBase Connector'
 sidebar_position: 6
 ---
@@ -7,42 +7,39 @@ sidebar_position: 6
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[Apache HBase](https://hbase.apache.org/book.html) is a highly reliable, 
high-performance, column-oriented, and scalable distributed storage system. 
Using HBase technology,
-large-scale structured storage clusters can be built on cheap PC Servers. 
Unlike general relational databases,
-HBase is a database suitable for unstructured data storage because HBase 
storage is based on a column rather than a row-based schema.
+[Apache HBase](https://hbase.apache.org/book.html) is a highly reliable, 
high-performance, column-oriented, and scalable distributed storage system. 
Using HBase technology, large-scale structured storage clusters can be built on 
cheap PC Servers. Unlike general relational databases, HBase is a database 
suitable for unstructured data storage because HBase storage is based on a 
column rather than a row-based schema.
 
-Flink does not officially provide a connector for Hbase DataStream. StreamPark 
encapsulates HBaseSource and HBaseSink based on `Hbase-client`.
-It supports automatic connection creation based on configuration and 
simplifies development. StreamPark reading Hbase can record the latest status 
of the read data when the checkpoint is enabled,
+Apache Flink does not officially provide a DataStream connector for HBase. Apache StreamPark encapsulates HBaseSource and HBaseSink based on `hbase-client`. It supports automatic connection creation based on configuration and simplifies development. StreamPark reading HBase can record the latest status of the read data when the checkpoint is enabled,
 and the offset corresponding to the source can be restored through the data itself, implementing AT_LEAST_ONCE semantics on the source side.
 
-HbaseSource implements Flink Async I/O to improve streaming throughput. The 
sink side supports AT_LEAST_ONCE by default.
+HBaseSource implements Flink Async I/O to improve streaming throughput. The 
sink side supports AT_LEAST_ONCE by default.
 EXACTLY_ONCE is supported when checkpointing is enabled.
 
 :::tip hint
+
 StreamPark reading HBase can record the latest state of the read data when checkpointing is enabled.
-Whether the previous state can be restored after the job is resumed depends 
entirely on whether the data itself has an offset identifier,
-which needs to be manually specified in the code. The recovery logic needs to 
be specified in the func parameter of the getDataStream method of HBaseSource.
+Whether the previous state can be restored after the job is resumed depends 
entirely on whether the data itself has an offset identifier, which needs to be 
manually specified in the code. The recovery logic needs to be specified in the 
func parameter of the getDataStream method of HBaseSource.
+
 :::
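The recovery idea in the tip above can be sketched as follows: the source keeps the offset of the last record it emitted, derived from the record itself by a user-supplied function (the role played by the `func` parameter of `getDataStream`). The class and method names below are illustrative, not the HBaseSource API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToLongFunction;

// Sketch of offset recovery driven by the data itself: each record carries an
// offset identifier, and a user-defined function extracts it so the source can
// resume after the restored offset.
public class OffsetTrackingSource {
    private final ToLongFunction<String> offsetOf; // user-defined: extract offset from a record
    private long lastOffset = -1;                  // state saved in the checkpoint

    public OffsetTrackingSource(ToLongFunction<String> offsetOf) { this.offsetOf = offsetOf; }

    // Emit only records newer than the restored offset, updating state as we go.
    public List<String> poll(List<String> scanned) {
        List<String> emitted = new ArrayList<>();
        for (String record : scanned) {
            long offset = offsetOf.applyAsLong(record);
            if (offset > lastOffset) {
                emitted.add(record);
                lastOffset = offset;
            }
        }
        return emitted;
    }

    public long checkpointState() { return lastOffset; }

    public void restoreState(long offset) { this.lastOffset = offset; }

    public static void main(String[] args) {
        // Records carry their offset as "offset:payload".
        OffsetTrackingSource source =
            new OffsetTrackingSource(r -> Long.parseLong(r.split(":")[0]));
        source.restoreState(2); // resume after a checkpoint taken at offset 2
        System.out.println(source.poll(List.of("1:a", "2:b", "3:c"))); // [3:c]
    }
}
```

Without such an identifier in the data, there is nothing for the restored job to compare against, which is why the tip says recovery depends entirely on the data itself.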
 
 ## Dependency of HBase writing
-HBase Maven Dependency
+
+HBase Maven Dependency:
+
 ```xml
 <dependency>
-<groupId>org.apache.hbase</groupId>
-<artifactId>hbase-client</artifactId>
-<version>${hbase.version}</version>
+  <groupId>org.apache.hbase</groupId>
+  <artifactId>hbase-client</artifactId>
+  <version>${hbase.version}</version>
 </dependency>
-```
-```xml
-
 <dependency>
-<groupId>org.apache.hbase</groupId>
-<artifactId>hbase-common</artifactId>
-<version>${hbase.version}</version>
+  <groupId>org.apache.hbase</groupId>
+  <artifactId>hbase-common</artifactId>
+  <version>${hbase.version}</version>
 </dependency>
 ```
 
-## Regular way to write and read Hbase
+## Regular way to write and read HBase
 ### 1.Create database and table
      create 'Student', {NAME => 'Stulnfo', VERSIONS => 3}, {NAME =>'Grades', 
BLOCKCACHE => true}
 ### 2.Write demo and Read demo
@@ -240,10 +237,10 @@ class HBaseWriter extends RichSinkFunction<String> {
 </Tabs>
 
 Reading and writing HBase in this way is cumbersome and inconvenient. 
`StreamPark` follows the concept of convention over configuration and automatic 
configuration.
-Users only need to configure Hbase connection parameters and Flink operating 
parameters. StreamPark will automatically assemble source and sink,
+Users only need to configure HBase connection parameters and Flink operating 
parameters. StreamPark will automatically assemble source and sink,
 which greatly simplifies development logic and improves development efficiency 
and maintainability.
 
-## write and read Hbase with Apache StreamPark™
+## Write and read HBase with Apache StreamPark™
 
 ### 1. Configure policies and connection information
 
@@ -260,7 +257,7 @@ hbase:
 
 ### 2. Read and write HBase
 
-Writing to Hbase with StreamPark is very simple, the code is as follows:
+Writing to HBase with StreamPark is very simple, the code is as follows:
 
 <Tabs>
 <TabItem value="read HBase">
@@ -391,7 +388,7 @@ class HBaseSource(@(transient@param) val ctx: 
StreamingContext, property: Proper
 
 }
 ```
-StreamPark HbaseSource implements flink Async I/O, which is used to improve 
the throughput of Streaming: first create a DataStream,
+StreamPark HBaseSource implements Flink Async I/O, which improves streaming throughput: first create a DataStream,
 then create an HBaseRequest and call requestOrdered() or requestUnordered() to 
create an asynchronous stream, as follows:
 ```scala
 class HBaseRequest[T: TypeInformation](@(transient@param) private val stream: 
DataStream[T], property: Properties = new Properties()) {
diff --git a/docs/connector/7-http.md b/docs/connector/7-http.md
index abcd2ef..ef180a5 100755
--- a/docs/connector/7-http.md
+++ b/docs/connector/7-http.md
@@ -1,6 +1,6 @@
 ---
-id: 'Http-Connector'
-title: 'Http Connector'
+id: 'HTTP-Connector'
+title: 'HTTP Connector'
 original: true
 sidebar_position: 7
 ---
@@ -8,8 +8,8 @@ sidebar_position: 7
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-Some background services receive data through HTTP requests. In this scenario, 
Flink can write result data through HTTP
-requests. Currently, Flink officially does not provide a connector for writing 
data through HTTP requests. StreamPark
+Some background services receive data through HTTP requests. In this scenario, 
Apache Flink can write result data through HTTP
+requests. Currently, Flink officially does not provide a connector for writing 
data through HTTP requests. Apache StreamPark
 encapsulates HttpSink to write data asynchronously in real-time based on 
asynchttpclient.
 
 `HttpSink` writes do not support transactions, writing data to the target 
service provides AT_LEAST_ONCE semantics. Data
@@ -21,7 +21,6 @@ will be restored manually to achieve final data consistency.
 Asynchronous writing uses asynchttpclient as the client, you need to import 
the jar of asynchttpclient first.
 
 ```xml
-
 <dependency>
     <groupId>org.asynchttpclient</groupId>
     <artifactId>async-http-client</artifactId>
@@ -33,7 +32,7 @@ Asynchronous writing uses asynchttpclient as the client, you 
need to import the
 
 ### http asynchronous write support type
 
-HttpSink supports get , post , patch , put , delete , options , trace of http 
protocol. Corresponding to the method of
+HttpSink supports GET, POST, PATCH, PUT, DELETE, OPTIONS, and TRACE of the HTTP protocol, each corresponding to the method of
 the same name of HttpSink, the specific information is as follows:
 
 <TabItem value="Scala" label="Scala">
diff --git a/docs/connector/8-redis.md b/docs/connector/8-redis.md
index c465909..269489b 100644
--- a/docs/connector/8-redis.md
+++ b/docs/connector/8-redis.md
@@ -9,19 +9,24 @@ import TabItem from '@theme/TabItem';
 
 [Redis](http://www.redis.cn/) is an open source in-memory data structure 
storage system that can be used as a database, cache, and messaging middleware. 
It supports many types of data structures such as strings, hashes, lists, sets, 
ordered sets and range queries, bitmaps, hyperlogloglogs and geospatial index 
radius queries. Redis has built-in transactions and various levels of disk 
persistence, and provides high availability through Redis Sentinel and Cluster.
 
- Flink does not officially provide a connector for writing reids 
data.StreamPark is based on [Flink Connector 
Redis](https://bahir.apache.org/docs/flink/current/flink-streaming-redis/)
+Apache Flink does not officially provide a connector for writing Redis data. Apache StreamPark is based on [Flink Connector Redis](https://bahir.apache.org/docs/flink/current/flink-streaming-redis/).
+
 It encapsulates RedisSink, configures Redis connection parameters, and automatically creates Redis connections to simplify development. Currently, RedisSink supports single-node mode and sentinel mode; cluster mode is not supported because it does not support transactions.
 
-StreamPark uses Redis' **MULTI** command to open a transaction and the 
**EXEC** command to commit a transaction, see the link for details:
-http://www.redis.cn/topics/transactions.html , using RedisSink supports 
AT_LEAST_ONCE (at least once) processing semantics by default. EXACTLY_ONCE 
semantics are supported with checkpoint enabled.
+StreamPark uses Redis' **MULTI** command to open a transaction and the **EXEC** command to commit it (see http://www.redis.cn/topics/transactions.html for details). RedisSink provides AT_LEAST_ONCE processing semantics by default; EXACTLY_ONCE semantics are supported when checkpointing is enabled.
 
 :::tip tip
-redis is a key,value type database, AT_LEAST_ONCE semantics flink job with 
abnormal restart the latest data will overwrite the previous version of data to 
achieve the final data consistency. If an external program reads the data 
during the restart, there is a risk of inconsistency with the final data.
+
+Redis is a key-value database. Under AT_LEAST_ONCE semantics, when a Flink job restarts abnormally, the latest data overwrites the previous version to achieve eventual data consistency. If an external program reads the data during the restart, there is a risk of inconsistency with the final data.
+
 Under EXACTLY_ONCE semantics, data is written to Redis in batches when the Flink job checkpoint completes as a whole, which introduces a delay of one checkpoint interval. Please choose the appropriate semantics according to the business scenario.
+
 :::
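The trade-off in the tip above, batched atomic writes on checkpoint completion versus immediate writes, can be sketched with a MULTI/EXEC-style buffer. This is illustrative only; StreamPark's RedisSink drives this from Flink checkpoints and real Redis transactions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of EXACTLY_ONCE-style Redis writing: commands are queued (as inside a
// Redis MULTI block) and applied atomically only when the checkpoint completes
// (the EXEC step). Readers between checkpoints never see partial batches.
public class CheckpointRedisSketch {
    private final Map<String, String> store = new HashMap<>(); // stands in for Redis
    private final List<String[]> queued = new ArrayList<>();   // the open MULTI block

    public void set(String key, String value) { queued.add(new String[]{key, value}); }

    // Called when the Flink checkpoint completes; maps to EXEC.
    public void onCheckpointComplete() {
        for (String[] kv : queued) store.put(kv[0], kv[1]);
        queued.clear();
    }

    public String get(String key) { return store.get(key); }

    public static void main(String[] args) {
        CheckpointRedisSketch sink = new CheckpointRedisSketch();
        sink.set("user:1", "a");
        System.out.println(sink.get("user:1")); // null -- not visible before EXEC
        sink.onCheckpointComplete();
        System.out.println(sink.get("user:1")); // a
    }
}
```

The queue makes the checkpoint-interval delay visible: nothing written after the last completed checkpoint is readable until the next one.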
 
 ## Redis Write Dependency
-Flink Connector Redis officially provides two kinds, the following two api are 
the same, StreamPark is using org.apache.bahir dependency.
+
+Flink Connector Redis is officially provided in two variants; the following two APIs are equivalent. StreamPark uses the `org.apache.bahir` dependency.
+
 ```xml
 <dependency>
     <groupId>org.apache.bahir</groupId>
@@ -29,6 +34,7 @@ Flink Connector Redis officially provides two kinds, the 
following two api are t
     <version>1.0</version>
 </dependency>
 ```
+
 ```xml
 <dependency>
     <groupId>org.apache.flink</groupId>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md
index 878b53f..b881c0c 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/1-kafka.md
@@ -6,13 +6,13 @@ sidebar_position: 1
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[Flink 
官方](https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/connectors/kafka.html)提供了[Apache
 Kafka](http://kafka.apache.org)的连接器,用于从 Kafka topic 中读取或者向其中写入数据,可提供 **精确一次** 
的处理语义
+[Apache Flink 
官方](https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/connectors/kafka.html)提供了
 [Apache Kafka](http://kafka.apache.org) 的连接器,用于从 Kafka 
主题中读取或者向其中写入数据,可提供**精确一次**的处理语义。
 
-`StreamPark`中`KafkaSource`和`KafkaSink`基于官网的`kafka 
connector`进一步封装,屏蔽很多细节,简化开发步骤,让数据的读取和写入更简单
+Apache StreamPark 中 `KafkaSource` 和 `KafkaSink` 基于官网的 Kafka Connector 
进一步封装,屏蔽了很多细节,简化开发步骤,让数据的读取和写入更简单。
 
 ## 依赖
 
-[Apache 
Flink](https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/connectors/kafka.html)
 集成了通用的 Kafka 连接器,它会尽力与 Kafka client 的最新版本保持同步。该连接器使用的 Kafka client 版本可能会在 
Flink 版本之间发生变化。 当前 Kafka client 向后兼容 0.10.0 或更高版本的 Kafka broker。 有关 Kafka 
兼容性的更多细节,请参考 
[Kafka](https://kafka.apache.org/protocol.html#protocol_compatibility) 官方文档。
+[Apache 
Flink](https://ci.apache.org/projects/flink/flink-docs-release-1.12/zh/dev/connectors/kafka.html)
 集成了通用的 Kafka 连接器,它会尽力与 Kafka client 的最新版本保持同步。该连接器使用的 Kafka client 版本可能会在 
Flink 版本之间发生变化。当前 Kafka client 向后兼容 0.10.0 或更高版本的 Kafka broker。有关 Kafka 
兼容性的更多细节,请参考 [Apache 
Kafka](https://kafka.apache.org/protocol.html#protocol_compatibility) 的官方文档。
 
 ```xml
     <!--必须要导入的依赖-->
@@ -31,7 +31,7 @@ import TabItem from '@theme/TabItem';
 
 ```
 
-同时在开发阶段,以下的依赖也是必要的
+同时在开发阶段,以下的依赖也是必要的:
 
 ```xml
     <!--以下scope为provided的依赖也是必须要导入的-->
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/2-jdbc.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/2-jdbc.md
index 3ae5cf9..0f2bc7c 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/2-jdbc.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/2-jdbc.md
@@ -7,13 +7,13 @@ sidebar_position: 2
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-Flink 官方 
提供了[JDBC](https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/connectors/jdbc.html)的连接器,用于从
 JDBC 中读取或者向其中写入数据,可提供 **AT_LEAST_ONCE** (至少一次)的处理语义
+Apache Flink 
官方提供了[JDBC](https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/connectors/jdbc.html)的连接器,用于从
 JDBC 中读取或者向其中写入数据,可提供至少一次(**AT_LEAST_ONCE**)的处理语义。
 
-`StreamPark`中基于两阶段提交实现了 **EXACTLY_ONCE** 
(精确一次)语义的`JdbcSink`,并且采用[`HikariCP`](https://github.com/brettwooldridge/HikariCP)为连接池,让数据的读取和写入更简单更准确
+Apache StreamPark 中基于两阶段提交实现了精确一次(**EXACTLY_ONCE**)语义的 `JdbcSink` 类,并且采用 
[`HikariCP`](https://github.com/brettwooldridge/HikariCP) 为连接池,让数据的读取和写入更简单更准确。
 
 ## JDBC 信息配置
 
-在`StreamPark`中`JDBC Connector`的实现用到了[` HikariCP 
`](https://github.com/brettwooldridge/HikariCP)连接池,相关的配置在`jdbc`的namespace下,约定的配置如下:
+在 Apache StreamPark 中,JDBC Connector 的实现用到了 [`HikariCP`](https://github.com/brettwooldridge/HikariCP) 连接池,相关的配置在 `jdbc` 的 namespace 下,约定的配置如下:
 
 ```yaml
 jdbc:
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/3-clickhouse.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/3-clickhouse.md
index 2d59021..f5c04ce 100755
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/3-clickhouse.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/3-clickhouse.md
@@ -8,16 +8,14 @@ sidebar_position: 3
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[ClickHouse](https://clickhouse.com/)是一个用于联机分析(OLAP)的列式数据库管理系统(DBMS),主要面向OLAP场景。目前flink官方未提供写入
-读取clickhouse数据的连接器。StreamPark 基于ClickHouse 
支持的访问形式[HTTP客户端](https://clickhouse.com/docs/zh/interfaces/http/)、
-[JDBC驱动](https://clickhouse.com/docs/zh/interfaces/jdbc/)封装了ClickHouseSink用于向clickhouse实时写入数据。
+[ClickHouse](https://clickhouse.com/) 是一个用于联机分析(OLAP)的列式数据库管理系统,主要面向 OLAP 
场景。目前 Apache Flink 官方未提供写入
+读取 ClickHouse 数据的连接器。Apache StreamPark 基于 ClickHouse 支持的访问形式 [HTTP 
客户端](https://clickhouse.com/docs/zh/interfaces/http/)、[JDBC 
驱动](https://clickhouse.com/docs/zh/interfaces/jdbc/)封装了 `ClickHouseSink` 用于向 
ClickHouse 实时写入数据。
 
-`ClickHouse`写入不支持事务,使用 JDBC 向其中写入数据可提供 AT_LEAST_ONCE (至少一次)的处理语义。使用 HTTP客户端 
异步写入,对异步写入重试多次
-失败的数据会写入外部组件(kafka,mysql,hdfs,hbase),最终通过人为介入来恢复数据,实现最终数据一致。
+ClickHouse 写入不支持事务,使用 JDBC 向其中写入数据可提供至少一次的处理语义。使用 HTTP 
客户端异步写入,对异步写入重试多次失败的数据会写入外部组件,最终通过人为介入来恢复数据,实现最终数据一致。
 
 ## JDBC 同步写入
 
-[ClickHouse](https://clickhouse.com/)提供了[JDBC驱动](https://clickhouse.com/docs/zh/interfaces/jdbc/),需要先导入clickhouse的jdbc驱动包
+[ClickHouse](https://clickhouse.com/) 提供了 [JDBC 
驱动](https://clickhouse.com/docs/zh/interfaces/jdbc/),需要先导入 ClickHouse 的 JDBC 
驱动包:
 
 ```xml
 <dependency>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md
index ae2f161..cd85d00 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/4-doris.md
@@ -8,27 +8,28 @@ sidebar_position: 4
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[Apache Doris](https://doris.apache.org/)是一款基于大规模并行处理技术的分布式 SQL 数据库,主要面向 OLAP 
场景。
-StreamPark 基于Doris的[stream 
load](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html)封装了DoirsSink用于向Doris实时写入数据。
+[Apache Doris](https://doris.apache.org/) 是一款基于大规模并行处理技术的分布式 SQL 数据库,主要面向 OLAP 
场景。
+Apache StreamPark 基于 Doris 的 [stream load](https://doris.apache.org/administrator-guide/load-data/stream-load-manual.html) 封装了 DorisSink 用于向 Doris 实时写入数据。
 
 ### Apache StreamPark™ 方式写入
 
-用`StreamPark`写入 `doris`的数据, 目前 DorisSink 只支持 JSON 
格式(单层)写入,如:{"id":1,"name":"streampark"}
-运行程序样例为java,如下:
+目前 DorisSink 只支持 JSON 格式(单层)写入,如 `{"id":1,"name":"streampark"}`。
+
+示例程序是 Java 程序,具体如下。
 
 #### 配置信息
 
 ```yaml
 doris.sink:
-  fenodes:  127.0.0.1:8030    //doris fe http 请求地址
-  database: test            //doris database
-  table: test_tbl           //doris table
+  fenodes:  127.0.0.1:8030    # doris fe http 请求地址
+  database: test              # doris database
+  table: test_tbl             # doris table
   user: root
   password: 123456
-  batchSize: 100         //doris sink 每次streamload的批次大小
-  intervalMs: 3000      //doris sink 每次streamload的时间间隔
-  maxRetries: 1          //stream load的重试次数
-  streamLoad:              //doris streamload 自身的参数
+  batchSize: 100         # doris sink 每次 streamload 的批次大小
+  intervalMs: 3000       # doris sink 每次 streamload 的时间间隔
+  maxRetries: 1          # stream load 的重试次数
+  streamLoad:            # doris streamload 自身的参数
     format: json
     strip_outer_array: true
     max_filter_ratio: 1
@@ -74,4 +75,5 @@ public class DorisJavaApp {
 
 建议设置 batchSize 来批量插入数据提高性能,同时为了提高实时性,支持间隔时间 intervalMs 来批次写入<br></br>
 可以通过设置 maxRetries 来增加streamload的重试次数。
+
 :::
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md
index 440c745..2132a75 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/5-es.md
@@ -8,16 +8,14 @@ sidebar_position: 5
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[Elasticsearch](https://www.elastic.co/cn/elasticsearch/) 是一个分布式、RESTful 
风格的搜索和数据分析引擎。
-[Flink 
官方](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/)提供了[Elasticsearch](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/datastream/elasticsearch/)的连接器,用于向
 elasticsearch 中写入数据,可提供 **至少一次** 的处理语义
+[Elasticsearch](https://www.elastic.co/cn/elasticsearch/) 是一个分布式的、RESTful 
风格的搜索和数据分析引擎。[Apache Flink 
官方](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/)提供了
 
[Elasticsearch](https://nightlies.apache.org/flink/flink-docs-release-1.14/zh/docs/connectors/datastream/elasticsearch/)
 的连接器,用于向 ElasticSearch 中写入数据,可提供 **至少一次** 的处理语义。
 
-ElasticsearchSink 使用 TransportClient(6.x 之前)或者 RestHighLevelClient(6.x 开始)和 
Elasticsearch 集群进行通信,
-`StreamPark`对 flink-connector-elasticsearch6 
进一步封装,屏蔽开发细节,简化Elasticsearch6及以上的写入操作。
+ElasticsearchSink 使用 TransportClient(6.x 之前)或者 RestHighLevelClient(6.x 开始)和 
Elasticsearch 集群进行通信,Apache StreamPark 对 flink-connector-elasticsearch6 
进一步封装,屏蔽开发细节,简化 Elasticsearch6 及以上的写入操作。
 
 :::tip 提示
-因为Flink Connector Elasticsearch 
不同版本之间存在冲突`StreamPark`暂时仅支持Elasticsearch6及以上的写入操作,如需写入Elasticsearch5需要使用者排除
-flink-connector-elasticsearch6 依赖,引入 flink-connector-elasticsearch5依赖 创建
-org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink 实例写入数据。
+
+因为 Flink Connector Elasticsearch 不同版本之间存在冲突,StreamPark 暂时仅支持 Elasticsearch6 
及以上的写入操作,如需写入 Elasticsearch5 集群,需要使用者排除 `flink-connector-elasticsearch6` 依赖,引入 
`flink-connector-elasticsearch5` 依赖。创建 
`org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink` 实例写入数据。
+
 :::
 
 ## Elasticsearch 写入依赖
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/6-hbase.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/6-hbase.md
index 3de3592..03122d4 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/6-hbase.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/6-hbase.md
@@ -1,5 +1,5 @@
 ---
-id: 'Hbase-Connector'
+id: 'HBase-Connector'
 title: 'Apache HBase Connector'
 sidebar_position: 6
 ---
@@ -7,37 +7,36 @@ sidebar_position: 6
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[Apache 
HBase](https://hbase.apache.org/book.html)是一个高可靠性、高性能、面向列、可伸缩的分布式存储系统,利用HBase技术可在廉价PC
 Server
-上搭建起大规模结构化存储集群。 HBase不同于一般的关系数据库,它是一个适合于非结构化数据存储的数据库,HBase基于列的而不是基于行的模式。
+[Apache HBase](https://hbase.apache.org/book.html) 是一个高可靠性、高性能、面向列、可伸缩的分布式存储系统,利用 HBase 技术可在廉价服务器上搭建起大规模结构化存储集群。HBase 不同于一般的关系数据库,它是一个适合于非结构化数据存储的数据库,HBase 基于列而不是基于行的模式。
 
-flink官方未提供Hbase DataStream的连接器。StreamPark 
基于`Hbase-client`封装了HBaseSource、HBaseSink,支持依据配置自动创建连接,简化开发。
-StreamPark 
读取Hbase在开启chekpoint情况下可以记录读取数据的最新状态,通过数据本身标识可以恢复source对应偏移量。实现source端AT_LEAST_ONCE(至少一次语义)。
-HbaseSource 实现了flink Async I/O,用于提升streaming的吞吐量,sink端默认支持AT_LEAST_ONCE 
(至少一次)的处理语义。在开启checkpoint情况下支持EXACTLY_ONCE()精确一次语义。
+Apache Flink 官方未提供 HBase DataStream 的连接器。Apache StreamPark 基于 HBase client 封装了 HBaseSource、HBaseSink,支持依据配置自动创建连接,简化开发。StreamPark 读取 HBase 在开启 checkpoint 情况下可以记录读取数据的最新状态,通过数据本身标识可以恢复 source 对应偏移量,实现 source 端至少一次语义。
+
+HBaseSource 实现了 Flink 的 Async I/O 接口,可以提升流处理的吞吐量。Sink 端默认支持至少一次的处理语义。在开启 
checkpoint 情况下支持精确一次语义。
 
 :::tip 提示
-StreamPark 
读取HBASE在开启chekpoint情况下可以记录读取数据的最新状态,作业恢复后从是否可以恢复之前状态完全取决于数据本身是否有偏移量的标识,需要在代码手动指定。
-在HBaseSource的getDataStream方法func参数指定恢复逻辑。
+
+StreamPark 读取 HBase 在开启 checkpoint 情况下可以记录读取数据的最新状态,作业恢复后是否可以恢复之前的状态完全取决于数据本身是否有偏移量的标识,需要在代码中手动指定。在 HBaseSource 的 getDataStream 方法的 func 参数中指定恢复逻辑。
+
 :::
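
上述提示中"依据数据自身偏移量标识恢复"的思路,可以用一个纯 Java 的最小示意来说明(假设 rowkey 形如"业务前缀_偏移量",类名与 rowkey 格式均为举例,并非 StreamPark 的真实 API;实际恢复逻辑应通过 getDataStream 的 func 参数传入):

```java
import java.util.Arrays;
import java.util.List;

public class RowKeyOffset {
    // 从 rowkey(如 "u1_100")中解析出末尾的偏移量标识
    public static long extractOffset(String rowKey) {
        int idx = rowKey.lastIndexOf('_');
        return Long.parseLong(rowKey.substring(idx + 1));
    }

    // 已读 rowkey 中的最大偏移量即作业恢复时的续读位置
    public static long resumeOffset(List<String> rowKeys) {
        return rowKeys.stream().mapToLong(RowKeyOffset::extractOffset).max().orElse(0L);
    }

    public static void main(String[] args) {
        System.out.println(resumeOffset(Arrays.asList("u1_100", "u2_300", "u3_200"))); // 300
    }
}
```
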
 
 ## HBase写入依赖
-HBase Maven依赖
+
+HBase Maven 依赖:
+
 ```xml
 <dependency>
-<groupId>org.apache.hbase</groupId>
-<artifactId>hbase-client</artifactId>
-<version>${hbase.version}</version>
+  <groupId>org.apache.hbase</groupId>
+  <artifactId>hbase-client</artifactId>
+  <version>${hbase.version}</version>
 </dependency>
-```
-```xml
-
 <dependency>
-<groupId>org.apache.hbase</groupId>
-<artifactId>hbase-common</artifactId>
-<version>${hbase.version}</version>
+  <groupId>org.apache.hbase</groupId>
+  <artifactId>hbase-common</artifactId>
+  <version>${hbase.version}</version>
 </dependency>
 ```
 
-## 常规方式写入读取Hbase
+## 常规方式写入读取 HBase
 ### 1.创建库表
      create 'Student', {NAME => 'Stulnfo', VERSIONS => 3}, {NAME =>'Grades', 
BLOCKCACHE => true}
 ### 2.写入读取demo
@@ -234,9 +233,9 @@ class HBaseWriter extends RichSinkFunction<String> {
 
 </Tabs>
 
-以方式读写Hbase较繁琐,非常的不灵敏。`StreamPark`使用约定大于配置、自动配置的方式只需要配置Hbase连接参数、flink运行参数,StreamPark
 会自动组装source和sink,极大的简化开发逻辑,提升开发效率和维护性。
+以上方式读写 HBase 较为繁琐,非常不灵活。StreamPark 使用约定大于配置、自动配置的方式,只需要配置 HBase 连接参数和 Flink 运行参数,StreamPark 会自动组装 source 和 sink,极大地简化开发逻辑,提升开发效率和可维护性。
 
-## Apache StreamPark™ 读写 Hbase
+## Apache StreamPark™ 读写 HBase
 
 ### 1. 配置策略和连接信息
 
@@ -251,8 +250,8 @@ hbase:
 
 ```
 
-### 2. 读写入Hbase
-用 StreamPark 写入Hbase非常简单,代码如下:
+### 2. 读写 HBase
+用 StreamPark 读写 HBase 非常简单,代码如下:
 
 <Tabs>
 <TabItem value="读取HBase">
@@ -358,7 +357,7 @@ object HBaseSinkApp extends FlinkStreaming {
 </TabItem>
 </Tabs>
 
-StreamPark 写入Hbase 需要创建HBaseQuery的方法、指定将查询结果转化为需要对象的方法、标识是否在运行、传入运行参数。具体如下:
+StreamPark 读取 HBase 需要创建 HBaseQuery 的方法、指定将查询结果转化为目标对象的方法、标识是否在运行的状态、传入运行参数。具体如下:
 ```scala
 /**
  * @param ctx
@@ -384,7 +383,7 @@ class HBaseSource(@(transient@param) val ctx: 
StreamingContext, property: Proper
 
 }
 ```
-StreamPark HbaseSource 实现了flink Async I/O 用于提升Streaming的吞吐量,先创建 DataStream 
然后创建 HBaseRequest 调用
+StreamPark HBaseSource 实现了 Flink 的 Async I/O,用于提升 Streaming 的吞吐量:先创建 DataStream,然后创建 HBaseRequest,调用
+requestOrdered() 或者 requestUnordered() 创建异步流,见如下代码:
 ```scala
 class HBaseRequest[T: TypeInformation](@(transient@param) private val stream: 
DataStream[T], property: Properties = new Properties()) {
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/7-http.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/7-http.md
index 0a6aca1..2e10e29 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/7-http.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/7-http.md
@@ -1,6 +1,6 @@
 ---
-id: 'Http-Connector'
-title: 'Http Connector'
+id: 'HTTP-Connector'
+title: 'HTTP Connector'
 original: true
 sidebar_position: 7
 ---
@@ -8,15 +8,14 @@ sidebar_position: 7
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-一些后台服务通过http请求接收数据,这种场景下flink可以通过http请求写入结果数据,目前flink官方未提供通过http请求写入
-数据的连接器。StreamPark 基于asynchttpclient封装了HttpSink异步实时写入数据。
+一些后台服务通过 HTTP 请求接收数据,这种场景下 Apache Flink 可以通过 HTTP 请求写入结果数据,目前 Apache Flink 官方未提供通过 HTTP 请求写入
+数据的连接器。Apache StreamPark 基于 asynchttpclient 封装了 HttpSink,实现异步实时写入数据。
 
-`HttpSink`写入不支持事务,向目标服务写入数据可提供 AT_LEAST_ONCE 
(至少一次)的处理语义。异步写入重试多次失败的数据会写入外部组件(kafka,mysql,hdfs,hbase)
-,最终通过人为介入来恢复数据,达到最终数据一致。
+`HttpSink` 
写入不支持事务,向目标服务写入数据可提供至少一次的处理语义。异步写入重试多次失败的数据会写入外部组件,最终通过人为介入来恢复数据,达到最终数据一致。
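
上述"重试多次失败后写入外部组件、人工介入恢复"的思路,可以用如下与框架无关的 Java 最小示意说明(类名与参数均为假设,并非 HttpSink 的真实实现):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class RetryWithDeadLetter {
    // 最多尝试 maxRetries 次;全部失败则交给 deadLetter(对应写入 Kafka/MySQL/HDFS/HBase 等外部组件),
    // 由人工介入恢复,达到最终数据一致
    public static boolean send(Supplier<Boolean> post, int maxRetries,
                               Consumer<String> deadLetter, String payload) {
        for (int i = 0; i < maxRetries; i++) {
            if (post.get()) {       // 模拟一次 HTTP 写入,true 表示成功
                return true;
            }
        }
        deadLetter.accept(payload); // 多次重试仍失败:落入外部组件等待人工恢复
        return false;
    }

    public static void main(String[] args) {
        List<String> deadLetters = new ArrayList<>();
        boolean ok = send(() -> false, 3, deadLetters::add, "{\"id\":1}");
        System.out.println(ok + " " + deadLetters); // false [{"id":1}]
    }
}
```
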
 
+## HTTP 异步写入
 
-## http异步写入
-异步写入采用 asynchttpclient 作为客户端,需要先导入 asynchttpclient 的jar
+异步写入采用 asynchttpclient 作为客户端,需要先导入 asynchttpclient 相关依赖:
 
 ```xml
 <dependency>
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/8-redis.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/8-redis.md
index 0199239..83eb77c 100755
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/8-redis.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/connector/8-redis.md
@@ -7,24 +7,24 @@ sidebar_position: 8
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
-[Redis](http://www.redis.cn/)是一个开源内存数据结构存储系统,它可以用作数据库、缓存和消息中间件。 它支持多种类型的数据
-结构,如 字符串(strings), 散列(hashes), 列表(lists), 集合(sets), 有序集合(sorted sets) 与范围查询, 
bitmaps,
-hyperloglogs 和 地理空间(geospatial) 索引半径查询。 Redis 内置了事务(transactions) 和不同级别的 
磁盘持久化(persistence),
-并通过 Redis哨兵(Sentinel)和自动 分区(Cluster)提供高可用性(high availability)。
+[Redis](http://www.redis.cn/) 是一个开源内存数据结构存储系统,它可以用作数据库、缓存和消息中间件。它支持多种类型的数据结构,如字符串(strings)、散列(hashes)、列表(lists)、集合(sets)、有序集合(sorted sets)与范围查询,以及 bitmaps、hyperloglogs 和地理空间(geospatial)索引半径查询。Redis 内置了事务(transactions)和不同级别的磁盘持久化(persistence),并通过 Redis 哨兵(Sentinel)和自动分区(Cluster)提供高可用性(high availability)。
 
-flink官方未提供写入reids数据的连接器。StreamPark 基于[Flink Connector 
Redis](https://bahir.apache.org/docs/flink/current/flink-streaming-redis/)
-封装了RedisSink、配置redis连接参数,即可自动创建redis连接简化开发。目前RedisSink支持连接方式有:单节点模式、哨兵模式,因集群模式不支持事务,目前未支持。
+Apache Flink 官方未提供写入 Redis 数据的连接器。Apache StreamPark 基于 [Flink Connector Redis](https://bahir.apache.org/docs/flink/current/flink-streaming-redis/) 封装了 RedisSink,配置 Redis 连接参数即可自动创建 Redis 连接,简化开发。目前 RedisSink 支持的连接方式有:单节点模式、哨兵模式;因集群模式不支持事务,目前暂未支持。
 
-StreamPark 使用Redis的 **MULTI** 命令开启事务,**EXEC** 命令提交事务,细节见链接:
-http://www.redis.cn/topics/transactions.html ,使用RedisSink 默认支持AT_LEAST_ONCE 
(至少一次)的处理语义。在开启checkpoint情况下支持EXACTLY_ONCE语义。
+StreamPark 使用 Redis 的 **MULTI** 命令开启事务,**EXEC** 命令提交事务,细节见:http://www.redis.cn/topics/transactions.html。RedisSink 默认支持 AT_LEAST_ONCE 的处理语义,在开启 checkpoint 情况下支持 EXACTLY_ONCE 语义。
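
"MULTI 入队、EXEC 批量提交"的事务语义可以用如下内存版 Java 示意来理解(仅为概念演示,非真实 Redis 客户端;对应"checkpoint 整体完成后批量写入"的行为):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiExecSketch {
    private final Map<String, String> store = new HashMap<>();
    private final List<Runnable> queued = new ArrayList<>();
    private boolean inMulti = false;

    public void multi() { inMulti = true; }          // 开启事务
    public void set(String key, String value) {
        Runnable op = () -> store.put(key, value);
        if (inMulti) queued.add(op); else op.run();  // 事务内命令只入队,不落库
    }
    public void exec() {                             // 提交事务:批量执行入队的命令
        queued.forEach(Runnable::run);
        queued.clear();
        inMulti = false;
    }
    public String get(String key) { return store.get(key); }

    public static void main(String[] args) {
        MultiExecSketch r = new MultiExecSketch();
        r.multi();
        r.set("a", "1");
        System.out.println(r.get("a")); // null:EXEC 之前不可见
        r.exec();
        System.out.println(r.get("a")); // 1
    }
}
```
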
 
 :::tip 提示
-redis 
为key,value类型数据库,AT_LEAST_ONCE语义下flink作业出现异常重启后最新的数据会覆盖上一版本数据,达到最终数据一致。如果有外部程序在重启期间读取了数据会有和最终数据不一致的风险。
+
+Redis 是 key-value 类型的数据库,AT_LEAST_ONCE 语义下 Flink 
作业出现异常重启后最新的数据会覆盖上一版本数据,达到最终数据一致。如果有外部程序在重启期间读取了数据会有和最终数据不一致的风险。
+
 
EXACTLY_ONCE 语义下会在 Flink 作业 checkpoint 整体完成的情况下批量写入 Redis,会有一个 checkpoint 时间间隔的延时。请根据业务场景选择合适的语义。
+
 :::
 
 ## Redis写入依赖
-Flink Connector Redis 官方提供两种,以下两种api均相同,StreamPark 使用的是org.apache.bahir依赖
+
+Flink Connector Redis 官方提供了两种依赖,两种依赖提供的 API 完全相同,StreamPark 使用的是 `org.apache.bahir` 的依赖:
+
 ```xml
 <dependency>
     <groupId>org.apache.bahir</groupId>
@@ -32,6 +32,7 @@ Flink Connector Redis 官方提供两种,以下两种api均相同,StreamPark
     <version>1.0</version>
 </dependency>
 ```
+
 ```xml
 <dependency>
     <groupId>org.apache.flink</groupId>
@@ -40,9 +41,9 @@ Flink Connector Redis 官方提供两种,以下两种api均相同,StreamPark
 </dependency>
 ```
 
-## 常规方式写Redis
+## 常规方式写 Redis
 
-常规方式下使用Flink Connector Redis写入数据的方式如下:
+常规方式下使用 Flink Connector Redis 写入数据的方式如下:
 
 ### 1.接入source
 
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/flinksql/connector/7-hbase.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/flinksql/connector/7-hbase.md
index 356e229..8729969 100644
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/flinksql/connector/7-hbase.md
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/flinksql/connector/7-hbase.md
@@ -1,6 +1,6 @@
 ---
 id: '7-hbase'
-title: 'Hbase'
+title: 'HBase'
 sidebar_position: 7
 ---
 

