This is an automated email from the ASF dual-hosted git repository.

luzhijing pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 45df3b28e59 [docs](deadlink) Fix Dead Link of EN Version (#665)
45df3b28e59 is described below

commit 45df3b28e59d54530c609c08c86aad7ac8b5064f
Author: KassieZ <[email protected]>
AuthorDate: Mon May 20 13:49:06 2024 +0700

    [docs](deadlink) Fix Dead Link of EN Version (#665)
---
 blog/tpch.md                                       |  6 +-
 docs/admin-manual/audit-plugin.md                  |  4 +-
 .../cluster-management/elastic-expansion.md        |  2 +-
 docs/admin-manual/cluster-management/fqdn.md       |  4 +-
 docs/admin-manual/config/be-config.md              | 28 +++----
 docs/admin-manual/config/fe-config.md              | 96 +++++++++++-----------
 docs/admin-manual/data-admin/backup.md             |  6 +-
 docs/admin-manual/data-admin/restore.md            |  8 +-
 docs/admin-manual/maint-monitor/disk-capacity.md   |  2 +-
 .../maint-monitor/metadata-operation.md            |  2 +-
 .../maint-monitor/tablet-repair-and-balance.md     |  2 +-
 .../memory-management/be-oom-analysis.md           |  9 +-
 docs/admin-manual/query-admin/sql-interception.md  |  8 +-
 docs/admin-manual/resource-admin/compute-node.md   |  2 +-
 docs/admin-manual/resource-admin/workload-group.md |  2 -
 docs/admin-manual/small-file-mgr.md                |  8 +-
 docs/benchmark/tpcds.md                            |  6 +-
 docs/benchmark/tpch.md                             |  6 +-
 docs/ecosystem/dbt-doris-adapter.md                | 12 +--
 docs/ecosystem/flink-doris-connector.md            |  6 +-
 docs/ecosystem/hive-bitmap-udf.md                  |  8 +-
 docs/ecosystem/hive-hll-udf.md                     |  6 +-
 docs/faq/install-faq.md                            |  4 +-
 docs/faq/sql-faq.md                                |  4 +-
 .../string-functions/{like => fuzzy-match}/like.md |  0
 .../{like => fuzzy-match}/not-like.md              |  0
 .../string-functions/{like => fuzzy-match}/like.md |  0
 .../{like => fuzzy-match}/not-like.md              |  0
 sidebars.json                                      |  6 +-
 versioned_docs/version-1.2/benchmark/tpch.md       |  4 +-
 .../cluster-management/elastic-expansion.md        |  2 +-
 .../admin-manual/config/fe-config-template.md      |  2 +-
 .../admin-manual/maint-monitor/disk-capacity.md    |  2 +-
 .../admin-manual/resource-admin/workload-group.md  |  4 +-
 .../version-2.0/ecosystem/flink-doris-connector.md |  2 +-
 .../version-2.1/admin-manual/audit-plugin.md       |  4 +-
 .../cluster-management/elastic-expansion.md        |  2 +-
 .../admin-manual/cluster-management/fqdn.md        |  4 +-
 .../version-2.1/admin-manual/config/be-config.md   | 28 +++----
 .../version-2.1/admin-manual/config/fe-config.md   | 96 +++++++++++-----------
 .../version-2.1/admin-manual/data-admin/backup.md  |  6 +-
 .../version-2.1/admin-manual/data-admin/restore.md | 10 +--
 .../admin-manual/maint-monitor/disk-capacity.md    |  2 +-
 .../maint-monitor/metadata-operation.md            |  2 +-
 .../maint-monitor/tablet-repair-and-balance.md     |  2 +-
 .../memory-management/be-oom-analysis.md           |  7 +-
 .../admin-manual/query-admin/sql-interception.md   |  8 +-
 .../admin-manual/resource-admin/compute-node.md    |  2 +-
 .../admin-manual/resource-admin/workload-group.md  |  2 -
 .../version-2.1/admin-manual/small-file-mgr.md     |  8 +-
 versioned_docs/version-2.1/benchmark/tpcds.md      |  6 +-
 versioned_docs/version-2.1/benchmark/tpch.md       |  6 +-
 .../version-2.1/ecosystem/dbt-doris-adapter.md     | 12 +--
 .../version-2.1/ecosystem/flink-doris-connector.md |  6 +-
 .../version-2.1/ecosystem/hive-bitmap-udf.md       |  8 +-
 .../version-2.1/ecosystem/hive-hll-udf.md          |  6 +-
 versioned_docs/version-2.1/faq/install-faq.md      |  4 +-
 versioned_docs/version-2.1/faq/sql-faq.md          |  4 +-
 58 files changed, 243 insertions(+), 255 deletions(-)

diff --git a/blog/tpch.md b/blog/tpch.md
index f9f77b89799..8a857a92279 100644
--- a/blog/tpch.md
+++ b/blog/tpch.md
@@ -55,7 +55,7 @@ On 22 queries on the TPC-H standard test data set, we conducted a comparison tes
 - Doris Deployed 3BEs and 1FE
 - Kernel Version: Linux version 5.4.0-96-generic (buildd@lgw01-amd64-051)
 - OS version: CentOS 7.8
-- Doris software version: Apache Doris 1.2.0-rc01、 Apache Doris 1.1.3 、 Apache Doris 0.15.0 RC04
+- Doris software version: Apache Doris 1.2.0-rc01, Apache Doris 1.1.3 , Apache Doris 0.15.0 RC04
 - JDK: openjdk version "11.0.14" 2022-01-18
 
 ## 3. Test Data Volume
@@ -75,7 +75,7 @@ The TPCH 100G data generated by the simulation of the entire test are respective
 
 ## 4. Test SQL
 
-TPCH 22 test query statements : [TPCH-Query-SQL](https://github.com/apache/incubator-doris/tree/master/tools/tpch-tools/queries)
+TPCH 22 test query statements : [TPCH-Query-SQL](https://github.com/apache/incubator-doris/tree/master/tools/tpch-tools/queries)
 
 **Notice:**
 
@@ -128,7 +128,7 @@ Here we use Apache Doris 1.2.0-rc01, Apache Doris 1.1.3 and Apache Doris 0.15.0
 
 ## 6. Environmental Preparation
 
-Please refer to the [official document](../install/install-deploy.md) to install and deploy Doris to obtain a normal running Doris cluster (at least 1 FE 1 BE, 1 FE 3 BE is recommended).
+Please refer to the [official document](https://doris.apache.org/docs/install/cluster-deployment/standard-deployment/) to install and deploy Doris to obtain a normal running Doris cluster (at least 1 FE 1 BE, 1 FE 3 BE is recommended).
 
 ## 7. Data Preparation
 
diff --git a/docs/admin-manual/audit-plugin.md b/docs/admin-manual/audit-plugin.md
index e2997bb5a94..e70edb1dc87 100644
--- a/docs/admin-manual/audit-plugin.md
+++ b/docs/admin-manual/audit-plugin.md
@@ -90,7 +90,7 @@ The audit log plug-in framework is enabled by default in Doris and is controlled
     * plugin.conf: plugin configuration file.
 
 You can place this file on an http download server or copy(or unzip) it to the specified directory of all FEs. Here we use the latter.  
-The installation of this plugin can be found in [INSTALL](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN.md)  
+The installation of this plugin can be found in [INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)  
 After executing install, the AuditLoader directory will be automatically generated.
 
 3. Modify plugin.conf
@@ -211,7 +211,7 @@ Install the audit loader plugin:
 INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
 ```
 
-Detailed command reference: [INSTALL-PLUGIN.md](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN)
+Detailed command reference: [INSTALL-PLUGIN.md](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
 
 After successful installation, you can see the installed plug-ins through `SHOW PLUGINS`, and the status is `INSTALLED`.
 
diff --git a/docs/admin-manual/cluster-management/elastic-expansion.md b/docs/admin-manual/cluster-management/elastic-expansion.md
index 0a5f8ebc90d..86f14f2c191 100644
--- a/docs/admin-manual/cluster-management/elastic-expansion.md
+++ b/docs/admin-manual/cluster-management/elastic-expansion.md
@@ -106,7 +106,7 @@ You can also view the BE node through the front-end page connection: ``http://fe
 
 All of the above methods require Doris's root user rights.
 
-The expansion and scaling process of BE nodes does not affect the current system operation and the tasks being performed, and does not affect the performance of the current system. Data balancing is done automatically. Depending on the amount of data available in the cluster, the cluster will be restored to load balancing in a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../cluster-management/elastic-expansion.md).
+The expansion and scaling process of BE nodes does not affect the current system operation and the tasks being performed, and does not affect the performance of the current system. Data balancing is done automatically. Depending on the amount of data available in the cluster, the cluster will be restored to load balancing in a few hours to a day. For cluster load, see the [Tablet Load Balancing Document](../cluster-management/load-balancing).
 
 ### Add BE nodes
 
diff --git a/docs/admin-manual/cluster-management/fqdn.md b/docs/admin-manual/cluster-management/fqdn.md
index 798131d294b..345a6173167 100644
--- a/docs/admin-manual/cluster-management/fqdn.md
+++ b/docs/admin-manual/cluster-management/fqdn.md
@@ -56,14 +56,14 @@ After Doris supports FQDN, communication between nodes is entirely based on FQDN
    ```
 4. Verification: It can 'ping fe2' on FE1, and can resolve the correct IP address and ping it, indicating that the network environment is available.
 5. fe.conf settings for each FE node ` enable_ fqdn_ mode = true`.
-6. Refer to[Standard deployment](../../install/standard-deployment.md)
+6. Refer to[Standard deployment](../../install/cluster-deployment/standard-deployment)
 7. Select several machines to deploy broker on six machines as needed, and execute `ALTER SYSTEM ADD BROKER broker_name "fe1:8000","be1:8000",...;`.
 
 ### Deployment of Doris for K8S
 
 After an unexpected restart of the Pod, K8s cannot guarantee that the Pod's IP will not change, but it can ensure that the domain name remains unchanged. Based on this feature, when Doris enables FQDN, it can ensure that the Pod can still provide services normally after an unexpected restart.
 
-Please refer to the method for deploying Doris in K8s[Kubernetes Deployment](../../install/k8s-deploy/operator-deploy.md)
+Please refer to the method for deploying Doris in K8s[Kubernetes Deployment](../../install/cluster-deployment/k8s-deploy/install-operator)
 
 ### Server change IP
 
diff --git a/docs/admin-manual/config/be-config.md b/docs/admin-manual/config/be-config.md
index 52fe5abc030..b5cb83898f9 100644
--- a/docs/admin-manual/config/be-config.md
+++ b/docs/admin-manual/config/be-config.md
@@ -158,8 +158,8 @@ There are two ways to configure BE configuration items:
 
  eg.2: `storage_root_path=/home/disk1/doris,medium:hdd;/home/disk2/doris,medium:ssd`
 
-    - 1./home/disk1/doris,medium:hdd,indicates that the storage medium is HDD;
-    - 2./home/disk2/doris,medium:ssd,indicates that the storage medium is SSD;
+    - 1./home/disk1/doris,medium:hdd, indicates that the storage medium is HDD;
+    - 2./home/disk2/doris,medium:ssd, indicates that the storage medium is SSD;
 
 * Default value: ${DORIS_HOME}/storage
 
@@ -346,7 +346,7 @@ There are two ways to configure BE configuration items:
 #### `doris_max_scan_key_num`
 
 * Type: int
-* Description: Used to limit the maximum number of scan keys that a scan node can split in a query request. When a conditional query request reaches the scan node, the scan node will try to split the conditions related to the key column in the query condition into multiple scan key ranges. After that, these scan key ranges will be assigned to multiple scanner threads for data scanning. A larger value usually means that more scanner threads can be used to increase the parallelism of the s [...]
+* Description: Used to limit the maximum number of scan keys that a scan node can split in a query request. When a conditional query request reaches the scan node, the scan node will try to split the conditions related to the key column in the query condition into multiple scan key ranges. After that, these scan key ranges will be assigned to multiple scanner threads for data scanning. A larger value usually means that more scanner threads can be used to increase the parallelism of the s [...]
  - When the concurrency cannot be improved in high concurrency scenarios, try to reduce this value and observe the impact.
 * Default value: 48
 
@@ -400,7 +400,7 @@ There are two ways to configure BE configuration items:
 #### `max_pushdown_conditions_per_column`
 
 * Type: int
-* Description: Used to limit the maximum number of conditions that can be pushed down to the storage engine for a single column in a query request. During the execution of the query plan, the filter conditions on some columns can be pushed down to the storage engine, so that the index information in the storage engine can be used for data filtering, reducing the amount of data that needs to be scanned by the query. Such as equivalent conditions, conditions in IN predicates, etc. In most  [...]
+* Description: Used to limit the maximum number of conditions that can be pushed down to the storage engine for a single column in a query request. During the execution of the query plan, the filter conditions on some columns can be pushed down to the storage engine, so that the index information in the storage engine can be used for data filtering, reducing the amount of data that needs to be scanned by the query. Such as equivalent conditions, conditions in IN predicates, etc. In most  [...]
 * Default value: 1024
 
 * Example
@@ -1066,18 +1066,18 @@ BaseCompaction:546859:
 #### `generate_cache_cleaner_task_interval_sec`
 
 * Type:int64
-* Description:Cleaning interval of cache files, in seconds
-* Default:43200(12 hours)
+* Description: Cleaning interval of cache files, in seconds
+* Default: 43200 (12 hours)
 
 #### `path_gc_check`
 
 * Type:bool
-* Description:Whether to enable the recycle scan data thread check
+* Description: Whether to enable the recycle scan data thread check
 * Default:true
 
 #### `path_gc_check_interval_second`
 
-* Description:Recycle scan data thread check interval
+* Description: Recycle scan data thread check interval
 * Default:86400 (s)
 
 #### `path_gc_check_step`
@@ -1094,7 +1094,7 @@ BaseCompaction:546859:
 
 #### `scan_context_gc_interval_min`
 
-* Description:This configuration is used for the context gc thread scheduling cycle. Note: The unit is minutes, and the default is 5 minutes
+* Description: This configuration is used for the context gc thread scheduling cycle. Note: The unit is minutes, and the default is 5 minutes
 * Default:5
 
 ### Storage
@@ -1114,7 +1114,7 @@ BaseCompaction:546859:
 #### `disk_stat_monitor_interval`
 
 * Description: Disk status check interval
-* Default value: 5(s)
+* Default value: 5 (s)
 
 #### `max_free_io_buffers`
 
@@ -1165,7 +1165,7 @@ BaseCompaction:546859:
 #### `storage_flood_stage_usage_percent`
 
 * Description: The storage_flood_stage_usage_percent and storage_flood_stage_left_capacity_bytes configurations limit the maximum usage of the capacity of the data directory.
-* Default value: 90 (90%)
+* Default value: 90 (90%)
 
 #### `storage_medium_migrate_count`
 
@@ -1245,7 +1245,7 @@ BaseCompaction:546859:
 
 #### `tablet_meta_checkpoint_min_interval_secs`
 
-* Description: TabletMeta Checkpoint线程轮询的时间间隔
+* Description: TabletMeta Checkpoint 线程轮询的时间间隔
 * Default value: 600 (s)
 
 #### `tablet_meta_checkpoint_min_new_rowsets_num`
@@ -1422,7 +1422,7 @@ Indicates how many tablets failed to load in the data directory. At the same tim
 #### `max_download_speed_kbps`
 
 * Description: Maximum download speed limit
-* Default value: 50000 (kb/s)
+* Default value: 50000 (kb/s)
 
 #### `download_low_speed_time`
 
@@ -1493,7 +1493,7 @@ Indicates how many tablets failed to load in the data directory. At the same tim
 
 #### `group_commit_memory_rows_for_max_filter_ratio`
 
-* Description: The `max_filter_ratio` limit can only work if the total rows of `group commit` is less than this value. See [Group Commit](../../data-operate/import/import-way/group-commit-manual.md) for more details
+* Description: The `max_filter_ratio` limit can only work if the total rows of `group commit` is less than this value. See [Group Commit](../../data-operate/import/group-commit-manual.md) for more details
 * Default: 10000
 
 #### `default_tzfiles_path`
diff --git a/docs/admin-manual/config/fe-config.md b/docs/admin-manual/config/fe-config.md
index c7127cba8c2..4abb1bfd04b 100644
--- a/docs/admin-manual/config/fe-config.md
+++ b/docs/admin-manual/config/fe-config.md
@@ -48,7 +48,7 @@ There are two ways to view the configuration items of FE:
 
 2. View by command
 
-    After the FE is started, you can view the configuration items of the FE in the MySQL client with the following command,Concrete language law reference [SHOW-CONFIG](../../sql-manual/sql-reference/Database-Administration-Statements/SHOW-CONFIG.md):
+    After the FE is started, you can view the configuration items of the FE in the MySQL client with the following command,Concrete language law reference [SHOW-CONFIG](../../sql-manual/sql-statements/Database-Administration-Statements/SHOW-CONFIG.md):
 
     `SHOW FRONTEND CONFIG;`
 
@@ -85,7 +85,7 @@ There are two ways to configure FE configuration items:
 
 3. Dynamic configuration via HTTP protocol
 
-    For details, please refer to [Set Config Action](../http-actions/fe/set-config-action.md)
+    For details, please refer to [Set Config Action](../fe/set-config-action)
 
    This method can also persist the modified configuration items. The configuration items will be persisted in the `fe_custom.conf` file and will still take effect after FE is restarted.
 
@@ -177,13 +177,13 @@ Num of thread to handle grpc events in grpc_threadmgr.
 
 Default:10  (s)
 
-The replica ack timeout when writing to bdbje , When writing some relatively large logs, the ack time may time out, resulting in log writing failure.  At this time, you can increase this value appropriately.
+The replica ack timeout when writing to bdbje , When writing some relatively large logs, the ack time may time out, resulting in log writing failure.  At this time, you can increase this value appropriately.
 
 #### `bdbje_lock_timeout_second`
 
 Default:5
 
-The lock timeout of bdbje operation, If there are many LockTimeoutException in FE WARN log, you can try to increase this value
+The lock timeout of bdbje operation, If there are many LockTimeoutException in FE WARN log, you can try to increase this value
 
 #### `bdbje_heartbeat_timeout_second`
 
@@ -195,7 +195,7 @@ The heartbeat timeout of bdbje between master and follower. the default is 30 se
 
 Default:SIMPLE_MAJORITY
 
-OPTION:ALL, NONE, SIMPLE_MAJORITY
+OPTION: ALL, NONE, SIMPLE_MAJORITY
 
 Replica ack policy of bdbje. more info, see: http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/Durability.ReplicaAckPolicy.html
 
@@ -236,7 +236,7 @@ This is helpful when you try to stop the Master FE for a relatively long time fo
 
 #### `meta_delay_toleration_second`
 
-Default:300 (5 min)
+Default: 300 (5 min)
 
 Non-master FE will stop offering service  if meta data delay gap exceeds *meta_delay_toleration_second*
 
@@ -324,7 +324,7 @@ Default:true
 
 IsMutable:true
 
-The multi cluster feature will be deprecated in version 0.12 ,set this config to true will disable all operations related to cluster feature, include:
+The multi cluster feature will be deprecated in version 0.12 , set this config to true will disable all operations related to cluster feature, include:
 
 1. create/drop cluster
 2. add free backend/add backend to cluster/decommission cluster balance
@@ -416,7 +416,7 @@ Default value: 0.0.0.0
 
 Default:none
 
-Declare a selection strategy for those servers have many ips.  Note that there should at most one ip match this list.  this is a list in semicolon-delimited format, in CIDR notation, e.g. 10.10.10.0/24 , If no ip match this rule, will choose one randomly.
+Declare a selection strategy for those servers have many ips.  Note that there should at most one ip match this list.  this is a list in semicolon-delimited format, in CIDR notation, e.g. 10.10.10.0/24 , If no ip match this rule, will choose one randomly.
 
 #### `http_port`
 
@@ -481,7 +481,7 @@ The thrift server max worker threads
 
 Default:1024
 
-The backlog_num for thrift server , When you enlarge this backlog_num, you should ensure it's value larger than the linux /proc/sys/net/core/somaxconn config
+The backlog_num for thrift server , When you enlarge this backlog_num, you should ensure it's value larger than the linux /proc/sys/net/core/somaxconn config
 
 #### `thrift_client_timeout_ms`
 
@@ -557,7 +557,7 @@ MasterOnly:true
 
 #### `max_backend_down_time_second`
 
-Default:3600  (1 hour)
+Default: 3600  (1 hour)
 
 IsMutable:true
 
@@ -637,7 +637,7 @@ Default:30000  (ms)
 
 IsMutable:true
 
-The timeout of executing async remote fragment.  In normal case, the async remote fragment will be executed in a short time. If system are under high load condition,try to set this timeout longer.
+The timeout of executing async remote fragment.  In normal case, the async remote fragment will be executed in a short time. If system are under high load condition, try to set this timeout longer.
 
 #### `auth_token`
 
@@ -647,7 +647,7 @@ Cluster token used for internal authentication.
 
 #### `enable_http_server_v2`
 
-Default:The default is true after the official 0.14.0 version is released, and the default is false before
+Default: The default is true after the official 0.14.0 version is released, and the default is false before
 
 HTTP Server V2 is implemented by SpringBoot. It uses an architecture that separates the front and back ends. Only when HTTPv2 is enabled can users use the new front-end UI interface.
 
@@ -1009,7 +1009,7 @@ Default:1
 
 IsMutable:true
 
-Colocote join PlanFragment instance的memory_limit = exec_mem_limit / min (query_colocate_join_memory_limit_penalty_factor, instance_num)
+Colocote join PlanFragment instance 的 memory_limit = exec_mem_limit / min (query_colocate_join_memory_limit_penalty_factor, instance_num)
 
 #### `rewrite_count_distinct_to_bitmap_hll`
 
@@ -1115,7 +1115,7 @@ IsMutable:true
 
 MasterOnly:true
 
-Max number of load jobs, include PENDING、ETL、LOADING、QUORUM_FINISHED. If exceed this number, load job is not allowed to be submitted
+Max number of load jobs, include PENDING, ETL, LOADING, QUORUM_FINISHED. If exceed this number, load job is not allowed to be submitted
 
 #### `db_used_data_quota_update_interval_secs`
 
@@ -1257,7 +1257,7 @@ IsMutable:true
 
 MasterOnly:true
 
-Default number of waiting jobs for routine load and version 2 of load , This is a desired number.  In some situation, such as switch the master, the current number is maybe more than desired_max_waiting_jobs.
+Default number of waiting jobs for routine load and version 2 of load , This is a desired number.  In some situation, such as switch the master, the current number is maybe more than desired_max_waiting_jobs.
 
 #### `disable_hadoop_load`
 
@@ -1345,7 +1345,7 @@ Min stream load timeout applicable to all type of load
 
 #### `max_stream_load_timeout_second`
 
-Default:259200 (3 day)
+Default: 259200 (3 day)
 
 IsMutable:true
 
@@ -1355,7 +1355,7 @@ This configuration is specifically used to limit timeout setting for stream load
 
 #### `max_load_timeout_second`
 
-Default:259200 (3 day)
+Default: 259200 (3 day)
 
 IsMutable:true
 
@@ -1365,7 +1365,7 @@ Max load timeout applicable to all type of load except for stream load
 
 #### `stream_load_default_timeout_second`
 
-Default:86400 * 3 (3 day)
+Default: 86400 * 3 (3 day)
 
 IsMutable:true
 
@@ -1396,7 +1396,7 @@ When HTTP header `memtable_on_sink_node` is not set.
 
 #### `insert_load_default_timeout_second`
 
-Default:3600(1 hour)
+Default: 3600 (1 hour)
 
 IsMutable:true
 
@@ -1406,7 +1406,7 @@ Default insert load timeout
 
 #### `mini_load_default_timeout_second`
 
-Default:3600(1 hour)
+Default: 3600 (1 hour)
 
 IsMutable:true
 
@@ -1416,7 +1416,7 @@ Default non-streaming mini load timeout
 
 #### `broker_load_default_timeout_second`
 
-Default:14400(4 hour)
+Default: 14400 (4 hour)
 
 IsMutable:true
 
@@ -1426,7 +1426,7 @@ Default broker load timeout
 
 #### `spark_load_default_timeout_second`
 
-Default:86400  (1 day)
+Default: 86400  (1 day)
 
 IsMutable:true
 
@@ -1436,7 +1436,7 @@ Default spark load timeout
 
 #### `hadoop_load_default_timeout_second`
 
-Default:86400 * 3   (3 day)
+Default: 86400 * 3   (3 day)
 
 IsMutable:true
 
@@ -1530,7 +1530,7 @@ In the case of high concurrent writes, if there is a large backlog of jobs and c
 
 #### `streaming_label_keep_max_second`
 
-Default:43200 (12 hour)
+Default: 43200 (12 hour)
 
 IsMutable:true
 
@@ -1540,7 +1540,7 @@ For some high-frequency load work, such as: INSERT, STREAMING LOAD, ROUTINE_LOAD
 
 #### `label_clean_interval_second`
 
-Default:1 * 3600  (1 hour)
+Default:1 * 3600  (1 hour)
 
 Load label cleaner will run every *label_clean_interval_second* to clean the outdated jobs.
 
@@ -1564,7 +1564,7 @@ Whether it is a configuration item unique to the Master FE node: true
 
 Data synchronization job running status check.
 
-Default: 10(s)
+Default: 10 (s)
 
 #### `max_sync_task_threads_num`
 
@@ -1620,7 +1620,7 @@ Number of tablets per export query plan
 
 #### `export_task_default_timeout_second`
 
-Default:2 * 3600   (2 hour)
+Default: 2 * 3600   (2 hour)
 
 IsMutable:true
 
@@ -1654,7 +1654,7 @@ The max size of one sys log and audit log
 
 #### `sys_log_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/log"
+Default: DorisFE.DORIS_HOME_DIR + "/log"
 
 sys_log_dir:
 
@@ -1667,7 +1667,7 @@ fe.warn.log  all WARNING and ERROR log of FE process.
 
 Default:INFO
 
-log level:INFO, WARN, ERROR, FATAL
+log level: INFO, WARN, ERROR, FATAL
 
 #### `sys_log_roll_num`
 
@@ -1741,7 +1741,7 @@ Slow query contains all queries which cost exceed *qe_slow_log_ms*
 
 #### `qe_slow_log_ms`
 
-Default:5000 (5 seconds)
+Default: 5000 (5 seconds)
 
 If the response time of a query exceed this threshold, it will be recorded in audit log as slow_query.
 
@@ -1749,8 +1749,8 @@ If the response time of a query exceed this threshold, it will be recorded in au
 
 Default:DAY
 
-DAY:  logsuffix is :yyyyMMdd
-HOUR: logsuffix is :yyyyMMddHH
+DAY:  logsuffix is : yyyyMMdd
+HOUR: logsuffix is : yyyyMMddHH
 
 #### `audit_log_delete_age`
 
@@ -1838,7 +1838,7 @@ Set to true so that Doris will automatically use blank replicas to fill tablets
 
 #### `min_clone_task_timeout_sec` `And max_clone_task_timeout_sec`
 
-Default:Minimum 3 minutes, maximum two hours
+Default: Minimum 3 minutes, maximum two hours
 
 IsMutable:true
 
@@ -1876,7 +1876,7 @@ IsMutable:true
 
 MasterOnly:true
 
-Valid only if use PartitionRebalancer,
+Valid only if use PartitionRebalancer,
 
 #### `partition_rebalance_move_expire_after_access`
 
@@ -1938,7 +1938,7 @@ if set to true, TabletScheduler will not do disk balance.
 
 #### `balance_load_score_threshold`
 
-Default:0.1 (10%)
+Default: 0.1 (10%)
 
 IsMutable:true
 
@@ -1948,7 +1948,7 @@ the threshold of cluster balance score, if a backend's load score is 10% lower t
 
 #### `capacity_used_percent_high_water`
 
-Default:0.75  (75%)
+Default: 0.75  (75%)
 
 IsMutable:true
 
@@ -1958,7 +1958,7 @@ The high water of disk capacity used percent. This is used for calculating load
 
 #### `clone_distribution_balance_threshold`
 
-Default:0.2
+Default: 0.2
 
 IsMutable:true
 
@@ -1968,7 +1968,7 @@ Balance threshold of num of replicas in Backends.
 
 #### `clone_capacity_balance_threshold`
 
-Default:0.2
+Default: 0.2
 
 IsMutable:true
 
@@ -2179,7 +2179,7 @@ MasterOnly:true
 
 #### `catalog_trash_expire_second`
 
-Default:86400L (1 day)
+Default: 86400L (1 day)
 
 IsMutable:true
 
@@ -2212,7 +2212,7 @@ Is it a configuration item unique to the Master FE node: true
 
 #### `check_consistency_default_timeout_second`
 
-Default:600 (10 minutes)
+Default: 600 (10 minutes)
 
 IsMutable:true
 
@@ -2294,7 +2294,7 @@ Maximal timeout for delete job, in seconds.
 
 #### `alter_table_timeout_second`
 
-Default:86400 * 30(1 month)
+Default: 86400 * 30 (1 month)
 
 IsMutable:true
 
@@ -2462,9 +2462,9 @@ Default:{
 
 #### `yarn_config_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/lib/yarn-config"
+Default: DorisFE.DORIS_HOME_DIR + "/lib/yarn-config"
 
-Default yarn config file directory ,Each time before running the yarn command, we need to check that the  config file exists under this path, and if not, create them.
+Default yarn config file directory , Each time before running the yarn command, we need to check that the  config file exists under this path, and if not, create them.
 
 #### `yarn_client_path`
 
@@ -2492,7 +2492,7 @@ Default spark home dir
 
 #### `spark_dpp_version`
 
-Default:1.0.0
+Default: 1.0.0
 
 Default spark dpp version
 
@@ -2500,13 +2500,13 @@ Default spark dpp version
 
 #### `tmp_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/temp_dir"
+Default: DorisFE.DORIS_HOME_DIR + "/temp_dir"
 
 temp dir is used to save intermediate results of some process, such as backup and restore process.  file in this dir will be cleaned after these process is finished.
 
 #### `custom_config_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/conf"
+Default: DorisFE.DORIS_HOME_DIR + "/conf"
 
 Custom configuration file directory
 
@@ -2579,7 +2579,7 @@ This threshold is to avoid piling up too many report task in FE, which may cause
 
 #### `backup_job_default_timeout_ms`
 
-Default:86400 * 1000  (1 day)
+Default: 86400 * 1000  (1 day)
 
 IsMutable:true
 
@@ -2659,7 +2659,7 @@ IsMutable:true
 
 MasterOnly:false
 
-Whether to push the filter conditions with functions down to MYSQL, when execute query of ODBC、JDBC external tables
+Whether to push the filter conditions with functions down to MYSQL, when execute query of ODBC, JDBC external tables
 
 #### `jdbc_drivers_dir`
 
diff --git a/docs/admin-manual/data-admin/backup.md b/docs/admin-manual/data-admin/backup.md
index 91e0f4dfd25..881b23c1c83 100644
--- a/docs/admin-manual/data-admin/backup.md
+++ b/docs/admin-manual/data-admin/backup.md
@@ -160,7 +160,7 @@ ALTER TABLE tbl1 SET ("dynamic_partition.enable"="true")
    1 row in set (0.15 sec)
    ```
 
-For the detailed usage of BACKUP, please refer to [here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md).
+For the detailed usage of BACKUP, please refer to [here](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/BACKUP.md).
 
 ## Best Practices
 
@@ -192,7 +192,7 @@ It is recommended to import the new and old clusters in parallel for a period of
 
    1. CREATE REPOSITORY
 
-      Create a remote repository path for backup or restore. This command needs to use the Broker process to access the remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../advanced/broker.md), or you can directly back up to support through the S3 protocol For the remote storage of AWS S3 protocol, or directly back up to HDFS, please refer to [Create Remote Warehouse Documentation](../../sql-manual/sql-reference [...]
+      Create a remote repository path for backup or restore. This command needs to use the Broker process to access the remote storage. Different brokers need to provide different parameters. For details, please refer to [Broker documentation](../../data-operate/import/broker-load-manual), or you can directly back up to support through the S3 protocol For the remote storage of AWS S3 protocol, or directly back up to HDFS, please refer to [Create Remote Warehouse Documentation](../../sql- [...]
 
    2. BACKUP
 
@@ -247,4 +247,4 @@ It is recommended to import the new and old clusters in parallel for a period of
 
 ## More Help
 
- For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md) command manual, You can also type `HELP BACKUP` on the MySql client command line for more help.
+ For more detailed syntax and best practices used by BACKUP, please refer to the [BACKUP](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/BACKUP.md) command manual, You can also type `HELP BACKUP` on the MySql client command line for more help.
diff --git a/docs/admin-manual/data-admin/restore.md b/docs/admin-manual/data-admin/restore.md
index f47a2ebb256..6e5cf192630 100644
--- a/docs/admin-manual/data-admin/restore.md
+++ b/docs/admin-manual/data-admin/restore.md
@@ -126,7 +126,7 @@ The restore operation needs to specify an existing backup in the remote warehous
    1 row in set (0.01 sec)
    ```
 
-For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md).
+For detailed usage of RESTORE, please refer to [here](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/RESTORE.md).
 
 ## Related Commands
 
@@ -182,12 +182,12 @@ The commands related to the backup and restore function are as follows. For the
 
 1. Restore Report An Error:[20181: invalid md5 of downloaded file: /data/doris.HDD/snapshot/20220607095111.862.86400/19962/668322732/19962.hdr, expected: f05b63cca5533ea0466f62a9897289b5, get: d41d8cd98f00b204e9800998ecf8427e]
 
-   If the number of copies of the table backed up and restored is inconsistent, you need to specify the number of copies when executing the restore command. For specific commands, please refer to [RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE) command manual
+   If the number of copies of the table backed up and restored is inconsistent, you need to specify the number of copies when executing the restore command. For specific commands, please refer to [RESTORE](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/RESTORE) command manual
 
 2. Restore Report An Error:[COMMON_ERROR, msg: Could not set meta version to 97 since it is lower than minimum required version 100]
 
-   Backup and restore are not caused by the same version, use the specified meta_version to read the metadata of the previous backup. Note that this parameter is used as a temporary solution and is only used to restore the data backed up by the old version of Doris. The latest version of the backup data already contains the meta version, so there is no need to specify it. For the specific solution to the above error, specify meta_version = 100. For specific commands, please refer to [RES [...]
+   Backup and restore are not caused by the same version, use the specified meta_version to read the metadata of the previous backup. Note that this parameter is used as a temporary solution and is only used to restore the data backed up by the old version of Doris. The latest version of the backup data already contains the meta version, so there is no need to specify it. For the specific solution to the above error, specify meta_version = 100. For specific commands, please refer to [RES [...]
 
 ## More Help
 
-For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE) command manual, You can also type `HELP RESTORE` on the MySql client command line for more help.
+For more detailed syntax and best practices used by RESTORE, please refer to the [RESTORE](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/RESTORE) command manual, You can also type `HELP RESTORE` on the MySql client command line for more help.
diff --git a/docs/admin-manual/maint-monitor/disk-capacity.md b/docs/admin-manual/maint-monitor/disk-capacity.md
index f211faf1f6a..0703f95f29b 100644
--- a/docs/admin-manual/maint-monitor/disk-capacity.md
+++ b/docs/admin-manual/maint-monitor/disk-capacity.md
@@ -162,6 +162,6 @@ When the disk capacity is higher than High Watermark or even Flood Stage, many o
 
         ```rm -rf data/0/12345/```
 
-    * Delete tablet metadata refer to [Tablet metadata management tool](tablet-meta-tool.md)
+    * Delete tablet metadata refer to [Tablet metadata management tool](./tablet-meta-tool.md)
 
        ```./lib/meta_tool --operation=delete_header --root_path=/path/to/root_path --tablet_id=12345 --schema_hash= 352781111```
diff --git a/docs/admin-manual/maint-monitor/metadata-operation.md b/docs/admin-manual/maint-monitor/metadata-operation.md
index f98e5c9bd56..f92e2786b12 100644
--- a/docs/admin-manual/maint-monitor/metadata-operation.md
+++ b/docs/admin-manual/maint-monitor/metadata-operation.md
@@ -357,7 +357,7 @@ The third level can display the value information of the specified key.
 
 ## Best Practices
 
-The deployment recommendation of FE is described in the Installation and [Deployment Document](../../install/standard-deployment.md). Here are some supplements.
+The deployment recommendation of FE is described in the Installation and [Deployment Document](../../install/cluster-deployment/standard-deployment.md). Here are some supplements.
 
 * **If you don't know the operation logic of FE metadata very well, or you don't have enough experience in the operation and maintenance of FE metadata, we strongly recommend that only one FOLLOWER-type FE be deployed as MASTER in practice, and the other FEs are OBSERVER, which can reduce many complex operation and maintenance problems.** Don't worry too much about the failure of MASTER single point to write metadata. First, if you configure it properly, FE as a java process is very diff [...]
 
diff --git a/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md b/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
index 62010673141..cdcb8380d3f 100644
--- a/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
+++ b/docs/admin-manual/maint-monitor/tablet-repair-and-balance.md
@@ -28,7 +28,7 @@ under the License.
 
 Beginning with version 0.9.0, Doris introduced an optimized replica management strategy and supported a richer replica status viewing tool. This document focuses on Doris data replica balancing, repair scheduling strategies, and replica management operations and maintenance methods. Help users to more easily master and manage the replica status in the cluster.
 
-> Repairing and balancing copies of tables with Colocation attributes can be referred to [HERE](../../query-acceleration/join-optimization/colocation-join.md)
+> Repairing and balancing copies of tables with Colocation attributes can be referred to [HERE](../../query/join-optimization/colocation-join.md)
 
 ## Noun Interpretation
 
diff --git a/docs/admin-manual/memory-management/be-oom-analysis.md b/docs/admin-manual/memory-management/be-oom-analysis.md
index 165b60259f9..622234dcb63 100644
--- a/docs/admin-manual/memory-management/be-oom-analysis.md
+++ b/docs/admin-manual/memory-management/be-oom-analysis.md
@@ -24,14 +24,12 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# BE OOM Analysis
 
-<version since="1.2.0">
 
 Ideally, in [Memory Limit Exceeded Analysis](./memory-limit-exceeded-analysis.md), we regularly detect the remaining available memory of the operating system and respond in time when the memory is insufficient , such as triggering the memory GC to release the cache or cancel the memory overrun query, but because refreshing process memory statistics and memory GC both have a certain lag, and it is difficult for us to completely catch all large memory applications, there are still OOM risk.
 
 ## Solution
-Refer to [BE Configuration Items](../../../admin-manual/config/be-config.md) to reduce `mem_limit` and increase `max_sys_mem_available_low_water_mark_bytes` in `be.conf`.
+Refer to [BE Configuration Items](../../admin-manual/config/be-config) to reduce `mem_limit` and increase `max_sys_mem_available_low_water_mark_bytes` in `be.conf`.
 
 ## Memory analysis
 If you want to further understand the memory usage location of the BE process before OOM and reduce the memory usage of the process, you can refer to the following steps to analyze.
@@ -67,7 +65,7 @@ Memory Tracker Summary:
     MemTrackerLimiter Label=DeleteBitmap AggCache, Type=global, Limit=-1.00 B(-1 B), Used=0(0 B), Peak=0(0 B)
 ```
 
-3. When the end of be/log/be.INFO before OOM contains the system memory exceeded log, refer to [Memory Limit Exceeded Analysis](./memory-limit-exceeded-analysis.md). The log analysis method in md) looks at the memory usage of each category of the process. If the current `type=query` memory usage is high, if the query before OOM is known, continue to step 4, otherwise continue to step 5; if the current `type=load` memory usage is more, continue to step 6, if the current `type= Global `mem [...]
+3. When the end of be/log/be.INFO before OOM contains the system memory exceeded log, refer to [Memory Limit Exceeded Analysis](./memory-limit-exceeded-analysis.md). The log analysis method in md looks at the memory usage of each category of the process. If the current `type=query` memory usage is high, if the query before OOM is known, continue to step 4, otherwise continue to step 5; if the current `type=load` memory usage is more, continue to step 6, if the current `type= Global `memo [...]
 
 4. `type=query` query memory usage is high, and the query before OOM is known, such as test cluster or scheduled task, restart the BE node, refer to [Memory Tracker](./memory-tracker.md) View real-time memory tracker statistics, retry the query after `set global enable_profile=true`, observe the memory usage location of specific operators, confirm whether the query memory usage is reasonable, and further consider optimizing SQL memory usage, such as adjusting the join order .
 
@@ -75,10 +73,9 @@ Memory Tracker Summary:
 
 6. `type=load` imports a lot of memory.
 
-7. When the `type=global` memory is used for a long time, continue to check the `type=global` detailed statistics in the second half of the `Memory Tracker Summary` log. When DataPageCache, IndexPageCache, SegmentCache, ChunkAllocator, LastSuccessChannelCache, etc. use a lot of memory, refer to [BE Configuration Item](../../../admin-manual/config/be-config.md) to consider modifying the size of the cache; when Orphan memory usage is too large, Continue the analysis as follows.
+7. When the `type=global` memory is used for a long time, continue to check the `type=global` detailed statistics in the second half of the `Memory Tracker Summary` log. When DataPageCache, IndexPageCache, SegmentCache, ChunkAllocator, LastSuccessChannelCache, etc. use a lot of memory, refer to [BE Configuration Item](../../admin-manual/config/be-config.md) to consider modifying the size of the cache; when Orphan memory usage is too large, Continue the analysis as follows.
   - If the sum of the tracker statistics of `Parent Label=Orphan` only accounts for a small part of the Orphan memory, it means that there is currently a large amount of memory that has no accurate statistics, such as the memory of the brpc process. At this time, you can consider using the heap profile [Memory Tracker]( https://doris.apache.org/community/developer-guide/debug-tool) to further analyze memory locations.
   - If the tracker statistics of `Parent Label=Orphan` account for most of Orphan’s memory, when `Label=TabletManager` uses a lot of memory, further check the number of tablets in the cluster. If there are too many tablets, delete them and they will not be used table or data; when `Label=StorageEngine` uses too much memory, further check the number of segment files in the cluster, and consider manually triggering compaction if the number of segment files is too large;
 
 8. If `be/log/be.INFO` does not print the `Memory Tracker Summary` log before OOM, it means that BE did not detect the memory limit in time, observe Grafana memory monitoring to confirm the memory growth trend of BE before OOM, if OOM is reproducible, consider adding `memory_debug=true` in `be.conf`, after restarting the cluster, the cluster memory statistics will be printed every second, observe the last `Memory Tracker Summary` log before OOM, and continue to step 3 for analysis;
 
-</version>
diff --git a/docs/admin-manual/query-admin/sql-interception.md b/docs/admin-manual/query-admin/sql-interception.md
index 0ce33e744de..515cf913b4e 100644
--- a/docs/admin-manual/query-admin/sql-interception.md
+++ b/docs/admin-manual/query-admin/sql-interception.md
@@ -37,7 +37,7 @@ Support SQL block rule by user level:
 ## Rule
 
 SQL block rule CRUD
-- create SQL block rule,For more creation syntax see[CREATE SQL BLOCK RULE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.md)
+- create SQL block rule,For more creation syntax see [CREATE SQL BLOCK RULE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.md)
     - sql: Regex pattern, Special characters need to be translated, "NULL" by default
     - sqlHash: Sql hash value, Used to match exactly, We print it in fe.audit.log, This parameter is the only choice between sql and sql, "NULL" by default
     - partition_num: Max number of partitions will be scanned by a scan node, 0L by default
@@ -70,12 +70,12 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = sql match regex sql block rule:
 CREATE SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = "30", "cardinality"="10000000000","global"="false","enable"="true")
 ```
 
-- show configured SQL block rules, or show all rules if you do not specify a rule name,Please see the specific grammar [SHOW SQL BLOCK RULE](../../sql-manual/sql-reference/Show-Statements/SHOW-SQL-BLOCK-RULE.md)
+- show configured SQL block rules, or show all rules if you do not specify a rule name,Please see the specific grammar [SHOW SQL BLOCK RULE](../../sql-manual/sql-statements/Show-Statements/SHOW-SQL-BLOCK-RULE.md)
 
 ```sql
 SHOW SQL_BLOCK_RULE [FOR RULE_NAME]
 ```
-- alter SQL block rule, Allows changes sql/sqlHash/global/enable/partition_num/tablet_num/cardinality anyone,Please see the specific grammar[ALTER SQL BLOCK  RULE](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.md)
+- alter SQL block rule, Allows changes sql/sqlHash/global/enable/partition_num/tablet_num/cardinality anyone,Please see the specific grammar[ALTER SQL BLOCK  RULE](../../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.md)
     - sql and sqlHash cannot be set both. It means if sql or sqlHash is set in a rule, another property will never be allowed to be altered
    - sql/sqlHash and partition_num/tablet_num/cardinality cannot be set together. For example, partition_num is set in a rule, then sql or sqlHash will never be allowed to be altered.
 ```sql
@@ -86,7 +86,7 @@ ALTER SQL_BLOCK_RULE test_rule PROPERTIES("sql"="select \\* from test_table","en
 ALTER SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = "10","tablet_num"="300","enable"="true")
 ```
 
-- drop SQL block rule, Support multiple rules, separated by `,`,Please see the specific grammar[DROP SQL BLOCK RULE](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.md)
+- drop SQL block rule, Support multiple rules, separated by `,`,Please see the specific grammar[DROP SQL BLOCK RULE](../../sql-manual/sql-statements/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.md)
 ```sql
 DROP SQL_BLOCK_RULE test_rule1,test_rule2
 ```
diff --git a/docs/admin-manual/resource-admin/compute-node.md b/docs/admin-manual/resource-admin/compute-node.md
index 6d6f18a3d85..6b1e8d7cd01 100644
--- a/docs/admin-manual/resource-admin/compute-node.md
+++ b/docs/admin-manual/resource-admin/compute-node.md
@@ -133,7 +133,7 @@ Moreover, as compute nodes are stateless Backend (BE) nodes, they can be easily
 
 3. Can compute nodes and mix nodes configure a file cache directory?
 
-    [File cache](./filecache.md) accelerates subsequent queries for the same data by caching data files from recently accessed remote storage systems (HDFS or object storage).
+    [File cache](../../lakehouse/filecache) accelerates subsequent queries for the same data by caching data files from recently accessed remote storage systems (HDFS or object storage).
    
     Both compute and mix nodes can set up a file cache directory, which needs to be created in advance.
     
diff --git a/docs/admin-manual/resource-admin/workload-group.md b/docs/admin-manual/resource-admin/workload-group.md
index 9294af55019..332dbf42152 100644
--- a/docs/admin-manual/resource-admin/workload-group.md
+++ b/docs/admin-manual/resource-admin/workload-group.md
@@ -24,9 +24,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# WORKLOAD GROUP
 
-<version since="dev"></version>
 
 The workload group can limit the use of compute and memory resources on a single be node for tasks within the group. Currently, query binding to workload groups is supported.
 
diff --git a/docs/admin-manual/small-file-mgr.md b/docs/admin-manual/small-file-mgr.md
index 0c73379f86e..e1a6fc0fe78 100644
--- a/docs/admin-manual/small-file-mgr.md
+++ b/docs/admin-manual/small-file-mgr.md
@@ -47,7 +47,7 @@ File management has three main commands: `CREATE FILE`, `SHOW FILE` and `DROP FI
 
 ### CREATE FILE
 
-This statement is used to create and upload a file to the Doris cluster. For details, see [CREATE FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.md).
+This statement is used to create and upload a file to the Doris cluster. For details, see [CREATE FILE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FILE.md).
 
 Examples:
 
@@ -75,7 +75,7 @@ Examples:
 
 ### SHOW FILE
 
-This statement can view the files that have been created successfully. For details, see [SHOW FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md).
+This statement can view the files that have been created successfully. For details, see [SHOW FILE](../sql-manual/sql-statements/Data-Definition-Statements/Drop/DROP-FILE.md).
 
 Examples:
 
@@ -87,7 +87,7 @@ Examples:
 
 ### DROP FILE
 
-This statement can view and delete an already created file. For specific operations, see [DROP FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md).
+This statement can view and delete an already created file. For specific operations, see [DROP FILE](../sql-manual/sql-statements/Data-Definition-Statements/Drop/DROP-FILE.md).
 
 Examples:
 
@@ -129,4 +129,4 @@ Because the file meta-information and content are stored in FE memory. So by def
 
 ## More Help
 
-For more detailed syntax and best practices used by the file manager, see [CREATE FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.md), [DROP FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md) and [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md) command manual, you can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySql client command line to get more help information.
+For more detailed syntax and best practices used by the file manager, see [CREATE FILE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FILE.md), [DROP FILE](../sql-manual/sql-statements/Data-Definition-Statements/Drop/DROP-FILE.md) and [SHOW FILE](../sql-manual/sql-statements/Show-Statements/SHOW-FILE.md) command manual, you can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySql client command line to get more help information.
diff --git a/docs/benchmark/tpcds.md b/docs/benchmark/tpcds.md
index 4e13b162d72..a90df892d90 100644
--- a/docs/benchmark/tpcds.md
+++ b/docs/benchmark/tpcds.md
@@ -52,7 +52,7 @@ On 99 queries on the TPC-DS standard test data set, we conducted a comparison te
 - Doris Deployed 3BEs and 1FE
 - Kernel Version: Linux version 5.4.0-96-generic (buildd@lgw01-amd64-051)
 - OS version: Ubuntu 20.04 LTS (Focal Fossa)
-- Doris software version: Apache Doris 2.1.1-rc03、 Apache Doris 2.0.6.
+- Doris software version: Apache Doris 2.1.1-rc03, Apache Doris 2.0.6.
 - JDK: openjdk version "1.8.0_131"
 
 ## 3. Test Data Volume
@@ -88,7 +88,7 @@ The TPC-DS 1000G data generated by the simulation of the entire test are respect
 
 ## 4. Test SQL
 
-TPC-DS 99 test query statements : [TPC-DS-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpcds-tools/queries/sf1000)
+TPC-DS 99 test query statements : [TPC-DS-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpcds-tools/queries/sf1000)
 
 ## 5. Test Results
 
@@ -199,7 +199,7 @@ Here we use Apache Doris 2.1.1-rc03 and Apache Doris 2.0.6 for comparative testi
 
 ## 6. Environmental Preparation
 
-Please refer to the [official document](../install/standard-deployment.md) to install and deploy Doris to obtain a normal running Doris cluster (at least 1 FE 1 BE, 1 FE 3 BE is recommended).
+Please refer to the [official document](../install/cluster-deployment/standard-deployment.md) to install and deploy Doris to obtain a normal running Doris cluster (at least 1 FE 1 BE, 1 FE 3 BE is recommended).
 
 ## 7. Data Preparation
 
diff --git a/docs/benchmark/tpch.md b/docs/benchmark/tpch.md
index ae772d91455..171b195b780 100644
--- a/docs/benchmark/tpch.md
+++ b/docs/benchmark/tpch.md
@@ -49,7 +49,7 @@ On 22 queries on the TPC-H standard test data set, we conducted a comparison tes
 - Doris Deployed 3BEs and 1FE
 - Kernel Version: Linux version 5.4.0-96-generic (buildd@lgw01-amd64-051)
 - OS version: Ubuntu 20.04 LTS (Focal Fossa)
-- Doris software version: Apache Doris 2.1.1-rc03、 Apache Doris 2.0.6.
+- Doris software version: Apache Doris 2.1.1-rc03, Apache Doris 2.0.6.
 - JDK: openjdk version "1.8.0_131"
 
 ## 3. Test Data Volume
@@ -69,7 +69,7 @@ The TPCH 1000G data generated by the simulation of the entire test are respectiv
 
 ## 4. Test SQL
 
-TPCH 22 test query statements : [TPCH-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpch-tools/queries/sf1000)
+TPCH 22 test query statements : [TPCH-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpch-tools/queries/sf1000)
 
 
 ## 5. Test Results
@@ -105,7 +105,7 @@ Here we use Apache Doris 2.1.1-rc03 and Apache Doris 2.0.6 for comparative testi
 
 ## 6. Environmental Preparation
 
-Please refer to the [official document](../install/standard-deployment.md) to install and deploy Doris to obtain a normal running Doris cluster (at least 1 FE 1 BE, 1 FE 3 BE is recommended).
+Please refer to the [official document](../install/cluster-deployment/standard-deployment.md) to install and deploy Doris to obtain a normal running Doris cluster (at least 1 FE 1 BE, 1 FE 3 BE is recommended).
 
 ## 7. Data Preparation
 
diff --git a/docs/ecosystem/dbt-doris-adapter.md b/docs/ecosystem/dbt-doris-adapter.md
index 0f15e7d7f42..39cae5b9d82 100644
--- a/docs/ecosystem/dbt-doris-adapter.md
+++ b/docs/ecosystem/dbt-doris-adapter.md
@@ -29,7 +29,7 @@ under the License.
 [DBT(Data Build Tool)](https://docs.getdbt.com/docs/introduction) is a component that focuses on doing T (Transform) in ELT (extraction, loading, transformation) - the "transformation data" link
 The `dbt-doris` adapter is developed based on `dbt-core` 1.5.0 and relies on the `mysql-connector-python` driver to convert data to doris.
 
-git:https://github.com/apache/doris/tree/master/extension/dbt-doris
+git: https://github.com/apache/doris/tree/master/extension/dbt-doris
 
 ## version
 
@@ -41,15 +41,15 @@ git:https://github.com/apache/doris/tree/master/extension/dbt-doris
 ## dbt-doris adapter Instructions
 
 ### dbt-doris adapter install
-use pip install:
+use pip install:
 ```shell
 pip install dbt-doris
 ```
-check version:
+check version:
 ```shell
 dbt --version
 ```
-if command not found: dbt:
+if command not found: dbt:
 ```shell
 ln -s /usr/local/python3/bin/dbt /usr/bin/dbt
 ```
@@ -63,7 +63,7 @@ Users need to prepare the following information to init dbt project
 | name     |  default | meaning                                                           |
 |----------|------|-------------------------------------------------------------------------------------------------------------------------------------------|
 | project  |      | project name                                                      |
-| database |      | Enter the corresponding number to select the adapter (选择doris)   |
+| database |      | Enter the corresponding number to select the adapter(选择 doris)   |
 | host     |      | doris host                                                        |
 | port     | 9030 | doris MySQL Protocol Port                                         |
 | schema   |      | In dbt-doris, it is equivalent to database, Database name         |
@@ -114,7 +114,7 @@ When using the `table` materialization mode, your model is 
rebuilt as a table at
 For the tablet materialization of dbt, dbt-doris uses the following steps to 
ensure the atomicity of data changes:
 1. first create a temporary table: `create table this_table_temp as {{ model 
sql}}`.
 2. Determine whether `this_table` does not exist, that is, it is created for 
the first time, execute `rename`, and change the temporary table to the final 
table.
-3. if already exists, then `alter table this_table REPLACE WITH TABLE 
this_table_temp PROPERTIES('swap' = 'False')`,This operation can exchange the 
table name and delete the `this_table_temp` temporary 
table,[this](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md)
 guarantees the atomicity of this operation through the transaction mechanism 
of the Doris.
+3. if it already exists, then `alter table this_table REPLACE WITH TABLE this_table_temp PROPERTIES('swap' = 'False')`. This operation swaps the table names and deletes the `this_table_temp` temporary table. [This](../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md) guarantees the atomicity of the operation through the transaction mechanism of Doris.
 
 ``` 
 Advantages: table query speed will be faster than view.
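To make the three steps above concrete, here is a minimal SQL sketch of the statement sequence the adapter issues; the model SQL and table names are hypothetical, and only the `CREATE TABLE ... AS`, `RENAME`, and `REPLACE WITH TABLE` statements mirror the ones quoted above.

```sql
-- Step 1: materialize the model result into a temporary table (model SQL is hypothetical).
CREATE TABLE this_table_temp AS
SELECT user_id, SUM(amount) AS total_amount
FROM fact_orders
GROUP BY user_id;

-- Step 2: on the first run the target does not exist yet, so a rename is enough:
--   ALTER TABLE this_table_temp RENAME this_table;

-- Step 3: on later runs, atomically swap in the new data and drop the temporary table.
ALTER TABLE this_table REPLACE WITH TABLE this_table_temp PROPERTIES('swap' = 'False');
```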
diff --git a/docs/ecosystem/flink-doris-connector.md 
b/docs/ecosystem/flink-doris-connector.md
index f0878edf8e5..c9a2576d277 100644
--- a/docs/ecosystem/flink-doris-connector.md
+++ b/docs/ecosystem/flink-doris-connector.md
@@ -741,7 +741,7 @@ WITH (
    'sink.label-prefix' = 'doris_label',
    'sink.properties.columns' = 'dt,page,user_id,user_id=to_bitmap(user_id)'
 )
-````
+```
 4. **errCode = 2, detailMessage = Label [label_0_1] has already been used, 
relate to txn [19650]**
 
 In the Exactly-Once scenario, the Flink Job must be restarted from the latest 
Checkpoint/Savepoint, otherwise the above error will be reported.
@@ -754,13 +754,13 @@ At this time, it cannot be started from the checkpoint, 
and the expiration time
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
-This is because the concurrent import of the same library exceeds 100, which 
can be solved by adjusting the parameter `max_running_txn_num_per_db` of 
fe.conf. For details, please refer to 
[max_running_txn_num_per_db](https://doris.apache.org/zh-CN/docs/dev/admin-manual/config/fe-config/#max_running_txn_num_per_db)
+This is because the concurrent import of the same library exceeds 100, which 
can be solved by adjusting the parameter `max_running_txn_num_per_db` of 
fe.conf. For details, please refer to 
[max_running_txn_num_per_db](../admin-manual/config/fe-config#max_running_txn_num_per_db)
 
 At the same time, if a task frequently modifies the label and restarts, it may 
also cause this error. In the 2pc scenario (Duplicate/Aggregate model), the 
label of each task needs to be unique, and when restarting from the checkpoint, 
the Flink task will actively abort the txn that has been successfully 
precommitted before and has not been committed. Frequently modifying the label 
and restarting will cause a large number of txn that have successfully 
precommitted to fail to be aborted, o [...]
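As a hedged illustration of the fix for item 6, the limit can be raised either in `fe.conf` or at runtime from a MySQL client on the master FE; the value 200 below is only an example, and a restart-persistent change still requires editing `fe.conf`.

```sql
-- Check the current per-database transaction limit, then raise it dynamically.
ADMIN SHOW FRONTEND CONFIG LIKE 'max_running_txn_num_per_db';
ADMIN SET FRONTEND CONFIG ("max_running_txn_num_per_db" = "200");
```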
 
 7. **How to ensure the order of a batch of data when Flink writes to the Uniq 
model?**
 
-You can add sequence column configuration to ensure that, for details, please 
refer to 
[sequence](https://doris.apache.org/zh-CN/docs/dev/data-operate/update-delete/sequence-column-manual)
+You can add sequence column configuration to ensure that, for details, please 
refer to [sequence](../data-operate/update/update-of-unique-model.md)
 
 8. **The Flink task does not report an error, but the data cannot be 
synchronized? **
 
diff --git a/docs/ecosystem/hive-bitmap-udf.md 
b/docs/ecosystem/hive-bitmap-udf.md
index 16b9d569e65..3fce7a1d78c 100644
--- a/docs/ecosystem/hive-bitmap-udf.md
+++ b/docs/ecosystem/hive-bitmap-udf.md
@@ -53,10 +53,10 @@ CREATE TABLE IF NOT EXISTS `hive_table`(
 ) comment  'comment'
 ```
 
-### Hive Bitmap UDF Usage:
+### Hive Bitmap UDF Usage:
 
    Hive Bitmap UDF used in Hive/Spark,First, you need to compile fe to get 
hive-udf-jar-with-dependencies.jar.
-   Compilation preparation:If you have compiled the ldb source code, you can 
directly compile fe,If you have compiled the ldb source code, you can compile 
it directly. If you have not compiled the ldb source code, you need to manually 
install thrift,
+   Compilation preparation: If you have compiled the ldb source code, you can compile fe directly. If you have not compiled the ldb source code, you need to manually install thrift.
    Reference:[Setting Up dev env for 
FE](https://doris.apache.org/community/developer-guide/fe-idea-dev/).
 
 ```sql
@@ -160,6 +160,6 @@ PROPERTIES (
 insert into doris_bitmap_table select k1, k2, k3, bitmap_from_base64(uuid) 
from hive.test.hive_bitmap_table;
 ```
 
-### Method 2:Spark Load
+### Method 2: Spark Load
 
- see details: [Spark 
Load](../data-operate/import/import-way/spark-load-manual.md) -> Basic 
operation -> Create load(Example 3: when the upstream data source is hive 
binary type table)
+ see details: [Spark 
Load](https://doris.apache.org/zh-CN/docs/1.2/data-operate/import/import-way/spark-load-manual)
 -> Basic operation -> Create load(Example 3: when the upstream data source is 
hive binary type table)
diff --git a/docs/ecosystem/hive-hll-udf.md b/docs/ecosystem/hive-hll-udf.md
index 058f0b224db..89584eff00c 100644
--- a/docs/ecosystem/hive-hll-udf.md
+++ b/docs/ecosystem/hive-hll-udf.md
@@ -26,7 +26,7 @@ under the License.
 
 # Hive HLL UDF
 
- The Hive HLL UDF provides a set of UDFs for generating HLL operations in Hive 
tables, which are identical to Doris HLL. Hive HLL can be imported into Doris 
through Spark HLL Load. For more information about HLL, please refer to Using 
HLL for Approximate Deduplication.:[Approximate Deduplication Using 
HLL](../query/duplicate/using-hll.md)
+ The Hive HLL UDF provides a set of UDFs for generating HLL operations in Hive tables, which are identical to Doris HLL. Hive HLL can be imported into Doris through Spark HLL Load. For more information about HLL, please refer to [Approximate Deduplication Using HLL](../query/duplicate/using-hll.md).
 
  Function Introduction:
   1. UDAF
@@ -39,7 +39,7 @@ under the License.
 
     · hll_cardinality: Returns the number of distinct elements added to the 
HLL, similar to the bitmap_count function
 
- Main Purpose:
+ Main Purpose:
   1. Reduce data import time to Doris by eliminating the need for dictionary 
construction and HLL pre-aggregation
   2. Save Hive storage by compressing data using HLL, significantly reducing 
storage costs compared to Bitmap statistics
   3. Provide flexible HLL operations in Hive, including union and cardinality 
statistics, and allow the resulting HLL to be directly imported into Doris
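A short sketch of how the functions listed above combine, assuming the UDFs/UDAF have already been registered in Hive as described later in this document; the table and column names are hypothetical, and the call pattern follows the `hll_cardinality(hll_union(hll_from_base64(...)))` example shown further below.

```sql
-- Approximate distinct users per day from a Hive table that stores
-- base64-encoded HLL sketches in the uuid_hll column.
SELECT dt,
       hll_cardinality(hll_union(hll_from_base64(uuid_hll))) AS approx_uv
FROM hive_hll_table
GROUP BY dt;
```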
@@ -249,4 +249,4 @@ select k3, 
hll_cardinality(hll_union(hll_from_base64(uuid))) from hive.hive_test
 
 ### Method 2: Spark Load
 
- See details: [Spark 
Load](../data-operate/import/import-way/spark-load-manual.md) -> Basic 
operation -> Creating Load (Example 3: when the upstream data source is hive 
binary type table)
+ See details: [Spark 
Load](https://doris.apache.org/zh-CN/docs/1.2/data-operate/import/import-way/spark-load-manual)
 -> Basic operation -> Creating Load (Example 3: when the upstream data source 
is hive binary type table)
diff --git a/docs/faq/install-faq.md b/docs/faq/install-faq.md
index b0ff867bd93..b1c401a0361 100644
--- a/docs/faq/install-faq.md
+++ b/docs/faq/install-faq.md
@@ -267,7 +267,7 @@ This is a bug in bdbje that has not yet been resolved. In 
this case, you can onl
 
 ### Q12. Doris compile and install JDK version incompatibility problem
 
-When compiling Doris using Docker, start FE after compiling and installing, 
and the exception message `java.lang.Suchmethoderror: java.nio.ByteBuffer.limit 
(I)Ljava/nio/ByteBuffer;` appears, this is because the default in Docker It is 
JDK 11. If your installation environment is using JDK8, you need to switch the 
JDK environment to JDK8 in Docker. For the specific switching method, please 
refer to [Compile 
Documentation](../install/source-install/compilation-general.md)
+When compiling Doris using Docker, if you start FE after compiling and installing and the exception message `java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer;` appears, this is because the default JDK in the Docker image is JDK 11. If your installation environment uses JDK 8, you need to switch the JDK environment to JDK 8 in Docker. For the specific switching method, please refer to [Compile Documentation](../install/source-install/compilation-with-docker)
 
 ### Q13. Error starting FE or unit test locally Cannot find external parser 
table action_table.dat
 Run the following command
@@ -285,7 +285,7 @@ In doris 1.0 onwards, openssl has been upgraded to 1.1 and 
is built into the dor
 ```
 ERROR 1105 (HY000): errCode = 2, detailMessage = driver connect Error: HY000 
[MySQL][ODBC 8.0(w) Driver]SSL connection error: Failed to set ciphers to use 
(2026)
 ```
-The solution is to use the `Connector/ODBC 8.0.28` version of ODBC Connector 
and select `Linux - Generic` in the operating system, this version of ODBC 
Driver uses openssl version 1.1. Or use a lower version of ODBC connector, e.g. 
[Connector/ODBC 
5.3.14](https://dev.mysql.com/downloads/connector/odbc/5.3.html). For details, 
see the [ODBC exterior documentation](../lakehouse/external-table/odbc.md).
+The solution is to use the `Connector/ODBC 8.0.28` version of ODBC Connector 
and select `Linux - Generic` in the operating system, this version of ODBC 
Driver uses openssl version 1.1. Or use a lower version of ODBC connector, e.g. 
[Connector/ODBC 
5.3.14](https://dev.mysql.com/downloads/connector/odbc/5.3.html). For details, 
see the [ODBC exterior 
documentation](https://doris.apache.org/docs/1.2/lakehouse/external-table/odbc).
 
 You can verify the version of openssl used by MySQL ODBC Driver by
 
diff --git a/docs/faq/sql-faq.md b/docs/faq/sql-faq.md
index 9e38eced91e..769c773f62f 100644
--- a/docs/faq/sql-faq.md
+++ b/docs/faq/sql-faq.md
@@ -65,7 +65,7 @@ For example, the table is defined as k1, v1. A batch of 
imported data is as foll
 
 Then maybe the result of copy 1 is `1, "abc"`, and the result of copy 2 is `1, 
"def"`. As a result, the query results are inconsistent.
 
-To ensure that the data sequence between different replicas is unique, you can 
refer to the [Sequence 
Column](../data-operate/update-delete/sequence-column-manual.md) function.
+To ensure that the data sequence between different replicas is unique, you can 
refer to the [Sequence 
Column](../data-operate/update/update-of-unique-model.md) function.
 
 ### Q5. The problem of querying bitmap/hll type data returns NULL
 
@@ -95,7 +95,7 @@ If the `curl 77: Problem with the SSL CA cert` error appears 
in the be.INFO log.
 2. Copy the certificate to the specified location: `sudo cp /tmp/cacert.pem 
/etc/ssl/certs/ca-certificates.crt`
 3. Restart the BE node.
 
-### Q7. import error:"Message": "[INTERNAL_ERROR]single replica load is 
disabled on BE."
+### Q7. import error:"Message": "[INTERNAL_ERROR]single replica load is 
disabled on BE."
 
 1. Make sure this parameters `enable_single_replica_load` in be.conf is set 
true
 2.  Restart the BE node.
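Relating to the sequence column mentioned in Q4, a minimal sketch of binding a sequence column to a Unique-key table so the row with the larger sequence value wins; the table and column names are hypothetical and the property name assumes Doris 2.x syntax.

```sql
-- Hypothetical Unique-key table: update_ts acts as the sequence column,
-- so duplicated or replayed loads resolve to the row with the newest timestamp.
CREATE TABLE example_db.user_tags
(
    user_id   BIGINT,
    tag       VARCHAR(64),
    update_ts BIGINT
)
UNIQUE KEY(user_id)
DISTRIBUTED BY HASH(user_id) BUCKETS 8
PROPERTIES
(
    "replication_num" = "1",
    "function_column.sequence_col" = "update_ts"
);
```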
diff --git a/docs/sql-manual/sql-functions/string-functions/like/like.md 
b/docs/sql-manual/sql-functions/string-functions/fuzzy-match/like.md
similarity index 100%
rename from docs/sql-manual/sql-functions/string-functions/like/like.md
rename to docs/sql-manual/sql-functions/string-functions/fuzzy-match/like.md
diff --git a/docs/sql-manual/sql-functions/string-functions/like/not-like.md 
b/docs/sql-manual/sql-functions/string-functions/fuzzy-match/not-like.md
similarity index 100%
rename from docs/sql-manual/sql-functions/string-functions/like/not-like.md
rename to docs/sql-manual/sql-functions/string-functions/fuzzy-match/not-like.md
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/like/like.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/fuzzy-match/like.md
similarity index 100%
rename from 
i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/like/like.md
rename to 
i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/fuzzy-match/like.md
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/like/not-like.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/fuzzy-match/not-like.md
similarity index 100%
rename from 
i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/like/not-like.md
rename to 
i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-functions/string-functions/fuzzy-match/not-like.md
diff --git a/sidebars.json b/sidebars.json
index 825c1b48f1c..6cf7246b378 100644
--- a/sidebars.json
+++ b/sidebars.json
@@ -747,8 +747,8 @@
                                     "type": "category",
                                     "label": "Fuzzy Match",
                                     "items": [
-                                        
"sql-manual/sql-functions/string-functions/like/like",
-                                        
"sql-manual/sql-functions/string-functions/like/not-like"
+                                        
"sql-manual/sql-functions/string-functions/fuzzy-match/like",
+                                        
"sql-manual/sql-functions/string-functions/fuzzy-match/not-like"
                                     ]
                                 },
                                 {
@@ -1524,4 +1524,4 @@
             ]
         }
     ]
-}
+}
\ No newline at end of file
diff --git a/versioned_docs/version-1.2/benchmark/tpch.md 
b/versioned_docs/version-1.2/benchmark/tpch.md
index b3a73f55e91..985929ed26d 100644
--- a/versioned_docs/version-1.2/benchmark/tpch.md
+++ b/versioned_docs/version-1.2/benchmark/tpch.md
@@ -51,7 +51,7 @@ On 22 queries on the TPC-H standard test data set, we 
conducted a comparison tes
 - Doris Deployed 3BEs and 1FE
 - Kernel Version: Linux version 5.4.0-96-generic (buildd@lgw01-amd64-051)
 - OS version: CentOS 7.8
-- Doris software version: Apache Doris 1.2.0-rc01、 Apache Doris 1.1.3 、 Apache 
Doris 0.15.0 RC04
+- Doris software version: Apache Doris 1.2.0-rc01, Apache Doris 1.1.3, Apache Doris 0.15.0 RC04
 - JDK: openjdk version "11.0.14" 2022-01-18
 
 ## 3. Test Data Volume
@@ -71,7 +71,7 @@ The TPCH 100G data generated by the simulation of the entire 
test are respective
 
 ## 4. Test SQL
 
-TPCH 22 test query statements : 
[TPCH-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpch-tools/queries)
+TPCH 22 test query statements : 
[TPCH-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpch-tools/queries)
 
 **Notice:**
 
diff --git 
a/versioned_docs/version-2.0/admin-manual/cluster-management/elastic-expansion.md
 
b/versioned_docs/version-2.0/admin-manual/cluster-management/elastic-expansion.md
index f7a8fd8cc44..b71973d859c 100644
--- 
a/versioned_docs/version-2.0/admin-manual/cluster-management/elastic-expansion.md
+++ 
b/versioned_docs/version-2.0/admin-manual/cluster-management/elastic-expansion.md
@@ -106,7 +106,7 @@ You can also view the BE node through the front-end page 
connection: ``http://fe
 
 All of the above methods require Doris's root user rights.
 
-The expansion and scaling process of BE nodes does not affect the current 
system operation and the tasks being performed, and does not affect the 
performance of the current system. Data balancing is done automatically. 
Depending on the amount of data available in the cluster, the cluster will be 
restored to load balancing in a few hours to a day. For cluster load, see the 
[Tablet Load Balancing Document](../cluster-management/elastic-expansion.md).
+The expansion and scaling process of BE nodes does not affect the current 
system operation and the tasks being performed, and does not affect the 
performance of the current system. Data balancing is done automatically. 
Depending on the amount of data available in the cluster, the cluster will be 
restored to load balancing in a few hours to a day. For cluster load, see the 
[Tablet Load Balancing Document](../cluster-management/load-balancing).
 
 ### Add BE nodes
 
diff --git 
a/versioned_docs/version-2.0/admin-manual/config/fe-config-template.md 
b/versioned_docs/version-2.0/admin-manual/config/fe-config-template.md
index 2ae517ac243..110938fb4fb 100644
--- a/versioned_docs/version-2.0/admin-manual/config/fe-config-template.md
+++ b/versioned_docs/version-2.0/admin-manual/config/fe-config-template.md
@@ -93,7 +93,7 @@ There are two ways to configure FE configuration items:
     
 3. Dynamic configuration via HTTP protocol
 
-    For details, please refer to [Set Config 
Action](../http-actions/fe/set-config-action.md)
+    For details, please refer to [Set Config Action](../fe/set-config-action)
 
     This method can also persist the modified configuration items. The 
configuration items will be persisted in the `fe_custom.conf` file and will 
still take effect after FE is restarted.
 
diff --git 
a/versioned_docs/version-2.0/admin-manual/maint-monitor/disk-capacity.md 
b/versioned_docs/version-2.0/admin-manual/maint-monitor/disk-capacity.md
index f211faf1f6a..0703f95f29b 100644
--- a/versioned_docs/version-2.0/admin-manual/maint-monitor/disk-capacity.md
+++ b/versioned_docs/version-2.0/admin-manual/maint-monitor/disk-capacity.md
@@ -162,6 +162,6 @@ When the disk capacity is higher than High Watermark or 
even Flood Stage, many o
 
         ```rm -rf data/0/12345/```
 
-    * Delete tablet metadata refer to [Tablet metadata management 
tool](tablet-meta-tool.md)
+    * Delete tablet metadata refer to [Tablet metadata management 
tool](./tablet-meta-tool.md)
 
         ```./lib/meta_tool --operation=delete_header 
--root_path=/path/to/root_path --tablet_id=12345 --schema_hash= 352781111```
diff --git 
a/versioned_docs/version-2.0/admin-manual/resource-admin/workload-group.md 
b/versioned_docs/version-2.0/admin-manual/resource-admin/workload-group.md
index 306f92bad1e..3a148d2c9c6 100644
--- a/versioned_docs/version-2.0/admin-manual/resource-admin/workload-group.md
+++ b/versioned_docs/version-2.0/admin-manual/resource-admin/workload-group.md
@@ -1,6 +1,6 @@
 ---
 {
-    "title": "WORKLOAD GROUP",
+    "title": "Workload Group",
     "language": "en"
 }
 ---
@@ -24,9 +24,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# WORKLOAD GROUP
 
-<version since="dev"></version>
 
 The workload group can limit the use of compute and memory resources on a 
single be node for tasks within the group. Currently, query binding to workload 
groups is supported.
 
diff --git a/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md 
b/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md
index 997004b7c10..4d6b85c905a 100644
--- a/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md
+++ b/versioned_docs/version-2.0/ecosystem/flink-doris-connector.md
@@ -660,7 +660,7 @@ At this time, it cannot be started from the checkpoint, and 
the expiration time
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
-This is because the concurrent import of the same library exceeds 100, which 
can be solved by adjusting the parameter `max_running_txn_num_per_db` of 
fe.conf. For details, please refer to 
[max_running_txn_num_per_db](https://doris.apache.org/zh-CN/docs/dev/admin-manual/config/fe-config/#max_running_txn_num_per_db)
+This is because the concurrent import of the same library exceeds 100, which 
can be solved by adjusting the parameter `max_running_txn_num_per_db` of 
fe.conf. For details, please refer to 
[max_running_txn_num_per_db](../admin-manual/config/fe-config#max_running_txn_num_per_db)
 
 At the same time, if a task frequently modifies the label and restarts, it may 
also cause this error. In the 2pc scenario (Duplicate/Aggregate model), the 
label of each task needs to be unique, and when restarting from the checkpoint, 
the Flink task will actively abort the txn that has been successfully 
precommitted before and has not been committed. Frequently modifying the label 
and restarting will cause a large number of txn that have successfully 
precommitted to fail to be aborted, o [...]
 
diff --git a/versioned_docs/version-2.1/admin-manual/audit-plugin.md 
b/versioned_docs/version-2.1/admin-manual/audit-plugin.md
index 7cbd99963ae..4805a82d619 100644
--- a/versioned_docs/version-2.1/admin-manual/audit-plugin.md
+++ b/versioned_docs/version-2.1/admin-manual/audit-plugin.md
@@ -91,7 +91,7 @@ The audit log plug-in framework is enabled by default in 
Doris and is controlled
     * plugin.conf: plugin configuration file.
 
 You can place this file on an http download server or copy(or unzip) it to the 
specified directory of all FEs. Here we use the latter.  
-The installation of this plugin can be found in 
[INSTALL](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
+The installation of this plugin can be found in 
[INSTALL](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN.md)
  
 After executing install, the AuditLoader directory will be automatically 
generated.
 
 3. Modify plugin.conf
@@ -212,7 +212,7 @@ Install the audit loader plugin:
 INSTALL PLUGIN FROM [source] [PROPERTIES ("key"="value", ...)]
 ```
 
-Detailed command reference: 
[INSTALL-PLUGIN.md](../sql-manual/sql-reference/Database-Administration-Statements/INSTALL-PLUGIN)
+Detailed command reference: 
[INSTALL-PLUGIN.md](../sql-manual/sql-statements/Database-Administration-Statements/INSTALL-PLUGIN)
 
 After successful installation, you can see the installed plug-ins through 
`SHOW PLUGINS`, and the status is `INSTALLED`.
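As a hedged example of the command above, installing from a local, already-unzipped plugin directory (the path is hypothetical; installing from an HTTP zip package typically also requires an md5sum property):

```sql
-- Install the audit loader plugin from a local directory on the FE node,
-- then confirm that its status is INSTALLED.
INSTALL PLUGIN FROM "/opt/doris/fe/plugins/AuditLoader";
SHOW PLUGINS;
```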
 
diff --git 
a/versioned_docs/version-2.1/admin-manual/cluster-management/elastic-expansion.md
 
b/versioned_docs/version-2.1/admin-manual/cluster-management/elastic-expansion.md
index f7a8fd8cc44..b71973d859c 100644
--- 
a/versioned_docs/version-2.1/admin-manual/cluster-management/elastic-expansion.md
+++ 
b/versioned_docs/version-2.1/admin-manual/cluster-management/elastic-expansion.md
@@ -106,7 +106,7 @@ You can also view the BE node through the front-end page 
connection: ``http://fe
 
 All of the above methods require Doris's root user rights.
 
-The expansion and scaling process of BE nodes does not affect the current 
system operation and the tasks being performed, and does not affect the 
performance of the current system. Data balancing is done automatically. 
Depending on the amount of data available in the cluster, the cluster will be 
restored to load balancing in a few hours to a day. For cluster load, see the 
[Tablet Load Balancing Document](../cluster-management/elastic-expansion.md).
+The expansion and scaling process of BE nodes does not affect the current 
system operation and the tasks being performed, and does not affect the 
performance of the current system. Data balancing is done automatically. 
Depending on the amount of data available in the cluster, the cluster will be 
restored to load balancing in a few hours to a day. For cluster load, see the 
[Tablet Load Balancing Document](../cluster-management/load-balancing).
 
 ### Add BE nodes
 
diff --git a/versioned_docs/version-2.1/admin-manual/cluster-management/fqdn.md 
b/versioned_docs/version-2.1/admin-manual/cluster-management/fqdn.md
index 6d99b3e9b48..4f489df2ddd 100644
--- a/versioned_docs/version-2.1/admin-manual/cluster-management/fqdn.md
+++ b/versioned_docs/version-2.1/admin-manual/cluster-management/fqdn.md
@@ -58,14 +58,14 @@ After Doris supports FQDN, communication between nodes is 
entirely based on FQDN
    ```
 4. Verification: It can 'ping fe2' on FE1, and can resolve the correct IP 
address and ping it, indicating that the network environment is available.
 5. fe.conf settings for each FE node ` enable_ fqdn_ mode = true`.
-6. Refer to[Standard deployment](../../install/standard-deployment.md)
+6. Refer to[Standard 
deployment](../../install/cluster-deployment/standard-deployment)
 7. Select several machines to deploy broker on six machines as needed, and 
execute `ALTER SYSTEM ADD BROKER broker_name "fe1:8000","be1:8000",...;`.
 
 ### Deployment of Doris for K8S
 
 After an unexpected restart of the Pod, K8s cannot guarantee that the Pod's IP 
will not change, but it can ensure that the domain name remains unchanged. 
Based on this feature, when Doris enables FQDN, it can ensure that the Pod can 
still provide services normally after an unexpected restart.
 
-Please refer to the method for deploying Doris in K8s[Kubernetes 
Deployment](../../install/k8s-deploy/operator-deploy.md)
+Please refer to the method for deploying Doris in K8s[Kubernetes 
Deployment](../../install/cluster-deployment/k8s-deploy/install-operator)
 
 ### Server change IP
 
diff --git a/versioned_docs/version-2.1/admin-manual/config/be-config.md 
b/versioned_docs/version-2.1/admin-manual/config/be-config.md
index 52fe5abc030..b5cb83898f9 100644
--- a/versioned_docs/version-2.1/admin-manual/config/be-config.md
+++ b/versioned_docs/version-2.1/admin-manual/config/be-config.md
@@ -158,8 +158,8 @@ There are two ways to configure BE configuration items:
 
   eg.2: 
`storage_root_path=/home/disk1/doris,medium:hdd;/home/disk2/doris,medium:ssd`
 
-    - 1./home/disk1/doris,medium:hdd,indicates that the storage medium is HDD;
-    - 2./home/disk2/doris,medium:ssd,indicates that the storage medium is SSD;
+    - 1./home/disk1/doris,medium:hdd, indicates that the storage medium is HDD;
+    - 2./home/disk2/doris,medium:ssd, indicates that the storage medium is SSD;
 
 * Default value: ${DORIS_HOME}/storage
 
@@ -346,7 +346,7 @@ There are two ways to configure BE configuration items:
 #### `doris_max_scan_key_num`
 
 * Type: int
-* Description: Used to limit the maximum number of scan keys that a scan node 
can split in a query request. When a conditional query request reaches the scan 
node, the scan node will try to split the conditions related to the key column 
in the query condition into multiple scan key ranges. After that, these scan 
key ranges will be assigned to multiple scanner threads for data scanning. A 
larger value usually means that more scanner threads can be used to increase 
the parallelism of the s [...]
+* Description: Used to limit the maximum number of scan keys that a scan node 
can split in a query request. When a conditional query request reaches the scan 
node, the scan node will try to split the conditions related to the key column 
in the query condition into multiple scan key ranges. After that, these scan 
key ranges will be assigned to multiple scanner threads for data scanning. A 
larger value usually means that more scanner threads can be used to increase 
the parallelism of the s [...]
   - When the concurrency cannot be improved in high concurrency scenarios, try 
to reduce this value and observe the impact.
 * Default value: 48
 
@@ -400,7 +400,7 @@ There are two ways to configure BE configuration items:
 #### `max_pushdown_conditions_per_column`
 
 * Type: int
-* Description: Used to limit the maximum number of conditions that can be 
pushed down to the storage engine for a single column in a query request. 
During the execution of the query plan, the filter conditions on some columns 
can be pushed down to the storage engine, so that the index information in the 
storage engine can be used for data filtering, reducing the amount of data that 
needs to be scanned by the query. Such as equivalent conditions, conditions in 
IN predicates, etc. In most  [...]
+* Description: Used to limit the maximum number of conditions that can be 
pushed down to the storage engine for a single column in a query request. 
During the execution of the query plan, the filter conditions on some columns 
can be pushed down to the storage engine, so that the index information in the 
storage engine can be used for data filtering, reducing the amount of data that 
needs to be scanned by the query. Such as equivalent conditions, conditions in 
IN predicates, etc. In most  [...]
 * Default value: 1024
 
 * Example
@@ -1066,18 +1066,18 @@ BaseCompaction:546859:
 #### `generate_cache_cleaner_task_interval_sec`
 
 * Type:int64
-* Description:Cleaning interval of cache files, in seconds
-* Default:43200(12 hours)
+* Description: Cleaning interval of cache files, in seconds
+* Default: 43200 (12 hours)
 
 #### `path_gc_check`
 
 * Type:bool
-* Description:Whether to enable the recycle scan data thread check
+* Description: Whether to enable the recycle scan data thread check
 * Default:true
 
 #### `path_gc_check_interval_second`
 
-* Description:Recycle scan data thread check interval
+* Description: Recycle scan data thread check interval
 * Default:86400 (s)
 
 #### `path_gc_check_step`
@@ -1094,7 +1094,7 @@ BaseCompaction:546859:
 
 #### `scan_context_gc_interval_min`
 
-* Description:This configuration is used for the context gc thread scheduling 
cycle. Note: The unit is minutes, and the default is 5 minutes
+* Description: This configuration is used for the context gc thread scheduling 
cycle. Note: The unit is minutes, and the default is 5 minutes
 * Default:5
 
 ### Storage
@@ -1114,7 +1114,7 @@ BaseCompaction:546859:
 #### `disk_stat_monitor_interval`
 
 * Description: Disk status check interval
-* Default value: 5(s)
+* Default value: 5 (s)
 
 #### `max_free_io_buffers`
 
@@ -1165,7 +1165,7 @@ BaseCompaction:546859:
 #### `storage_flood_stage_usage_percent`
 
 * Description: The storage_flood_stage_usage_percent and 
storage_flood_stage_left_capacity_bytes configurations limit the maximum usage 
of the capacity of the data directory.
-* Default value: 90 (90%)
+* Default value: 90 (90%)
 
 #### `storage_medium_migrate_count`
 
@@ -1245,7 +1245,7 @@ BaseCompaction:546859:
 
 #### `tablet_meta_checkpoint_min_interval_secs`
 
-* Description: TabletMeta Checkpoint线程轮询的时间间隔
+* Description: The polling interval of the TabletMeta Checkpoint thread
 * Default value: 600 (s)
 
 #### `tablet_meta_checkpoint_min_new_rowsets_num`
@@ -1422,7 +1422,7 @@ Indicates how many tablets failed to load in the data 
directory. At the same tim
 #### `max_download_speed_kbps`
 
 * Description: Maximum download speed limit
-* Default value: 50000 (kb/s)
+* Default value: 50000 (kb/s)
 
 #### `download_low_speed_time`
 
@@ -1493,7 +1493,7 @@ Indicates how many tablets failed to load in the data 
directory. At the same tim
 
 #### `group_commit_memory_rows_for_max_filter_ratio`
 
-* Description: The `max_filter_ratio` limit can only work if the total rows of 
`group commit` is less than this value. See [Group 
Commit](../../data-operate/import/import-way/group-commit-manual.md) for more 
details
+* Description: The `max_filter_ratio` limit can only work if the total rows of 
`group commit` is less than this value. See [Group 
Commit](../../data-operate/import/group-commit-manual.md) for more details
 * Default: 10000
 
 #### `default_tzfiles_path`
diff --git a/versioned_docs/version-2.1/admin-manual/config/fe-config.md 
b/versioned_docs/version-2.1/admin-manual/config/fe-config.md
index c7127cba8c2..4abb1bfd04b 100644
--- a/versioned_docs/version-2.1/admin-manual/config/fe-config.md
+++ b/versioned_docs/version-2.1/admin-manual/config/fe-config.md
@@ -48,7 +48,7 @@ There are two ways to view the configuration items of FE:
 
 2. View by command
 
-    After the FE is started, you can view the configuration items of the FE in 
the MySQL client with the following command,Concrete language law reference 
[SHOW-CONFIG](../../sql-manual/sql-reference/Database-Administration-Statements/SHOW-CONFIG.md):
+    After the FE is started, you can view the configuration items of the FE in 
the MySQL client with the following command,Concrete language law reference 
[SHOW-CONFIG](../../sql-manual/sql-statements/Database-Administration-Statements/SHOW-CONFIG.md):
 
     `SHOW FRONTEND CONFIG;`
 
@@ -85,7 +85,7 @@ There are two ways to configure FE configuration items:
 
 3. Dynamic configuration via HTTP protocol
 
-    For details, please refer to [Set Config 
Action](../http-actions/fe/set-config-action.md)
+    For details, please refer to [Set Config Action](../fe/set-config-action)
 
     This method can also persist the modified configuration items. The 
configuration items will be persisted in the `fe_custom.conf` file and will 
still take effect after FE is restarted.
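Complementing the viewing command and the HTTP method described above, mutable items can also be changed at runtime from a MySQL client on the master FE; the key and value below are purely illustrative (`catalog_trash_expire_second` is listed later in this document as IsMutable).

```sql
-- Inspect a single FE configuration item, then change it dynamically.
-- Changes made this way are not persisted across FE restarts unless written to fe.conf.
ADMIN SHOW FRONTEND CONFIG LIKE 'catalog_trash_expire_second';
ADMIN SET FRONTEND CONFIG ("catalog_trash_expire_second" = "172800");
```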
 
@@ -177,13 +177,13 @@ Num of thread to handle grpc events in grpc_threadmgr.
 
 Default:10  (s)
 
-The replica ack timeout when writing to bdbje , When writing some relatively 
large logs, the ack time may time out, resulting in log writing failure.  At 
this time, you can increase this value appropriately.
+The replica ack timeout when writing to bdbje. When writing some relatively large logs, the ack may time out, resulting in log write failure. In this case, you can increase this value appropriately.
 
 #### `bdbje_lock_timeout_second`
 
 Default:5
 
-The lock timeout of bdbje operation, If there are many LockTimeoutException in 
FE WARN log, you can try to increase this value
+The lock timeout of bdbje operations. If there are many LockTimeoutExceptions in the FE WARN log, you can try to increase this value.
 
 #### `bdbje_heartbeat_timeout_second`
 
@@ -195,7 +195,7 @@ The heartbeat timeout of bdbje between master and follower. 
the default is 30 se
 
 Default:SIMPLE_MAJORITY
 
-OPTION:ALL, NONE, SIMPLE_MAJORITY
+OPTION: ALL, NONE, SIMPLE_MAJORITY
 
 Replica ack policy of bdbje. more info, see: 
http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/je/Durability.ReplicaAckPolicy.html
 
@@ -236,7 +236,7 @@ This is helpful when you try to stop the Master FE for a 
relatively long time fo
 
 #### `meta_delay_toleration_second`
 
-Default:300 (5 min)
+Default: 300 (5 min)
 
 Non-master FE will stop offering service  if meta data delay gap exceeds 
*meta_delay_toleration_second*
 
@@ -324,7 +324,7 @@ Default:true
 
 IsMutable:true
 
-The multi cluster feature will be deprecated in version 0.12 ,set this config 
to true will disable all operations related to cluster feature, include:
+The multi cluster feature will be deprecated in version 0.12. Setting this config to true will disable all operations related to the cluster feature, including:
 
 1. create/drop cluster
 2. add free backend/add backend to cluster/decommission cluster balance
@@ -416,7 +416,7 @@ Default value: 0.0.0.0
 
 Default:none
 
-Declare a selection strategy for those servers have many ips.  Note that there 
should at most one ip match this list.  this is a list in semicolon-delimited 
format, in CIDR notation, e.g. 10.10.10.0/24 , If no ip match this rule, will 
choose one randomly.
+Declare a selection strategy for servers that have many IPs. Note that at most one IP should match this list. This is a semicolon-delimited list in CIDR notation, e.g. 10.10.10.0/24. If no IP matches this rule, one will be chosen randomly.
 
 #### `http_port`
 
@@ -481,7 +481,7 @@ The thrift server max worker threads
 
 Default:1024
 
-The backlog_num for thrift server , When you enlarge this backlog_num, you 
should ensure it's value larger than the linux /proc/sys/net/core/somaxconn 
config
+The backlog_num for the thrift server. When you enlarge this backlog_num, you should ensure its value is larger than the Linux /proc/sys/net/core/somaxconn config.
 
 #### `thrift_client_timeout_ms`
 
@@ -557,7 +557,7 @@ MasterOnly:true
 
 #### `max_backend_down_time_second`
 
-Default:3600  (1 hour)
+Default: 3600  (1 hour)
 
 IsMutable:true
 
@@ -637,7 +637,7 @@ Default:30000  (ms)
 
 IsMutable:true
 
-The timeout of executing async remote fragment.  In normal case, the async 
remote fragment will be executed in a short time. If system are under high load 
condition,try to set this timeout longer.
+The timeout for executing an async remote fragment. In normal cases, the async remote fragment will be executed in a short time. If the system is under high load, try setting this timeout longer.
 
 #### `auth_token`
 
@@ -647,7 +647,7 @@ Cluster token used for internal authentication.
 
 #### `enable_http_server_v2`
 
-Default:The default is true after the official 0.14.0 version is released, and 
the default is false before
+Default: The default is true after the official 0.14.0 version is released, 
and the default is false before
 
 HTTP Server V2 is implemented by SpringBoot. It uses an architecture that 
separates the front and back ends. Only when HTTPv2 is enabled can users use 
the new front-end UI interface.
 
@@ -1009,7 +1009,7 @@ Default:1
 
 IsMutable:true
 
-Colocote join PlanFragment instance的memory_limit = exec_mem_limit / min 
(query_colocate_join_memory_limit_penalty_factor, instance_num)
+Colocate join PlanFragment instance memory_limit = exec_mem_limit / min(query_colocate_join_memory_limit_penalty_factor, instance_num)
 
 #### `rewrite_count_distinct_to_bitmap_hll`
 
@@ -1115,7 +1115,7 @@ IsMutable:true
 
 MasterOnly:true
 
-Max number of load jobs, include PENDING、ETL、LOADING、QUORUM_FINISHED. If 
exceed this number, load job is not allowed to be submitted
+Max number of load jobs, including PENDING, ETL, LOADING, QUORUM_FINISHED. If this number is exceeded, load jobs are not allowed to be submitted.
 
 #### `db_used_data_quota_update_interval_secs`
 
@@ -1257,7 +1257,7 @@ IsMutable:true
 
 MasterOnly:true
 
-Default number of waiting jobs for routine load and version 2 of load , This 
is a desired number.  In some situation, such as switch the master, the current 
number is maybe more than desired_max_waiting_jobs.
+Default number of waiting jobs for routine load and version 2 of load. This is a desired number. In some situations, such as switching the master, the current number may be more than desired_max_waiting_jobs.
 
 #### `disable_hadoop_load`
 
@@ -1345,7 +1345,7 @@ Min stream load timeout applicable to all type of load
 
 #### `max_stream_load_timeout_second`
 
-Default:259200 (3 day)
+Default: 259200 (3 day)
 
 IsMutable:true
 
@@ -1355,7 +1355,7 @@ This configuration is specifically used to limit timeout 
setting for stream load
 
 #### `max_load_timeout_second`
 
-Default:259200 (3 day)
+Default: 259200 (3 day)
 
 IsMutable:true
 
@@ -1365,7 +1365,7 @@ Max load timeout applicable to all type of load except 
for stream load
 
 #### `stream_load_default_timeout_second`
 
-Default:86400 * 3 (3 day)
+Default: 86400 * 3 (3 day)
 
 IsMutable:true
 
@@ -1396,7 +1396,7 @@ When HTTP header `memtable_on_sink_node` is not set.
 
 #### `insert_load_default_timeout_second`
 
-Default:3600(1 hour)
+Default: 3600 (1 hour)
 
 IsMutable:true
 
@@ -1406,7 +1406,7 @@ Default insert load timeout
 
 #### `mini_load_default_timeout_second`
 
-Default:3600(1 hour)
+Default: 3600 (1 hour)
 
 IsMutable:true
 
@@ -1416,7 +1416,7 @@ Default non-streaming mini load timeout
 
 #### `broker_load_default_timeout_second`
 
-Default:14400(4 hour)
+Default: 14400 (4 hour)
 
 IsMutable:true
 
@@ -1426,7 +1426,7 @@ Default broker load timeout
 
 #### `spark_load_default_timeout_second`
 
-Default:86400  (1 day)
+Default: 86400  (1 day)
 
 IsMutable:true
 
@@ -1436,7 +1436,7 @@ Default spark load timeout
 
 #### `hadoop_load_default_timeout_second`
 
-Default:86400 * 3   (3 day)
+Default: 86400 * 3   (3 day)
 
 IsMutable:true
 
@@ -1530,7 +1530,7 @@ In the case of high concurrent writes, if there is a 
large backlog of jobs and c
 
 #### `streaming_label_keep_max_second`
 
-Default:43200 (12 hour)
+Default: 43200 (12 hour)
 
 IsMutable:true
 
@@ -1540,7 +1540,7 @@ For some high-frequency load work, such as: INSERT, 
STREAMING LOAD, ROUTINE_LOAD
 
 #### `label_clean_interval_second`
 
-Default:1 * 3600  (1 hour)
+Default:1 * 3600  (1 hour)
 
 Load label cleaner will run every *label_clean_interval_second* to clean the 
outdated jobs.
 
@@ -1564,7 +1564,7 @@ Whether it is a configuration item unique to the Master 
FE node: true
 
 Data synchronization job running status check.
 
-Default: 10(s)
+Default: 10 (s)
 
 #### `max_sync_task_threads_num`
 
@@ -1620,7 +1620,7 @@ Number of tablets per export query plan
 
 #### `export_task_default_timeout_second`
 
-Default:2 * 3600   (2 hour)
+Default: 2 * 3600   (2 hour)
 
 IsMutable:true
 
@@ -1654,7 +1654,7 @@ The max size of one sys log and audit log
 
 #### `sys_log_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/log"
+Default: DorisFE.DORIS_HOME_DIR + "/log"
 
 sys_log_dir:
 
@@ -1667,7 +1667,7 @@ fe.warn.log  all WARNING and ERROR log of FE process.
 
 Default:INFO
 
-log level:INFO, WARN, ERROR, FATAL
+log level: INFO, WARN, ERROR, FATAL
 
 #### `sys_log_roll_num`
 
@@ -1741,7 +1741,7 @@ Slow query contains all queries which cost exceed 
*qe_slow_log_ms*
 
 #### `qe_slow_log_ms`
 
-Default:5000 (5 seconds)
+Default: 5000 (5 seconds)
 
 If the response time of a query exceed this threshold, it will be recorded in 
audit log as slow_query.
 
@@ -1749,8 +1749,8 @@ If the response time of a query exceed this threshold, it 
will be recorded in au
 
 Default:DAY
 
-DAY:  logsuffix is :yyyyMMdd
-HOUR: logsuffix is :yyyyMMddHH
+DAY:  log suffix is yyyyMMdd
+HOUR: log suffix is yyyyMMddHH
 
 #### `audit_log_delete_age`
 
@@ -1838,7 +1838,7 @@ Set to true so that Doris will automatically use blank 
replicas to fill tablets
 
 #### `min_clone_task_timeout_sec` `And max_clone_task_timeout_sec`
 
-Default:Minimum 3 minutes, maximum two hours
+Default: Minimum 3 minutes, maximum two hours
 
 IsMutable:true
 
@@ -1876,7 +1876,7 @@ IsMutable:true
 
 MasterOnly:true
 
-Valid only if use PartitionRebalancer,
+Valid only if PartitionRebalancer is used.
 
 #### `partition_rebalance_move_expire_after_access`
 
@@ -1938,7 +1938,7 @@ if set to true, TabletScheduler will not do disk balance.
 
 #### `balance_load_score_threshold`
 
-Default:0.1 (10%)
+Default: 0.1 (10%)
 
 IsMutable:true
 
@@ -1948,7 +1948,7 @@ the threshold of cluster balance score, if a backend's 
load score is 10% lower t
 
 #### `capacity_used_percent_high_water`
 
-Default:0.75  (75%)
+Default: 0.75  (75%)
 
 IsMutable:true
 
@@ -1958,7 +1958,7 @@ The high water of disk capacity used percent. This is 
used for calculating load
 
 #### `clone_distribution_balance_threshold`
 
-Default:0.2
+Default: 0.2
 
 IsMutable:true
 
@@ -1968,7 +1968,7 @@ Balance threshold of num of replicas in Backends.
 
 #### `clone_capacity_balance_threshold`
 
-Default:0.2
+Default: 0.2
 
 IsMutable:true
 
@@ -2179,7 +2179,7 @@ MasterOnly:true
 
 #### `catalog_trash_expire_second`
 
-Default:86400L (1 day)
+Default: 86400L (1 day)
 
 IsMutable:true
 
@@ -2212,7 +2212,7 @@ Is it a configuration item unique to the Master FE node: 
true
 
 #### `check_consistency_default_timeout_second`
 
-Default:600 (10 minutes)
+Default: 600 (10 minutes)
 
 IsMutable:true
 
@@ -2294,7 +2294,7 @@ Maximal timeout for delete job, in seconds.
 
 #### `alter_table_timeout_second`
 
-Default:86400 * 30(1 month)
+Default: 86400 * 30 (1 month)
 
 IsMutable:true
 
@@ -2462,9 +2462,9 @@ Default:{
 
 #### `yarn_config_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/lib/yarn-config"
+Default: DorisFE.DORIS_HOME_DIR + "/lib/yarn-config"
 
-Default yarn config file directory ,Each time before running the yarn command, 
we need to check that the  config file exists under this path, and if not, 
create them.
+Default yarn config file directory. Each time before running the yarn command, we need to check that the config file exists under this path, and if not, create it.
 
 #### `yarn_client_path`
 
@@ -2492,7 +2492,7 @@ Default spark home dir
 
 #### `spark_dpp_version`
 
-Default:1.0.0
+Default: 1.0.0
 
 Default spark dpp version
 
@@ -2500,13 +2500,13 @@ Default spark dpp version
 
 #### `tmp_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/temp_dir"
+Default: DorisFE.DORIS_HOME_DIR + "/temp_dir"
 
 temp dir is used to save intermediate results of some process, such as backup 
and restore process.  file in this dir will be cleaned after these process is 
finished.
 
 #### `custom_config_dir`
 
-Default:DorisFE.DORIS_HOME_DIR + "/conf"
+Default: DorisFE.DORIS_HOME_DIR + "/conf"
 
 Custom configuration file directory
 
@@ -2579,7 +2579,7 @@ This threshold is to avoid piling up too many report task 
in FE, which may cause
 
 #### `backup_job_default_timeout_ms`
 
-Default:86400 * 1000  (1 day)
+Default: 86400 * 1000  (1 day)
 
 IsMutable:true
 
@@ -2659,7 +2659,7 @@ IsMutable:true
 
 MasterOnly:false
 
-Whether to push the filter conditions with functions down to MYSQL, when 
execute query of ODBC、JDBC external tables
+Whether to push filter conditions with functions down to MySQL when executing queries on ODBC and JDBC external tables.
 
 #### `jdbc_drivers_dir`
 
diff --git a/versioned_docs/version-2.1/admin-manual/data-admin/backup.md 
b/versioned_docs/version-2.1/admin-manual/data-admin/backup.md
index 91e0f4dfd25..881b23c1c83 100644
--- a/versioned_docs/version-2.1/admin-manual/data-admin/backup.md
+++ b/versioned_docs/version-2.1/admin-manual/data-admin/backup.md
@@ -160,7 +160,7 @@ ALTER TABLE tbl1 SET ("dynamic_partition.enable"="true")
    1 row in set (0.15 sec)
    ```
 
-For the detailed usage of BACKUP, please refer to 
[here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md).
+For the detailed usage of BACKUP, please refer to 
[here](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/BACKUP.md).
 
 ## Best Practices
 
@@ -192,7 +192,7 @@ It is recommended to import the new and old clusters in 
parallel for a period of
 
    1. CREATE REPOSITORY
 
-      Create a remote repository path for backup or restore. This command 
needs to use the Broker process to access the remote storage. Different brokers 
need to provide different parameters. For details, please refer to [Broker 
documentation](../../advanced/broker.md), or you can directly back up to 
support through the S3 protocol For the remote storage of AWS S3 protocol, or 
directly back up to HDFS, please refer to [Create Remote Warehouse 
Documentation](../../sql-manual/sql-reference [...]
+      Create a remote repository path for backup or restore. This command 
needs to use the Broker process to access the remote storage. Different brokers 
need to provide different parameters. For details, please refer to [Broker 
documentation](../../data-operate/import/broker-load-manual), or you can 
directly back up to support through the S3 protocol For the remote storage of 
AWS S3 protocol, or directly back up to HDFS, please refer to [Create Remote 
Warehouse Documentation](../../sql- [...]
 
    2. BACKUP
 
@@ -247,4 +247,4 @@ It is recommended to import the new and old clusters in 
parallel for a period of
 
 ## More Help
 
- For more detailed syntax and best practices used by BACKUP, please refer to 
the 
[BACKUP](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/BACKUP.md)
 command manual, You can also type `HELP BACKUP` on the MySql client command 
line for more help.
+ For more detailed syntax and best practices used by BACKUP, please refer to 
the 
[BACKUP](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/BACKUP.md)
 command manual, You can also type `HELP BACKUP` on the MySql client command 
line for more help.
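A hedged sketch tying the repository and backup steps together; the repository name, bucket, and credentials are placeholders, and the S3 property names may differ between Doris versions.

```sql
-- Create an S3-backed repository (placeholder endpoint and credentials),
-- then back up a single table into it under a snapshot label.
CREATE REPOSITORY `example_repo`
WITH S3
ON LOCATION "s3://example-bucket/doris_backup"
PROPERTIES
(
    "AWS_ENDPOINT" = "http://s3.example.com",
    "AWS_ACCESS_KEY" = "your_access_key",
    "AWS_SECRET_KEY" = "your_secret_key",
    "AWS_REGION" = "example-region"
);

BACKUP SNAPSHOT example_db.snapshot_20240520
TO example_repo
ON (tbl1);
```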
diff --git a/versioned_docs/version-2.1/admin-manual/data-admin/restore.md 
b/versioned_docs/version-2.1/admin-manual/data-admin/restore.md
index f47a2ebb256..779a8a26f83 100644
--- a/versioned_docs/version-2.1/admin-manual/data-admin/restore.md
+++ b/versioned_docs/version-2.1/admin-manual/data-admin/restore.md
@@ -126,7 +126,7 @@ The restore operation needs to specify an existing backup 
in the remote warehous
    1 row in set (0.01 sec)
    ```
 
-For detailed usage of RESTORE, please refer to 
[here](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE.md).
+For detailed usage of RESTORE, please refer to 
[here](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/RESTORE.md).
 
 ## Related Commands
 
@@ -134,7 +134,7 @@ The commands related to the backup and restore function are 
as follows. For the
 
 1. CREATE REPOSITORY
 
-   Create a remote repository path for backup or restore. This command needs 
to use the Broker process to access the remote storage. Different brokers need 
to provide different parameters. For details, please refer to [Broker 
documentation](../../data-operate/import/broker-load-manual), or you can 
directly back up to support through the S3 protocol For the remote storage of 
AWS S3 protocol, directly back up to HDFS, please refer to [Create Remote 
Warehouse Documentation](../../sql-manual [...]
+   Create a remote repository path for backup or restore. This command needs 
to use the Broker process to access the remote storage. Different brokers need 
to provide different parameters. For details, please refer to [Broker 
documentation](../../data-operate/import/broker-load-manual), or you can 
directly back up to support through the S3 protocol For the remote storage of 
AWS S3 protocol, directly back up to HDFS, please refer to [Create Remote 
Warehouse Documentation](../../sql-manual [...]
 
 2. RESTORE
 
@@ -182,12 +182,12 @@ The commands related to the backup and restore function 
are as follows. For the
 
 1. Restore Report An Error:[20181: invalid md5 of downloaded file: 
/data/doris.HDD/snapshot/20220607095111.862.86400/19962/668322732/19962.hdr, 
expected: f05b63cca5533ea0466f62a9897289b5, get: 
d41d8cd98f00b204e9800998ecf8427e]
 
-   If the number of copies of the table backed up and restored is 
inconsistent, you need to specify the number of copies when executing the 
restore command. For specific commands, please refer to 
[RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE)
 command manual
+   If the number of copies of the table backed up and restored is 
inconsistent, you need to specify the number of copies when executing the 
restore command. For specific commands, please refer to 
[RESTORE](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/RESTORE)
 command manual
 
 2. Restore Report An Error:[COMMON_ERROR, msg: Could not set meta version to 
97 since it is lower than minimum required version 100]
 
-   Backup and restore are not caused by the same version, use the specified 
meta_version to read the metadata of the previous backup. Note that this 
parameter is used as a temporary solution and is only used to restore the data 
backed up by the old version of Doris. The latest version of the backup data 
already contains the meta version, so there is no need to specify it. For the 
specific solution to the above error, specify meta_version = 100. For specific 
commands, please refer to [RES [...]
+   Backup and restore are not caused by the same version, use the specified 
meta_version to read the metadata of the previous backup. Note that this 
parameter is used as a temporary solution and is only used to restore the data 
backed up by the old version of Doris. The latest version of the backup data 
already contains the meta version, so there is no need to specify it. For the 
specific solution to the above error, specify meta_version = 100. For specific 
commands, please refer to [RES [...]
 
 ## More Help
 
-For more detailed syntax and best practices used by RESTORE, please refer to 
the 
[RESTORE](../../sql-manual/sql-reference/Data-Definition-Statements/Backup-and-Restore/RESTORE)
 command manual, You can also type `HELP RESTORE` on the MySql client command 
line for more help.
+For more detailed syntax and best practices used by RESTORE, please refer to 
the 
[RESTORE](../../sql-manual/sql-statements/Data-Definition-Statements/Backup-and-Restore/RESTORE)
 command manual, You can also type `HELP RESTORE` on the MySql client command 
line for more help.
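A hedged sketch of a restore command that also addresses the two errors above; the snapshot label, timestamp, and table name are placeholders, `replication_num` handles the replica-count mismatch, and `meta_version` is only needed when restoring backups taken by old Doris versions.

```sql
RESTORE SNAPSHOT example_db.snapshot_20240520
FROM example_repo
ON (tbl1)
PROPERTIES
(
    "backup_timestamp" = "2024-05-20-13-49-06",
    "replication_num" = "1",
    "meta_version" = "100"
);
```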
diff --git 
a/versioned_docs/version-2.1/admin-manual/maint-monitor/disk-capacity.md 
b/versioned_docs/version-2.1/admin-manual/maint-monitor/disk-capacity.md
index f211faf1f6a..0703f95f29b 100644
--- a/versioned_docs/version-2.1/admin-manual/maint-monitor/disk-capacity.md
+++ b/versioned_docs/version-2.1/admin-manual/maint-monitor/disk-capacity.md
@@ -162,6 +162,6 @@ When the disk capacity is higher than High Watermark or 
even Flood Stage, many o
 
         ```rm -rf data/0/12345/```
 
-    * Delete tablet metadata refer to [Tablet metadata management 
tool](tablet-meta-tool.md)
+    * Delete tablet metadata refer to [Tablet metadata management 
tool](./tablet-meta-tool.md)
 
         ```./lib/meta_tool --operation=delete_header 
--root_path=/path/to/root_path --tablet_id=12345 --schema_hash= 352781111```
diff --git 
a/versioned_docs/version-2.1/admin-manual/maint-monitor/metadata-operation.md 
b/versioned_docs/version-2.1/admin-manual/maint-monitor/metadata-operation.md
index f98e5c9bd56..f92e2786b12 100644
--- 
a/versioned_docs/version-2.1/admin-manual/maint-monitor/metadata-operation.md
+++ 
b/versioned_docs/version-2.1/admin-manual/maint-monitor/metadata-operation.md
@@ -357,7 +357,7 @@ The third level can display the value information of the 
specified key.
 
 ## Best Practices
 
-The deployment recommendation of FE is described in the Installation and 
[Deployment Document](../../install/standard-deployment.md). Here are some 
supplements.
+The deployment recommendation of FE is described in the Installation and 
[Deployment Document](../../install/cluster-deployment/standard-deployment.md). 
Here are some supplements.
 
 * **If you don't know the operation logic of FE metadata very well, or you 
don't have enough experience in the operation and maintenance of FE metadata, 
we strongly recommend that only one FOLLOWER-type FE be deployed as MASTER in 
practice, and the other FEs are OBSERVER, which can reduce many complex 
operation and maintenance problems.** Don't worry too much about the failure of 
MASTER single point to write metadata. First, if you configure it properly, FE 
as a java process is very diff [...]
 
diff --git 
a/versioned_docs/version-2.1/admin-manual/maint-monitor/tablet-repair-and-balance.md
 
b/versioned_docs/version-2.1/admin-manual/maint-monitor/tablet-repair-and-balance.md
index 62010673141..cdcb8380d3f 100644
--- 
a/versioned_docs/version-2.1/admin-manual/maint-monitor/tablet-repair-and-balance.md
+++ 
b/versioned_docs/version-2.1/admin-manual/maint-monitor/tablet-repair-and-balance.md
@@ -28,7 +28,7 @@ under the License.
 
 Beginning with version 0.9.0, Doris introduced an optimized replica management 
strategy and supported a richer replica status viewing tool. This document 
focuses on Doris data replica balancing, repair scheduling strategies, and 
replica management operations and maintenance methods. Help users to more 
easily master and manage the replica status in the cluster.
 
-> Repairing and balancing copies of tables with Colocation attributes can be 
referred to 
[HERE](../../query-acceleration/join-optimization/colocation-join.md)
+> Repairing and balancing copies of tables with Colocation attributes can be 
referred to [HERE](../../query/join-optimization/colocation-join.md)
 
 ## Noun Interpretation
 
diff --git 
a/versioned_docs/version-2.1/admin-manual/memory-management/be-oom-analysis.md 
b/versioned_docs/version-2.1/admin-manual/memory-management/be-oom-analysis.md
index 165b60259f9..a8ba95b5011 100644
--- 
a/versioned_docs/version-2.1/admin-manual/memory-management/be-oom-analysis.md
+++ 
b/versioned_docs/version-2.1/admin-manual/memory-management/be-oom-analysis.md
@@ -24,14 +24,12 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# BE OOM Analysis
 
-<version since="1.2.0">
 
 Ideally, as described in [Memory Limit Exceeded Analysis](./memory-limit-exceeded-analysis.md), we regularly detect the remaining available memory of the operating system and respond in time when memory is insufficient, such as by triggering memory GC to release caches or cancelling queries that exceed their memory limit. However, because refreshing process memory statistics and memory GC both have a certain lag, and it is difficult to catch every large memory allocation, there is still a risk of OOM.
 
 ## Solution
-Refer to [BE Configuration Items](../../../admin-manual/config/be-config.md) 
to reduce `mem_limit` and increase `max_sys_mem_available_low_water_mark_bytes` 
in `be.conf`.
+Refer to [BE Configuration Items](../../admin-manual/config/be-config.md) to 
reduce `mem_limit` and increase `max_sys_mem_available_low_water_mark_bytes` in 
`be.conf`.
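
A minimal `be.conf` sketch of this tuning; the two parameter names come from the text above, but the values are illustrative assumptions only and should be sized against the node's physical memory:

```
# be.conf -- illustrative values, not recommendations
# lower the BE process memory limit
mem_limit = 70%
# reserve more memory for the OS before memory GC / query cancellation kicks in (bytes)
max_sys_mem_available_low_water_mark_bytes = 6442450944
```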
 
 ## Memory analysis
 If you want to further understand the memory usage location of the BE process 
before OOM and reduce the memory usage of the process, you can refer to the 
following steps to analyze.
@@ -75,10 +73,9 @@ Memory Tracker Summary:
 
 6. `type=load` imports a lot of memory.
 
-7. When the `type=global` memory is used for a long time, continue to check 
the `type=global` detailed statistics in the second half of the `Memory Tracker 
Summary` log. When DataPageCache, IndexPageCache, SegmentCache, ChunkAllocator, 
LastSuccessChannelCache, etc. use a lot of memory, refer to [BE Configuration 
Item](../../../admin-manual/config/be-config.md) to consider modifying the size 
of the cache; when Orphan memory usage is too large, Continue the analysis as 
follows.
+7. When the `type=global` memory stays high for a long time, check the detailed `type=global` statistics in the second half of the `Memory Tracker Summary` log. If DataPageCache, IndexPageCache, SegmentCache, ChunkAllocator, LastSuccessChannelCache, etc. use a lot of memory, refer to [BE Configuration Item](../../admin-manual/config/be-config.md) and consider adjusting the corresponding cache sizes; if Orphan memory usage is too large, continue the analysis as follows.
  - If the sum of the tracker statistics under `Parent Label=Orphan` accounts for only a small part of the Orphan memory, a large amount of memory currently has no accurate statistics, such as the memory used by brpc. In this case, consider using the heap profile ([Memory Tracker](https://doris.apache.org/community/developer-guide/debug-tool)) to further locate where the memory is used.
  - If the tracker statistics under `Parent Label=Orphan` account for most of Orphan's memory: when `Label=TabletManager` uses a lot of memory, check the number of tablets in the cluster and, if there are too many, drop tables or data that are no longer used; when `Label=StorageEngine` uses too much memory, check the number of segment files in the cluster and consider manually triggering compaction if there are too many segment files;
 
 8. If `be/log/be.INFO` does not print the `Memory Tracker Summary` log before OOM, BE did not detect the memory limit in time. Check Grafana memory monitoring to confirm the BE memory growth trend before the OOM. If the OOM is reproducible, consider adding `memory_debug=true` to `be.conf`; after restarting the cluster, memory statistics will be printed every second. Then look at the last `Memory Tracker Summary` log before the OOM and continue with step 3 for analysis.
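
To find the last `Memory Tracker Summary` printed before the OOM, a plain log search is usually enough (the log path is the one mentioned above; adjust it to your deployment):

```shell
# locate the last Memory Tracker Summary entry written before the OOM
grep -n "Memory Tracker Summary" be/log/be.INFO | tail -n 1
# then open be/log/be.INFO around that line number to read the full summary
```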
 
-</version>
diff --git 
a/versioned_docs/version-2.1/admin-manual/query-admin/sql-interception.md 
b/versioned_docs/version-2.1/admin-manual/query-admin/sql-interception.md
index 0ce33e744de..62e5959af56 100644
--- a/versioned_docs/version-2.1/admin-manual/query-admin/sql-interception.md
+++ b/versioned_docs/version-2.1/admin-manual/query-admin/sql-interception.md
@@ -37,7 +37,7 @@ Support SQL block rule by user level:
 ## Rule
 
 SQL block rule CRUD
-- create SQL block rule,For more creation syntax see[CREATE SQL BLOCK 
RULE](../../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.md)
+- Create a SQL block rule. For more creation syntax, see [CREATE SQL BLOCK RULE](../../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-SQL-BLOCK-RULE.md)
    - sql: Regex pattern; special characters need to be escaped; "NULL" by default
    - sqlHash: SQL hash value, used for exact matching; we print it in fe.audit.log; only one of sql and sqlHash can be set; "NULL" by default
    - partition_num: Max number of partitions that a scan node will scan, 0L by default
@@ -70,12 +70,12 @@ ERROR 1064 (HY000): errCode = 2, detailMessage = sql match 
regex sql block rule:
 CREATE SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = "30", 
"cardinality"="10000000000","global"="false","enable"="true")
 ```
 
-- show configured SQL block rules, or show all rules if you do not specify a 
rule name,Please see the specific grammar [SHOW SQL BLOCK 
RULE](../../sql-manual/sql-reference/Show-Statements/SHOW-SQL-BLOCK-RULE.md)
+- Show the configured SQL block rules, or all rules if no rule name is specified. For the specific syntax, see [SHOW SQL BLOCK RULE](../../sql-manual/sql-statements/Show-Statements/SHOW-SQL-BLOCK-RULE.md)
 
 ```sql
 SHOW SQL_BLOCK_RULE [FOR RULE_NAME]
 ```
-- alter SQL block rule, Allows changes 
sql/sqlHash/global/enable/partition_num/tablet_num/cardinality anyone,Please 
see the specific grammar[ALTER SQL BLOCK  
RULE](../../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.md)
+- Alter a SQL block rule. Any of sql/sqlHash/global/enable/partition_num/tablet_num/cardinality can be changed. For the specific syntax, see [ALTER SQL BLOCK RULE](../../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-SQL-BLOCK-RULE.md)
    - sql and sqlHash cannot both be set. This means that if a rule sets sql or sqlHash, the other property can never be altered.
    - sql/sqlHash and partition_num/tablet_num/cardinality cannot be set together. For example, if partition_num is set in a rule, then sql or sqlHash can never be altered.
 ```sql
@@ -86,7 +86,7 @@ ALTER SQL_BLOCK_RULE test_rule PROPERTIES("sql"="select \\* 
from test_table","en
 ALTER SQL_BLOCK_RULE test_rule2 PROPERTIES("partition_num" = 
"10","tablet_num"="300","enable"="true")
 ```
 
-- drop SQL block rule, Support multiple rules, separated by `,`,Please see the 
specific grammar[DROP SQL BLOCK 
RULE](../../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.md)
+- Drop SQL block rules. Multiple rules, separated by `,`, are supported. For the specific syntax, see [DROP SQL BLOCK RULE](../../sql-manual/sql-statements/Data-Definition-Statements/Drop/DROP-SQL-BLOCK-RULE.md)
 ```sql
 DROP SQL_BLOCK_RULE test_rule1,test_rule2
 ```
diff --git 
a/versioned_docs/version-2.1/admin-manual/resource-admin/compute-node.md 
b/versioned_docs/version-2.1/admin-manual/resource-admin/compute-node.md
index 6d6f18a3d85..6b1e8d7cd01 100644
--- a/versioned_docs/version-2.1/admin-manual/resource-admin/compute-node.md
+++ b/versioned_docs/version-2.1/admin-manual/resource-admin/compute-node.md
@@ -133,7 +133,7 @@ Moreover, as compute nodes are stateless Backend (BE) 
nodes, they can be easily
 
 3. Can compute nodes and mix nodes configure a file cache directory?
 
-    [File cache](./filecache.md) accelerates subsequent queries for the same 
data by caching data files from recently accessed remote storage systems (HDFS 
or object storage).
+    [File cache](../../lakehouse/filecache) accelerates subsequent queries for 
the same data by caching data files from recently accessed remote storage 
systems (HDFS or object storage).
     
     Both compute and mix nodes can set up a file cache directory, which needs 
to be created in advance.
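
    A minimal `be.conf` sketch for enabling the file cache on a compute (or mix) node; the path and sizes below are illustrative assumptions, the directory must exist before the BE starts, and the file cache doc linked above lists the full set of options:

    ```
    # be.conf -- illustrative file cache settings
    enable_file_cache = true
    file_cache_path = [{"path": "/mnt/disk1/doris_file_cache", "total_size": 107374182400, "query_limit": 10737418240}]
    ```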
     
diff --git 
a/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md 
b/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md
index 77c79dc82ba..d958b6e78df 100644
--- a/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md
+++ b/versioned_docs/version-2.1/admin-manual/resource-admin/workload-group.md
@@ -24,9 +24,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# WORKLOAD GROUP
 
-<version since="dev"></version>
 
 The workload group can limit the use of compute and memory resources on a 
single be node for tasks within the group. Currently, query binding to workload 
groups is supported.
 
diff --git a/versioned_docs/version-2.1/admin-manual/small-file-mgr.md 
b/versioned_docs/version-2.1/admin-manual/small-file-mgr.md
index e4500660c81..d9be766c601 100644
--- a/versioned_docs/version-2.1/admin-manual/small-file-mgr.md
+++ b/versioned_docs/version-2.1/admin-manual/small-file-mgr.md
@@ -46,7 +46,7 @@ File management has three main commands: `CREATE FILE`, `SHOW 
FILE` and `DROP FI
 
 ### CREATE FILE
 
-This statement is used to create and upload a file to the Doris cluster. For 
details, see [CREATE 
FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.md).
+This statement is used to create and upload a file to the Doris cluster. For 
details, see [CREATE 
FILE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FILE.md).
 
 Examples:
 
@@ -74,7 +74,7 @@ Examples:
 
 ### SHOW FILE
 
-This statement can view the files that have been created successfully. For 
details, see [SHOW 
FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md).
+This statement lists the files that have been created successfully. For details, see [SHOW FILE](../sql-manual/sql-statements/Show-Statements/SHOW-FILE.md).
 
 Examples:
 
@@ -86,7 +86,7 @@ Examples:
 
 ### DROP FILE
 
-This statement can view and delete an already created file. For specific 
operations, see [DROP 
FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md).
+This statement deletes an already created file. For specific operations, see [DROP FILE](../sql-manual/sql-statements/Data-Definition-Statements/Drop/DROP-FILE.md).
 
 Examples:
 
@@ -128,4 +128,4 @@ Because the file meta-information and content are stored in 
FE memory. So by def
 
 ## More Help
 
-For more detailed syntax and best practices used by the file manager, see 
[CREATE 
FILE](../sql-manual/sql-reference/Data-Definition-Statements/Create/CREATE-FILE.md),
 [DROP 
FILE](../sql-manual/sql-reference/Data-Definition-Statements/Drop/DROP-FILE.md) 
and [SHOW FILE](../sql-manual/sql-reference/Show-Statements/SHOW-FILE.md) 
command manual, you can also enter `HELP CREATE FILE`, `HELP DROP FILE` and 
`HELP SHOW FILE` in the MySql client command line to get more help information.
+For more detailed syntax and best practices used by the file manager, see the [CREATE FILE](../sql-manual/sql-statements/Data-Definition-Statements/Create/CREATE-FILE.md), [DROP FILE](../sql-manual/sql-statements/Data-Definition-Statements/Drop/DROP-FILE.md) and [SHOW FILE](../sql-manual/sql-statements/Show-Statements/SHOW-FILE.md) command manuals. You can also enter `HELP CREATE FILE`, `HELP DROP FILE` and `HELP SHOW FILE` in the MySQL client command line to get more help information.
diff --git a/versioned_docs/version-2.1/benchmark/tpcds.md 
b/versioned_docs/version-2.1/benchmark/tpcds.md
index 4e13b162d72..a90df892d90 100644
--- a/versioned_docs/version-2.1/benchmark/tpcds.md
+++ b/versioned_docs/version-2.1/benchmark/tpcds.md
@@ -52,7 +52,7 @@ On 99 queries on the TPC-DS standard test data set, we 
conducted a comparison te
 - Doris Deployed 3BEs and 1FE
 - Kernel Version: Linux version 5.4.0-96-generic (buildd@lgw01-amd64-051)
 - OS version: Ubuntu 20.04 LTS (Focal Fossa)
-- Doris software version: Apache Doris 2.1.1-rc03、 Apache Doris 2.0.6.
+- Doris software version: Apache Doris 2.1.1-rc03, Apache Doris 2.0.6.
 - JDK: openjdk version "1.8.0_131"
 
 ## 3. Test Data Volume
@@ -88,7 +88,7 @@ The TPC-DS 1000G data generated by the simulation of the 
entire test are respect
 
 ## 4. Test SQL
 
-TPC-DS 99 test query statements : 
[TPC-DS-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpcds-tools/queries/sf1000)
+TPC-DS 99 test query statements : 
[TPC-DS-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpcds-tools/queries/sf1000)
 
 ## 5. Test Results
 
@@ -199,7 +199,7 @@ Here we use Apache Doris 2.1.1-rc03 and Apache Doris 2.0.6 
for comparative testi
 
 ## 6. Environmental Preparation
 
-Please refer to the [official document](../install/standard-deployment.md) to 
install and deploy Doris to obtain a normal running Doris cluster (at least 1 
FE 1 BE, 1 FE 3 BE is recommended).
+Please refer to the [official document](../install/cluster-deployment/standard-deployment.md) to install and deploy Doris and obtain a normally running Doris cluster (at least 1 FE and 1 BE; 1 FE and 3 BEs are recommended).
 
 ## 7. Data Preparation
 
diff --git a/versioned_docs/version-2.1/benchmark/tpch.md 
b/versioned_docs/version-2.1/benchmark/tpch.md
index 7c05627fee1..307f2917866 100644
--- a/versioned_docs/version-2.1/benchmark/tpch.md
+++ b/versioned_docs/version-2.1/benchmark/tpch.md
@@ -49,7 +49,7 @@ On 22 queries on the TPC-H standard test data set, we 
conducted a comparison tes
 - Doris Deployed 3BEs and 1FE
 - Kernel Version: Linux version 5.4.0-96-generic (buildd@lgw01-amd64-051)
 - OS version: Ubuntu 20.04 LTS (Focal Fossa)
-- Doris software version: Apache Doris 2.1.1-rc03、 Apache Doris 2.0.6.
+- Doris software version: Apache Doris 2.1.1-rc03, Apache Doris 2.0.6.
 - JDK: openjdk version "1.8.0_131"
 
 ## 3. Test Data Volume
@@ -69,7 +69,7 @@ The TPCH 1000G data generated by the simulation of the entire 
test are respectiv
 
 ## 4. Test SQL
 
-TPCH 22 test query statements : 
[TPCH-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpch-tools/queries/sf1000)
+TPCH 22 test query statements : 
[TPCH-Query-SQL](https://github.com/apache/doris/tree/master/tools/tpch-tools/queries/sf1000)
 
 
 ## 5. Test Results
@@ -105,7 +105,7 @@ Here we use Apache Doris 2.1.1-rc03 and Apache Doris 2.0.6 
for comparative testi
 
 ## 6. Environmental Preparation
 
-Please refer to the [official document](../install/standard-deployment.md) to 
install and deploy Doris to obtain a normal running Doris cluster (at least 1 
FE 1 BE, 1 FE 3 BE is recommended).
+Please refer to the [official document](../install/cluster-deployment/standard-deployment.md) to install and deploy Doris and obtain a normally running Doris cluster (at least 1 FE and 1 BE; 1 FE and 3 BEs are recommended).
 
 ## 7. Data Preparation
 
diff --git a/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md 
b/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md
index 0f15e7d7f42..39cae5b9d82 100644
--- a/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md
+++ b/versioned_docs/version-2.1/ecosystem/dbt-doris-adapter.md
@@ -29,7 +29,7 @@ under the License.
 [DBT(Data Build Tool)](https://docs.getdbt.com/docs/introduction) is a 
component that focuses on doing T (Transform) in ELT (extraction, loading, 
transformation) - the "transformation data" link
 The `dbt-doris` adapter is developed based on `dbt-core` 1.5.0 and relies on 
the `mysql-connector-python` driver to convert data to doris.
 
-git:https://github.com/apache/doris/tree/master/extension/dbt-doris
+git: https://github.com/apache/doris/tree/master/extension/dbt-doris
 
 ## version
 
@@ -41,15 +41,15 @@ 
git:https://github.com/apache/doris/tree/master/extension/dbt-doris
 ## dbt-doris adapter Instructions
 
 ### dbt-doris adapter install
-use pip install:
+Use pip to install:
 ```shell
 pip install dbt-doris
 ```
-check version:
+Check the version:
 ```shell
 dbt --version
 ```
-if command not found: dbt:
+If you get `command not found: dbt`:
 ```shell
 ln -s /usr/local/python3/bin/dbt /usr/bin/dbt
 ```
@@ -63,7 +63,7 @@ Users need to prepare the following information to init dbt 
project
 | name     |  default | meaning                                                
                                                                                
   |  
 
|----------|------|-------------------------------------------------------------------------------------------------------------------------------------------|
 | project  |      | project name                                               
                                                                               
| 
-| database |      | Enter the corresponding number to select the adapter 
(选择doris)                                                                       
     | 
+| database |      | Enter the corresponding number to select the adapter (choose doris)                                                                      |
 | host     |      | doris host                                                 
                                                                               
| 
 | port     | 9030 | doris MySQL Protocol Port                                  
                                                                               |
 | schema   |      | In dbt-doris, it is equivalent to database, Database name  
                                                                               |
@@ -114,7 +114,7 @@ When using the `table` materialization mode, your model is 
rebuilt as a table at
 For the table materialization of dbt, dbt-doris uses the following steps to ensure the atomicity of data changes:
 1. First create a temporary table: `create table this_table_temp as {{ model sql}}`.
 2. If `this_table` does not exist (that is, it is being created for the first time), execute `rename` to turn the temporary table into the final table.
-3. if already exists, then `alter table this_table REPLACE WITH TABLE 
this_table_temp PROPERTIES('swap' = 'False')`,This operation can exchange the 
table name and delete the `this_table_temp` temporary 
table,[this](../sql-manual/sql-reference/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md)
 guarantees the atomicity of this operation through the transaction mechanism 
of the Doris.
+3. If it already exists, execute `alter table this_table REPLACE WITH TABLE this_table_temp PROPERTIES('swap' = 'False')`. This operation swaps the table names and deletes the `this_table_temp` temporary table; [this](../sql-manual/sql-statements/Data-Definition-Statements/Alter/ALTER-TABLE-REPLACE.md) guarantees the atomicity of the operation through the transaction mechanism of Doris.
 
 ``` 
 Advantages: table query speed will be faster than view.
diff --git a/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md 
b/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
index c1117c2d1a8..c013e0f40a8 100644
--- a/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
+++ b/versioned_docs/version-2.1/ecosystem/flink-doris-connector.md
@@ -738,7 +738,7 @@ WITH (
    'sink.label-prefix' = 'doris_label',
    'sink.properties.columns' = 'dt,page,user_id,user_id=to_bitmap(user_id)'
 )
-````
+```
 4. **errCode = 2, detailMessage = Label [label_0_1] has already been used, 
relate to txn [19650]**
 
 In the Exactly-Once scenario, the Flink Job must be restarted from the latest 
Checkpoint/Savepoint, otherwise the above error will be reported.
@@ -751,13 +751,13 @@ At this time, it cannot be started from the checkpoint, 
and the expiration time
 
 6. **errCode = 2, detailMessage = current running txns on db 10006 is 100, 
larger than limit 100**
 
-This is because the concurrent import of the same library exceeds 100, which 
can be solved by adjusting the parameter `max_running_txn_num_per_db` of 
fe.conf. For details, please refer to 
[max_running_txn_num_per_db](https://doris.apache.org/zh-CN/docs/dev/admin-manual/config/fe-config/#max_running_txn_num_per_db)
+This is because the number of concurrent imports to the same database exceeds 100, which can be solved by adjusting the fe.conf parameter `max_running_txn_num_per_db`. For details, please refer to [max_running_txn_num_per_db](../admin-manual/config/fe-config#max_running_txn_num_per_db)
 
 At the same time, if a task frequently modifies the label and restarts, it may 
also cause this error. In the 2pc scenario (Duplicate/Aggregate model), the 
label of each task needs to be unique, and when restarting from the checkpoint, 
the Flink task will actively abort the txn that has been successfully 
precommitted before and has not been committed. Frequently modifying the label 
and restarting will cause a large number of txn that have successfully 
precommitted to fail to be aborted, o [...]
 
 7. **How to ensure the order of a batch of data when Flink writes to the Uniq 
model?**
 
-You can add sequence column configuration to ensure that, for details, please 
refer to 
[sequence](https://doris.apache.org/zh-CN/docs/dev/data-operate/update-delete/sequence-column-manual)
+You can add a sequence column configuration to ensure the write order; for details, please refer to [sequence](../data-operate/update/update-of-unique-model). An illustrative configuration is shown below.
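
A minimal sketch, assuming the target Doris table is a Uniq-model table whose sequence column is bound to `update_time`; the column names, FE address and the `function_column.sequence_col` Stream Load property are assumptions for illustration, passed through via the `sink.properties.*` mechanism shown earlier:

```sql
CREATE TABLE doris_sink (
    id INT,
    name STRING,
    update_time TIMESTAMP(3)
) WITH (
    'connector' = 'doris',
    'fenodes' = 'FE_IP:8030',
    'table.identifier' = 'db.uniq_table',
    'username' = 'root',
    'password' = '',
    'sink.label-prefix' = 'doris_label',
    -- pass the Stream Load sequence column property through the sink
    'sink.properties.function_column.sequence_col' = 'update_time'
)
```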
 
 8. **The Flink task does not report an error, but the data cannot be synchronized?**
 
diff --git a/versioned_docs/version-2.1/ecosystem/hive-bitmap-udf.md 
b/versioned_docs/version-2.1/ecosystem/hive-bitmap-udf.md
index 16b9d569e65..3fce7a1d78c 100644
--- a/versioned_docs/version-2.1/ecosystem/hive-bitmap-udf.md
+++ b/versioned_docs/version-2.1/ecosystem/hive-bitmap-udf.md
@@ -53,10 +53,10 @@ CREATE TABLE IF NOT EXISTS `hive_table`(
 ) comment  'comment'
 ```
 
-### Hive Bitmap UDF Usage:
+### Hive Bitmap UDF Usage:
 
    Hive Bitmap UDFs are used in Hive/Spark. First, you need to compile fe to get hive-udf-jar-with-dependencies.jar.
-   Compilation preparation:If you have compiled the ldb source code, you can 
directly compile fe,If you have compiled the ldb source code, you can compile 
it directly. If you have not compiled the ldb source code, you need to manually 
install thrift,
+   Compilation preparation: if you have already compiled the ldb source code, you can compile fe directly; if you have not compiled the ldb source code, you need to install thrift manually first.
    Reference:[Setting Up dev env for 
FE](https://doris.apache.org/community/developer-guide/fe-idea-dev/).
 
 ```sql
@@ -160,6 +160,6 @@ PROPERTIES (
 insert into doris_bitmap_table select k1, k2, k3, bitmap_from_base64(uuid) 
from hive.test.hive_bitmap_table;
 ```
 
-### Method 2:Spark Load
+### Method 2: Spark Load
 
- see details: [Spark 
Load](../data-operate/import/import-way/spark-load-manual.md) -> Basic 
operation -> Create load(Example 3: when the upstream data source is hive 
binary type table)
+ See details: [Spark Load](https://doris.apache.org/zh-CN/docs/1.2/data-operate/import/import-way/spark-load-manual) -> Basic operation -> Create load (Example 3: when the upstream data source is a hive binary type table)
diff --git a/versioned_docs/version-2.1/ecosystem/hive-hll-udf.md 
b/versioned_docs/version-2.1/ecosystem/hive-hll-udf.md
index 058f0b224db..89584eff00c 100644
--- a/versioned_docs/version-2.1/ecosystem/hive-hll-udf.md
+++ b/versioned_docs/version-2.1/ecosystem/hive-hll-udf.md
@@ -26,7 +26,7 @@ under the License.
 
 # Hive HLL UDF
 
- The Hive HLL UDF provides a set of UDFs for generating HLL operations in Hive 
tables, which are identical to Doris HLL. Hive HLL can be imported into Doris 
through Spark HLL Load. For more information about HLL, please refer to Using 
HLL for Approximate Deduplication.:[Approximate Deduplication Using 
HLL](../query/duplicate/using-hll.md)
+ The Hive HLL UDF provides a set of UDFs for generating HLL operations in Hive tables, which are identical to Doris HLL. Hive HLL can be imported into Doris through Spark HLL Load. For more information about HLL, please refer to [Approximate Deduplication Using HLL](../query/duplicate/using-hll.md).
 
  Function Introduction:
   1. UDAF
@@ -39,7 +39,7 @@ under the License.
 
     · hll_cardinality: Returns the number of distinct elements added to the 
HLL, similar to the bitmap_count function
 
- Main Purpose:
+ Main Purpose:
   1. Reduce data import time to Doris by eliminating the need for dictionary 
construction and HLL pre-aggregation
   2. Save Hive storage by compressing data using HLL, significantly reducing 
storage costs compared to Bitmap statistics
   3. Provide flexible HLL operations in Hive, including union and cardinality 
statistics, and allow the resulting HLL to be directly imported into Doris
@@ -249,4 +249,4 @@ select k3, 
hll_cardinality(hll_union(hll_from_base64(uuid))) from hive.hive_test
 
 ### Method 2: Spark Load
 
- See details: [Spark 
Load](../data-operate/import/import-way/spark-load-manual.md) -> Basic 
operation -> Creating Load (Example 3: when the upstream data source is hive 
binary type table)
+ See details: [Spark 
Load](https://doris.apache.org/zh-CN/docs/1.2/data-operate/import/import-way/spark-load-manual)
 -> Basic operation -> Creating Load (Example 3: when the upstream data source 
is hive binary type table)
diff --git a/versioned_docs/version-2.1/faq/install-faq.md 
b/versioned_docs/version-2.1/faq/install-faq.md
index b0ff867bd93..b1c401a0361 100644
--- a/versioned_docs/version-2.1/faq/install-faq.md
+++ b/versioned_docs/version-2.1/faq/install-faq.md
@@ -267,7 +267,7 @@ This is a bug in bdbje that has not yet been resolved. In 
this case, you can onl
 
 ### Q12. Doris compile and install JDK version incompatibility problem
 
-When compiling Doris using Docker, start FE after compiling and installing, 
and the exception message `java.lang.Suchmethoderror: java.nio.ByteBuffer.limit 
(I)Ljava/nio/ByteBuffer;` appears, this is because the default in Docker It is 
JDK 11. If your installation environment is using JDK8, you need to switch the 
JDK environment to JDK8 in Docker. For the specific switching method, please 
refer to [Compile 
Documentation](../install/source-install/compilation-general.md)
+When compiling Doris using Docker, if the exception message `java.lang.NoSuchMethodError: java.nio.ByteBuffer.limit(I)Ljava/nio/ByteBuffer;` appears when FE is started after compiling and installing, this is because the default JDK in the Docker image is JDK 11. If your installation environment uses JDK 8, you need to switch the JDK environment to JDK 8 inside Docker. For the specific switching method, please refer to the [Compile Documentation](../install/source-install/compilation-with-docker).
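
One possible way to switch to JDK 8 inside the compile container, assuming a JDK 8 is already installed there (the installation path below is an assumption and varies by image):

```shell
# point JAVA_HOME at the JDK 8 installation inside the container (path is illustrative)
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export PATH=$JAVA_HOME/bin:$PATH
java -version   # should now report 1.8.x
```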
 
 ### Q13. Error starting FE or unit test locally Cannot find external parser 
table action_table.dat
 Run the following command
@@ -285,7 +285,7 @@ In doris 1.0 onwards, openssl has been upgraded to 1.1 and 
is built into the dor
 ```
 ERROR 1105 (HY000): errCode = 2, detailMessage = driver connect Error: HY000 
[MySQL][ODBC 8.0(w) Driver]SSL connection error: Failed to set ciphers to use 
(2026)
 ```
-The solution is to use the `Connector/ODBC 8.0.28` version of ODBC Connector 
and select `Linux - Generic` in the operating system, this version of ODBC 
Driver uses openssl version 1.1. Or use a lower version of ODBC connector, e.g. 
[Connector/ODBC 
5.3.14](https://dev.mysql.com/downloads/connector/odbc/5.3.html). For details, 
see the [ODBC exterior documentation](../lakehouse/external-table/odbc.md).
+The solution is to use the `Connector/ODBC 8.0.28` version of the ODBC Connector and select `Linux - Generic` as the operating system; this version of the ODBC Driver uses openssl 1.1. Alternatively, use a lower version of the ODBC connector, e.g. [Connector/ODBC 5.3.14](https://dev.mysql.com/downloads/connector/odbc/5.3.html). For details, see the [ODBC external table documentation](https://doris.apache.org/docs/1.2/lakehouse/external-table/odbc).
 
 You can verify the version of openssl used by MySQL ODBC Driver by
 
diff --git a/versioned_docs/version-2.1/faq/sql-faq.md 
b/versioned_docs/version-2.1/faq/sql-faq.md
index 9e38eced91e..769c773f62f 100644
--- a/versioned_docs/version-2.1/faq/sql-faq.md
+++ b/versioned_docs/version-2.1/faq/sql-faq.md
@@ -65,7 +65,7 @@ For example, the table is defined as k1, v1. A batch of 
imported data is as foll
 
 Then maybe the result of copy 1 is `1, "abc"`, and the result of copy 2 is `1, 
"def"`. As a result, the query results are inconsistent.
 
-To ensure that the data sequence between different replicas is unique, you can 
refer to the [Sequence 
Column](../data-operate/update-delete/sequence-column-manual.md) function.
+To ensure that the data sequence between different replicas is unique, you can 
refer to the [Sequence 
Column](../data-operate/update/update-of-unique-model.md) function.
 
 ### Q5. The problem of querying bitmap/hll type data returns NULL
 
@@ -95,7 +95,7 @@ If the `curl 77: Problem with the SSL CA cert` error appears 
in the be.INFO log.
 2. Copy the certificate to the specified location: `sudo cp /tmp/cacert.pem 
/etc/ssl/certs/ca-certificates.crt`
 3. Restart the BE node.
 
-### Q7. import error:"Message": "[INTERNAL_ERROR]single replica load is 
disabled on BE."
+### Q7. import error:"Message": "[INTERNAL_ERROR]single replica load is 
disabled on BE."
 
 1. Make sure the parameter `enable_single_replica_load` in be.conf is set to true
 2. Restart the BE node.


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

