This is an automated email from the ASF dual-hosted git repository.

dataroaring pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 36efe9b4532 [doc](load) replace verbose Examples sections in SQL manual load pages with links to Data Import guide (#3440)
36efe9b4532 is described below

commit 36efe9b4532e1eae11fcf3d7c21b775c988ccc6a
Author: hui lai <[email protected]>
AuthorDate: Tue Mar 10 16:55:43 2026 +0800

    [doc](load) replace verbose Examples sections in SQL manual load pages with links to Data Import guide (#3440)
    
    … to Data Import guide
    
    ## Versions
    
    - [x] dev
    - [x] 4.x
    - [ ] 3.x
    - [ ] 2.1
    
    ## Languages
    
    - [x] Chinese
    - [x] English
    
    ## Docs Checklist
    
    - [ ] Checked by AI
    - [ ] Test Cases Built
---
 .../load-and-export/BROKER-LOAD.md                 | 294 +-------------------
 .../load-and-export/CREATE-ROUTINE-LOAD.md         | 214 +--------------
 .../load-and-export/MYSQL-LOAD.md                  |  85 +-----
 .../load-and-export/BROKER-LOAD.md                 | 296 +--------------------
 .../load-and-export/CREATE-ROUTINE-LOAD.md         | 216 +--------------
 .../load-and-export/MYSQL-LOAD.md                  |  85 +-----
 .../load-and-export/BROKER-LOAD.md                 | 296 +--------------------
 .../load-and-export/CREATE-ROUTINE-LOAD.md         | 216 +--------------
 .../load-and-export/MYSQL-LOAD.md                  |  85 +-----
 .../load-and-export/BROKER-LOAD.md                 | 294 +-------------------
 .../load-and-export/CREATE-ROUTINE-LOAD.md         | 214 +--------------
 .../load-and-export/MYSQL-LOAD.md                  |  85 +-----
 12 files changed, 12 insertions(+), 2368 deletions(-)

diff --git a/docs/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md b/docs/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
index 4d7bf54e296..0949cd9f219 100644
--- a/docs/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
+++ b/docs/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
@@ -152,296 +152,4 @@ Users executing this SQL command must have at least the following permissions:
 
 ## Examples
 
-1. Import a batch of data from HDFS. The imported file is `file.txt`, separated by commas, and imported into the table `my_table`.
-
-    ```sql
-    LOAD LABEL example_db.label1
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file.txt")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY ","
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-2. Import data from HDFS using wildcards to match two batches of files, `file-10*` and `file-20*`, and import them into the tables `my_table1` and `my_table2` respectively. For `my_table1`, specify importing into partition `p1`, and add 1 to the values of the second and third columns in the source file before importing.
-
-    ```sql
-    LOAD LABEL example_db.label2
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-10*")
-        INTO TABLE `my_table1`
-        PARTITION (p1)
-        COLUMNS TERMINATED BY ","
-        (k1, tmp_k2, tmp_k3)
-        SET (
-            k2 = tmp_k2 + 1,
-            k3 = tmp_k3 + 1
-        ),
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-20*")
-        INTO TABLE `my_table2`
-        COLUMNS TERMINATED BY ","
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-3. Import a batch of data from HDFS. Specify the separator as the default Hive separator `\\x01`, and use the wildcard `*` to specify all files in all directories under the `data` directory. Use simple authentication and configure namenode HA at the same time.
-
-    ```sql
-    LOAD LABEL example_db.label3
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/user/doris/data/*/*")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY "\\x01"
-    )
-    WITH BROKER my_hdfs_broker
-    (
-        "username" = "",
-        "password" = "",
-        "fs.defaultFS" = "hdfs://my_ha",
-        "dfs.nameservices" = "my_ha",
-        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
-        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
-        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
-        "dfs.client.failover.proxy.provider.my_ha" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
-    );
-    ```
-
-4. Import data in Parquet format and specify the `FORMAT` as `parquet`. By default, the format is determined by the file suffix.
-
-    ```sql
-    LOAD LABEL example_db.label4
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file")
-        INTO TABLE `my_table`
-        FORMAT AS "parquet"
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-5. Import data and extract partition fields from the file path. The columns in `my_table` are `k1, k2, k3, city, utc_date`. The directory `hdfs://hdfs_host:hdfs_port/user/doris/data/input/dir/city=beijing` contains the following files:
-    ```text
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-01/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-02/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-03/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-04/0000.csv
-    ```
-    The files only contain three columns of data, `k1, k2, k3`; the two columns `city` and `utc_date` will be extracted from the file path.
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/city=beijing/*/*")
-        INTO TABLE `my_table`
-        FORMAT AS "csv"
-        (k1, k2, k3)
-        COLUMNS FROM PATH AS (city, utc_date)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-6. Filter the data to be imported. Only rows where `k1 = 1` in the original data and `k1 > k2` after conversion will be imported.
-
-    ```sql
-    LOAD LABEL example_db.label6
-    (
-        DATA INFILE("hdfs://host:port/input/file")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-        SET (
-            k2 = k2 + 1
-        )
-        PRECEDING FILTER k1 = 1
-        WHERE k1 > k2
-    )
-    WITH BROKER hdfs
-    (
-        "username"="user",
-        "password"="pass"
-    );
-    ```
-
-7. Import data, extracting the time partition field from the file path. The time contains `%3A` (in an HDFS path, `:` is not allowed, so all `:` characters are replaced by `%3A`).
-
-   ```sql
-   LOAD LABEL example_db.label7
-   (
-       DATA INFILE("hdfs://host:port/user/data/*/test.txt") 
-       INTO TABLE `tbl12`
-       COLUMNS TERMINATED BY ","
-       (k2,k3)
-       COLUMNS FROM PATH AS (data_time)
-       SET (
-           data_time=str_to_date(data_time, '%Y-%m-%d %H%%3A%i%%3A%s')
-       )
-   )
-   WITH BROKER hdfs
-   (
-       "username"="user",
-       "password"="pass"
-   );
-   ```
-
-   The directory contains the following files:
-
-   ```text
-   /user/data/data_time=2020-02-17 00%3A00%3A00/test.txt
-   /user/data/data_time=2020-02-18 00%3A00%3A00/test.txt
-   ```
-
-   The table structure is:
-
-   ```text
-   data_time DATETIME,
-   k2        INT,
-   k3        INT
-   ```
-
-8. Import a batch of data from HDFS, specifying the timeout period and the filtering ratio. Use the broker `my_hdfs_broker` with plain-text authentication. Delete the rows in the original data that match rows in the imported data where `v2 > 100`, and import the other rows normally.
-
-   ```sql
-   LOAD LABEL example_db.label8
-   (
-       MERGE DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       (k1, k2, k3, v2, v1)
-       DELETE ON v2 > 100
-   )
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   PROPERTIES
-   (
-       "timeout" = "3600",
-       "max_filter_ratio" = "0.1"
-   );
-   ```
-
-   Use the `MERGE` method for import. `my_table` must be a table with the Unique Key model. When the value of the `v2` column in the imported data is greater than 100, the row will be considered a deletion row.
-
-   The timeout period for the import task is 3600 seconds, and an error rate of up to 10% is allowed.
-
-9. Specify the `source_sequence` column during import to ensure the replacement order in the `UNIQUE_KEYS` table:
-
-   ```sql
-   LOAD LABEL example_db.label9
-   (
-       DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       COLUMNS TERMINATED BY ","
-       (k1,k2,source_sequence,v1,v2)
-       ORDER BY source_sequence
-   ) 
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   ```
-
-   `my_table` must be a table with the Unique Key model, and a `Sequence Col` must be specified. The data will be ordered according to the values in the `source_sequence` column of the source data.
-
-10. Import a batch of data from HDFS, specifying the file format as `json` and setting `json_root` and `jsonpaths`:
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.city, $.code]"
-        )       
-    )
-    WITH BROKER HDFS (
-        "hadoop.username" = "user",
-        "password" = ""
-    )
-    PROPERTIES
-    (
-        "timeout"="1200",
-        "max_filter_ratio"="0.1"
-    );
-    ```
-
-    `jsonpaths` can be used in conjunction with `column list` and `SET (column_mapping)`:
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        (id, code, city)
-        SET (id = id * 10)
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.code, $.city]"
-        )       
-    )
-    WITH BROKER HDFS (
-        "hadoop.username" = "user",
-        "password" = ""
-    )
-    PROPERTIES
-    (
-        "timeout"="1200",
-        "max_filter_ratio"="0.1"
-    );
-    ```
-
-11. Import data in CSV format from Tencent Cloud COS.
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("cosn://my_bucket/input/file.csv")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
-
-12. Remove double quotes and skip the first 5 rows when importing CSV data.
-
-    ```sql 
-    LOAD LABEL example_db.label12
-    (
-        DATA INFILE("cosn://my_bucket/input/file.csv")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-        PROPERTIES("trim_double_quotes" = "true", "skip_lines" = "5")
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
\ No newline at end of file
+For complete examples covering S3, HDFS, JSON format, Merge mode, path-based partition extraction, and more, refer to [Broker Load](../../../../data-operate/import/import-way/broker-load-manual.md) in the Data Import guide.
\ No newline at end of file
diff --git a/docs/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md b/docs/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
index 3f7d86d1849..24a09dcc8cb 100644
--- a/docs/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
+++ b/docs/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
@@ -383,216 +383,4 @@ Users executing this SQL command must have at least the following privileges:
 
 ## Examples
 
-- Create a Kafka routine load task named test1 for example_tbl in example_db. Specify the column separator, group.id, and client.id, automatically consume all partitions by default, and start subscribing from where data exists (OFFSET_BEGINNING).
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- Create a Kafka routine dynamic multi-table load task named test1 for example_db. Specify the column separator, group.id, and client.id, automatically consume all partitions by default, and start subscribing from where data exists (OFFSET_BEGINNING).
-
-  Assuming we need to import data from Kafka into the test1 and test2 tables in example_db, we create a routine load task named test1 and write the data destined for test1 and test2 to a Kafka topic named `my_topic`. This way, one routine load task can import data from Kafka into two tables.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- Create a Kafka routine load task named test1 for example_tbl in example_db. The import task is in strict mode.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   PRECEDING FILTER k1 = 1,
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- Import data from a Kafka cluster using SSL authentication, and set the client.id parameter. The import task is in non-strict mode, and the timezone is Africa/Abidjan.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "timezone" = "Africa/Abidjan"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.security.protocol" = "ssl",
-       "property.ssl.ca.location" = "FILE:ca.pem",
-       "property.ssl.certificate.location" = "FILE:client.pem",
-       "property.ssl.key.location" = "FILE:client.key",
-       "property.ssl.key.password" = "abcdefg",
-       "property.client.id" = "my_client_id"
-   );
-   ```
-
-- Import JSON format data. By default, the field names in the JSON are used as the column name mapping. Specify importing partitions 0, 1, and 2, all with a starting offset of 0.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_json_label_1 ON table1
-   COLUMNS(category,price,author)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- Import JSON data, extract fields through jsonpaths, and specify the JSON document root node.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(category, author, price, timestamp, dt=from_unixtime(timestamp, '%Y%m%d'))
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json",
-       "jsonpaths" = "[\"$.category\",\"$.author\",\"$.price\",\"$.timestamp\"]",
-       "json_root" = "$.RECORDS",
-       "strip_outer_array" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- Create a Kafka routine load task named test1 for example_tbl in example_db with condition filtering.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   WITH MERGE
-   COLUMNS(k1, k2, k3, v1, v2, v3),
-   WHERE k1 > 100 and k2 like "%doris%",
-   DELETE ON v3 > 100
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- Import data into a Unique Key model table containing sequence columns
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1,k2,source_sequence,v1,v2),
-   ORDER BY source_sequence
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- Start consuming from a specified time point
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092",
-       "kafka_topic" = "my_topic",
-       "property.kafka_default_offsets" = "2021-05-21 10:00:00"
-   );
-   ```
\ No newline at end of file
+For complete examples covering Kafka CSV/JSON import, SSL authentication, Merge mode, sequence columns, and more, refer to [Routine Load](../../../../data-operate/import/import-way/routine-load-manual.md) in the Data Import guide.
\ No newline at end of file
diff --git a/docs/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md b/docs/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
index 090aa0c8c27..68a39cd3ccc 100644
--- a/docs/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
+++ b/docs/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
@@ -89,87 +89,4 @@ Users executing this SQL command must have at least the following permissions:
 
 ## Examples
 
-1. Import data from the client's local file `testData` into the table `testTbl` in the database `testDb`. Specify a timeout of 100 seconds.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-2. Import data from the server's local file `/root/testData` (you need to set the FE configuration `mysql_load_server_secure_path` to `/root`) into the table `testTbl` in the database `testDb`. Specify a timeout of 100 seconds.
-
-    ```sql
-    LOAD DATA
-    INFILE '/root/testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-3. Import data from the client's local file `testData` into the table `testTbl` in the database `testDb`, allowing an error rate of 20%.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-4. Import data from the client's local file `testData` into the table `testTbl` in the database `testDb`, allowing an error rate of 20%, and specify the column names of the file.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    (k2, k1, v1)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-5. Import data from the local file `testData` into partitions `p1` and `p2` of the table `testTbl` in the database `testDb`, allowing an error rate of 20%.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-6. Import data from the local CSV file `testData` with a line separator of `0102` and a column separator of `0304` into the table `testTbl` in the database `testDb`.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    COLUMNS TERMINATED BY '0304'
-    LINES TERMINATED BY '0102'
-    ```
-
-7. Import data from the local file `testData` into partitions `p1` and `p2` of the table `testTbl` in the database `testDb` and skip the first 3 lines.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    IGNORE 3 LINES
-    ```
-
-8. Import data with strict mode filtering and set the time zone to `Africa/Abidjan`.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("strict_mode"="true", "timezone"="Africa/Abidjan")
-    ```
-
-9. Limit the import memory to 10GB and set a timeout of 10 minutes for the data import.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("exec_mem_limit"="10737418240", "timeout"="600")
-    ```
\ No newline at end of file
+For complete examples covering local file import, partition selection, column mapping, strict mode, and more, refer to [MySQL Load](../../../../data-operate/import/import-way/mysql-load-manual.md) in the Data Import guide.
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
index 65abb3ebc3a..53d0b1cb6d6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
@@ -152,298 +152,4 @@ WITH BROKER "<broker_name>"
 
 ## 举例
 
-1. 从 HDFS 导入一批数据,导入文件 `file.txt`,按逗号分隔,导入到表 `my_table`。
-
-    ```sql
-    LOAD LABEL example_db.label1
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file.txt")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY ","
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-2. 从 HDFS 导入数据,使用通配符匹配两批文件,分别导入到两个表中。使用通配符匹配导入两批文件 `file-10*` 和 `file-20*`,分别导入到 `my_table1` 和 `my_table2` 两张表中。其中 `my_table1` 指定导入到分区 `p1` 中,并且将导入源文件中第二列和第三列的值 +1 后导入。
-
-    ```sql
-    LOAD LABEL example_db.label2
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-10*")
-        INTO TABLE `my_table1`
-        PARTITION (p1)
-        COLUMNS TERMINATED BY ","
-        (k1, tmp_k2, tmp_k3)
-        SET (
-            k2 = tmp_k2 + 1,
-            k3 = tmp_k3 + 1
-        ),
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-20*")
-        INTO TABLE `my_table2`
-        COLUMNS TERMINATED BY ","
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-3. 从 HDFS 导入一批数据。指定分隔符为 Hive 的默认分隔符 `\\x01`,并使用通配符 * 指定 `data` 目录下所有目录的所有文件。使用简单认证,同时配置 namenode HA。
-
-    ```sql
-    LOAD LABEL example_db.label3
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/user/doris/data/*/*")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY "\\x01"
-    )
-    WITH BROKER my_hdfs_broker
-    (
-        "username" = "",
-        "password" = "",
-        "fs.defaultFS" = "hdfs://my_ha",
-        "dfs.nameservices" = "my_ha",
-        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
-        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
-        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
-        "dfs.client.failover.proxy.provider.my_ha" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
-    );
-    ```
-
-4. 导入 Parquet 格式数据,指定 FORMAT 为 parquet。默认是通过文件后缀判断
-
-    ```sql
-    LOAD LABEL example_db.label4
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file")
-        INTO TABLE `my_table`
-        FORMAT AS "parquet"
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-5. 导入数据,并提取文件路径中的分区字段。`my_table` 表中的列为 `k1, k2, k3, city, utc_date`。其中 `hdfs://hdfs_host:hdfs_port/user/doris/data/input/dir/city=beijing` 目录下包括如下文件:
-    ```text
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-01/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-02/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-03/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-04/0000.csv
-    ```
-    文件中只包含 `k1, k2, k3` 三列数据,`city, utc_date` 这两列数据会从文件路径中提取。
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/city=beijing/*/*")
-        INTO TABLE `my_table`
-        FORMAT AS "csv"
-        (k1, k2, k3)
-        COLUMNS FROM PATH AS (city, utc_date)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-6. 对待导入数据进行过滤。只有原始数据中,k1 = 1,并且转换后,k1 > k2 的行才会被导入。
-
-    ```sql
-    LOAD LABEL example_db.label6
-    (
-        DATA INFILE("hdfs://host:port/input/file")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-        SET (
-            k2 = k2 + 1
-        )
-        PRECEDING FILTER k1 = 1
-        WHERE k1 > k2
-    )
-    WITH BROKER hdfs
-    (
-        "username"="user",
-        "password"="pass"
-    );
-    ```
-
-   
-
-7. 导入数据,提取文件路径中的时间分区字段,并且时间包含 %3A (在 hdfs 路径中,不允许有 ':',所有 ':' 会由 %3A 替换)
-
-   ```sql
-   LOAD LABEL example_db.label7
-   (
-       DATA INFILE("hdfs://host:port/user/data/*/test.txt") 
-       INTO TABLE `tbl12`
-       COLUMNS TERMINATED BY ","
-       (k2,k3)
-       COLUMNS FROM PATH AS (data_time)
-       SET (
-           data_time=str_to_date(data_time, '%Y-%m-%d %H%%3A%i%%3A%s')
-       )
-   )
-   WITH BROKER hdfs
-   (
-       "username"="user",
-       "password"="pass"
-   );
-   ```
-
-   路径下有如下文件:
-
-   ```text
-   /user/data/data_time=2020-02-17 00%3A00%3A00/test.txt
-   /user/data/data_time=2020-02-18 00%3A00%3A00/test.txt
-   ```
-
-   表结构为:
-
-   ```text
-   data_time DATETIME,
-   k2        INT,
-   k3        INT
-   ```
-
-8. 从 HDFS 导入一批数据,指定超时时间和过滤比例。使用明文简单认证的 broker my_hdfs_broker。并且将原有数据中与导入数据中 v2 大于 100 的行相匹配的行删除,其他行正常导入。
-
-   ```sql
-   LOAD LABEL example_db.label8
-   (
-       MERGE DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       (k1, k2, k3, v2, v1)
-       DELETE ON v2 > 100
-   )
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   PROPERTIES
-   (
-       "timeout" = "3600",
-       "max_filter_ratio" = "0.1"
-   );
-   ```
-
-   使用 MERGE 方式导入。`my_table` 必须是一张 Unique Key 的表。当导入数据中的 v2 列的值大于 100 时,该行会被认为是一个删除行。
-
-   导入任务的超时时间是 3600 秒,并且允许错误率在 10% 以内。
-
-9. 导入时指定 source_sequence 列,保证 UNIQUE_KEYS 表中的替换顺序:
-
-   ```sql
-   LOAD LABEL example_db.label9
-   (
-       DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       COLUMNS TERMINATED BY ","
-       (k1,k2,source_sequence,v1,v2)
-       ORDER BY source_sequence
-   ) 
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   ```
-
-   `my_table` 必须是 Unique Key 模型表,并且指定了 Sequence Col。数据会按照源数据中 `source_sequence` 列的值来保证顺序性。
-
-10. 从 HDFS 导入一批数据,指定文件格式为 `json` 并指定 `json_root`、`jsonpaths`
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.city, $.code]"
-        )       
-    )
-    WITH HDFS (
-        "hadoop.username" = "user",
-        "password" = ""
-    )
-    PROPERTIES
-    (
-    "timeout"="1200",
-    "max_filter_ratio"="0.1"
-    );
-    ```
-
-    `jsonpaths` 可与 `column list` 及 `SET (column_mapping)`配合:
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        (id, code, city)
-        SET (id = id * 10)
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.code, $.city]"
-        )       
-    )
-    WITH HDFS (
-        "hadoop.username" = "user",
-        "password" = ""
-    )
-    PROPERTIES
-    (
-    "timeout"="1200",
-    "max_filter_ratio"="0.1"
-    );
-    ```
-
-11. 从腾讯云 cos 中以 csv 格式导入数据。
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-    DATA INFILE("cosn://my_bucket/input/file.csv")
-    INTO TABLE `my_table`
-    (k1, k2, k3)
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
-
-12. 导入 CSV 数据时去掉双引号,并跳过前 5 行。
-
-    ```sql 
-    LOAD LABEL example_db.label12
-    (
-    DATA INFILE("cosn://my_bucket/input/file.csv")
-    INTO TABLE `my_table`
-    (k1, k2, k3)
-    PROPERTIES("trim_double_quotes" = "true", "skip_lines" = "5")
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
\ No newline at end of file
+完整示例(包括 S3、HDFS、JSON 格式、Merge 模式、路径分区提取等)请参考数据导入指南中的 [Broker Load](../../../../data-operate/import/import-way/broker-load-manual.md)。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
index c7ae5d37400..44af4de228b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
@@ -384,218 +384,4 @@ FROM <data_source> [<data_source_properties>]
 
 ## 示例
 
-- 为 example_db 的 example_tbl 创建一个名为 test1 的 Kafka 例行导入任务。指定列分隔符和 group.id 和 client.id,并且自动默认消费所有分区,且从有数据的位置(OFFSET_BEGINNING)开始订阅
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- 为 example_db 创建一个名为 test1 的 Kafka 例行动态多表导入任务。指定列分隔符和 group.id 和 client.id,并且自动默认消费所有分区, 
-   且从有数据的位置(OFFSET_BEGINNING)开始订阅
-
-  我们假设需要将 Kafka 中的数据导入到 example_db 中的 test1 以及 test2 表中,我们创建了一个名为 test1 的例行导入任务,同时将 test1 和 
-  test2 中的数据写到一个名为 `my_topic` 的 Kafka 的 topic 中,这样就可以通过一个例行导入任务将 Kafka 中的数据导入到两个表中。
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- 为 example_db 的 example_tbl 创建一个名为 test1 的 Kafka 例行导入任务。导入任务为严格模式。
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   PRECEDING FILTER k1 = 1,
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
--  通过 SSL 认证方式,从 Kafka 集群导入数据。同时设置 client.id 参数。导入任务为非严格模式,时区为 Africa/Abidjan
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "timezone" = "Africa/Abidjan"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.security.protocol" = "ssl",
-       "property.ssl.ca.location" = "FILE:ca.pem",
-       "property.ssl.certificate.location" = "FILE:client.pem",
-       "property.ssl.key.location" = "FILE:client.key",
-       "property.ssl.key.password" = "abcdefg",
-       "property.client.id" = "my_client_id"
-   );
-   ```
-
--  导入 Json 格式数据。默认使用 Json 中的字段名作为列名映射。指定导入 0,1,2 三个分区,起始 offset 都为 0
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_json_label_1 ON table1
-   COLUMNS(category,price,author)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- 导入 Json 数据,并通过 Jsonpaths 抽取字段,并指定 Json 文档根节点
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(category, author, price, timestamp, dt=from_unixtime(timestamp, '%Y%m%d'))
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json",
-       "jsonpaths" = "[\"$.category\",\"$.author\",\"$.price\",\"$.timestamp\"]",
-       "json_root" = "$.RECORDS"
-       "strip_outer_array" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- 为 example_db 的 example_tbl 创建一个名为 test1 的 Kafka 例行导入任务。并且使用条件过滤。
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   WITH MERGE
-   COLUMNS(k1, k2, k3, v1, v2, v3),
-   WHERE k1 > 100 and k2 like "%doris%",
-   DELETE ON v3 >100
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- 导入数据到含有 sequence 列的 Unique Key 模型表中
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1,k2,source_sequence,v1,v2),
-   ORDER BY source_sequence
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- 从指定的时间点开始消费
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092",
-       "kafka_topic" = "my_topic",
-       "property.kafka_default_offsets" = "2021-05-21 10:00:00"
-   );
-   ```
\ No newline at end of file
+完整示例(包括 Kafka CSV/JSON 导入、SSL 认证、Merge 模式、Sequence 列等)请参考数据导入指南中的 [Routine Load](../../../../data-operate/import/import-way/routine-load-manual.md)。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
index dad8c6b089f..741592a858d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
@@ -90,87 +90,4 @@ INTO TABLE "<tbl_name>"
 
 ## 举例
 
-1. 将客户端本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表。指定超时时间为 100 秒
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-2. 将服务端本地文件'/root/testData'(需设置 FE 配置`mysql_load_server_secure_path`为`/root`) 中的数据导入到数据库'testDb'中'testTbl'的表。指定超时时间为 100 秒
-
-    ```sql
-    LOAD DATA
-    INFILE '/root/testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-3. 将客户端本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表,允许 20% 的错误率
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-4. 将客户端本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表,允许 20% 的错误率,并且指定文件的列名
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    (k2, k1, v1)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-5. 将本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表中的 p1, p2 分区,允许 20% 的错误率。
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-6. 将本地行分隔符为`0102`,列分隔符为`0304`的 CSV 文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表中。
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    COLUMNS TERMINATED BY '0304'
-    LINES TERMINATED BY '0102'
-    ```
-
-7. 将本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表中的 p1, p2 分区,并跳过前面 3 行。
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    IGNORE 1 LINES
-    ```
-
-8. 导入数据进行严格模式过滤,并设置时区为 Africa/Abidjan
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("strict_mode"="true", "timezone"="Africa/Abidjan")
-    ```
-
-9. 导入数据进行限制导入内存为 10GB, 并在 10 分钟超时
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("exec_mem_limit"="10737418240", "timeout"="600")
-    ```
\ No newline at end of file
+完整示例(包括本地文件导入、分区选择、列映射、严格模式等)请参考数据导入指南中的 [MySQL Load](../../../../data-operate/import/import-way/mysql-load-manual.md)。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
index 65abb3ebc3a..53d0b1cb6d6 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
@@ -152,298 +152,4 @@ WITH BROKER "<broker_name>"
 
 ## 举例
 
-1. 从 HDFS 导入一批数据,导入文件 `file.txt`,按逗号分隔,导入到表 `my_table`。
-
-    ```sql
-    LOAD LABEL example_db.label1
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file.txt")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY ","
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-2. 从 HDFS 导入数据,使用通配符匹配两批文件。分别导入到两个表中。使用通配符匹配导入两批文件 `file-10*` 和 `file-20*`。分别导入到 `my_table1` 和`my_table2` 两张表中。其中 `my_table1` 指定导入到分区 `p1` 中,并且将导入源文件中第列和第三列的值 +1 后导入。
-
-    ```sql
-    LOAD LABEL example_db.label2
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-10*")
-        INTO TABLE `my_table1`
-        PARTITION (p1)
-        COLUMNS TERMINATED BY ","
-        (k1, tmp_k2, tmp_k3)
-        SET (
-            k2 = tmp_k2 + 1,
-            k3 = tmp_k3 + 1
-        ),
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-20*")
-        INTO TABLE `my_table2`
-        COLUMNS TERMINATED BY ","
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-3. 从 HDFS 导入一批数据。指定分隔符为 Hive 的默认分隔符 `\\x01`,并使用通配符 * 指定 `data` 目录下所有目录的所文件。使用简单认证,同时配置 namenode HA。
-
-    ```sql
-    LOAD LABEL example_db.label3
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/user/doris/data/*/*")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY "\\x01"
-    )
-    WITH BROKER my_hdfs_broker
-    (
-        "username" = "",
-        "password" = "",
-        "fs.defaultFS" = "hdfs://my_ha",
-        "dfs.nameservices" = "my_ha",
-        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
-        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
-        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
-        "dfs.client.failover.proxy.provider.my_ha" = "org.apache.hadoop.hdfs.server.    namenode.ha.ConfiguredFailoverProxyProvider"
-    );
-    ```
-
-4. 导入 Parquet 格式数据,指定 FORMAT 为 parquet。默认是通过文件后缀判断
-
-    ```sql
-    LOAD LABEL example_db.label4
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file")
-        INTO TABLE `my_table`
-        FORMAT AS "parquet"
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-5. 导入数据,并提取文件路径中的分区字段。`my_table` 表中的列为 `k1, k2, k3, city, utc_date`。其中 `hdfs://hdfs_host:hdfs_port/user/doris/data/input/dir/city=beijing` 目下包括如下文件:
-    ```text
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-01/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-02/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-03/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-04/0000.csv
-    ```
-    文件中只包含 `k1, k2, k3` 三列数据,`city, utc_date` 这两列数据会从文件路径中提取。
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/city=beijing/*/*")
-        INTO TABLE `my_table`
-        FORMAT AS "csv"
-        (k1, k2, k3)
-        COLUMNS FROM PATH AS (city, utc_date)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-6. 对待导入数据进行过滤。只有原始数据中,k1 = 1,并且转换后,k1 > k2 的行才会被导入。
-
-    ```sql
-    LOAD LABEL example_db.label6
-    (
-        DATA INFILE("hdfs://host:port/input/file")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-        SET (
-            k2 = k2 + 1
-        )
-        PRECEDING FILTER k1 = 1
-        WHERE k1 > k2
-    )
-    WITH BROKER hdfs
-    (
-        "username"="user",
-        "password"="pass"
-    );
-    ```
-
-   
-
-7. 导入数据,提取文件路径中的时间分区字段,并且时间包含 %3A (在 hdfs 路径中,不允许有 ':',所有 ':' 会由 %3A 替换)
-
-   ```sql
-   LOAD LABEL example_db.label7
-   (
-       DATA INFILE("hdfs://host:port/user/data/*/test.txt") 
-       INTO TABLE `tbl12`
-       COLUMNS TERMINATED BY ","
-       (k2,k3)
-       COLUMNS FROM PATH AS (data_time)
-       SET (
-           data_time=str_to_date(data_time, '%Y-%m-%d %H%%3A%i%%3A%s')
-       )
-   )
-   WITH BROKER hdfs
-   (
-       "username"="user",
-       "password"="pass"
-   );
-   ```
-
-   路径下有如下文件:
-
-   ```text
-   /user/data/data_time=2020-02-17 00%3A00%3A00/test.txt
-   /user/data/data_time=2020-02-18 00%3A00%3A00/test.txt
-   ```
-
-   表结构为:
-
-   ```text
-   data_time DATETIME,
-   k2        INT,
-   k3        INT
-   ```
-
-8. 从 HDFS 导入一批数据,指定超时时间和过滤比例。使用明文 my_hdfs_broker 的 broker。简单认证。并且将原有数据中与 导入数据中 v2 大于 100 的列相匹配的列删除,其他列正常导入
-
-   ```sql
-   LOAD LABEL example_db.label8
-   (
-       MERGE DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       (k1, k2, k3, v2, v1)
-       DELETE ON v2 > 100
-   )
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   PROPERTIES
-   (
-       "timeout" = "3600",
-       "max_filter_ratio" = "0.1"
-   );
-   ```
-
-   使用 MERGE 方式导入。`my_table` 必须是一张 Unique Key 的表。当导入数据中的 v2 列的值大于 100 时,该行会被认为是一个删除行。
-
-   导入任务的超时时间是 3600 秒,并且允许错误率在 10% 以内。
-
-9. 导入时指定 source_sequence 列,保证 UNIQUE_KEYS 表中的替换顺序:
-
-   ```sql
-   LOAD LABEL example_db.label9
-   (
-       DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       COLUMNS TERMINATED BY ","
-       (k1,k2,source_sequence,v1,v2)
-       ORDER BY source_sequence
-   ) 
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   ```
-
-   `my_table` 必须是 Unique Key 模型表,并且指定了 Sequcence Col。数据会按照源数据中 `source_sequence` 列的值来保证顺序性。
-
-10. 从 HDFS 导入一批数据,指定文件格式为 `json` 并指定 `json_root`、`jsonpaths`
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.city, $.code]"
-        )       
-    )
-    with HDFS (
-    "hadoop.username" = "user"
-    "password" = ""
-    )
-    PROPERTIES
-    (
-    "timeout"="1200",
-    "max_filter_ratio"="0.1"
-    );
-    ```
-
-    `jsonpaths` 可与 `column list` 及 `SET (column_mapping)`配合:
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        (id, code, city)
-        SET (id = id * 10)
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.code, $.city]"
-        )       
-    )
-    with HDFS (
-    "hadoop.username" = "user"
-    "password" = ""
-    )
-    PROPERTIES
-    (
-    "timeout"="1200",
-    "max_filter_ratio"="0.1"
-    );
-    ```
-
-11. 从腾讯云 cos 中以 csv 格式导入数据。
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-    DATA INFILE("cosn://my_bucket/input/file.csv")
-    INTO TABLE `my_table`
-    (k1, k2, k3)
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
-
-12. 导入 CSV 数据时去掉双引号,并跳过前 5 行。
-
-    ```sql 
-    LOAD LABEL example_db.label12
-    (
-    DATA INFILE("cosn://my_bucket/input/file.csv")
-    INTO TABLE `my_table`
-    (k1, k2, k3)
-    PROPERTIES("trim_double_quotes" = "true", "skip_lines" = "5")
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
\ No newline at end of file
+完整示例(包括 S3、HDFS、JSON 格式、Merge 模式、路径分区提取等)请参考数据导入指南中的 [Broker Load](../../../../data-operate/import/import-way/broker-load-manual.md)。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
index c7ae5d37400..44af4de228b 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
@@ -384,218 +384,4 @@ FROM <data_source> [<data_source_properties>]
 
 ## 示例
 
-- 为 example_db 的 example_tbl 创建一个名为 test1 的 Kafka 例行导入任务。指定列分隔符和 group.id 和 client.id,并且自动默认消费所有分区,且从有数据的位置(OFFSET_BEGINNING)开始订阅
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- 为 example_db 创建一个名为 test1 的 Kafka 例行动态多表导入任务。指定列分隔符和 group.id 和 client.id,并且自动默认消费所有分区, 
-   且从有数据的位置(OFFSET_BEGINNING)开始订阅
-
-  我们假设需要将 Kafka 中的数据导入到 example_db 中的 test1 以及 test2 表中,我们创建了一个名为 test1 的例行导入任务,同时将 test1 和 
-  test2 中的数据写到一个名为 `my_topic` 的 Kafka 的 topic 中,这样就可以通过一个例行导入任务将 Kafka 中的数据导入到两个表中。
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- 为 example_db 的 example_tbl 创建一个名为 test1 的 Kafka 例行导入任务。导入任务为严格模式。
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   PRECEDING FILTER k1 = 1,
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
--  通过 SSL 认证方式,从 Kafka 集群导入数据。同时设置 client.id 参数。导入任务为非严格模式,时区为 Africa/Abidjan
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "timezone" = "Africa/Abidjan"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.security.protocol" = "ssl",
-       "property.ssl.ca.location" = "FILE:ca.pem",
-       "property.ssl.certificate.location" = "FILE:client.pem",
-       "property.ssl.key.location" = "FILE:client.key",
-       "property.ssl.key.password" = "abcdefg",
-       "property.client.id" = "my_client_id"
-   );
-   ```
-
--  导入 Json 格式数据。默认使用 Json 中的字段名作为列名映射。指定导入 0,1,2 三个分区,起始 offset 都为 0
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_json_label_1 ON table1
-   COLUMNS(category,price,author)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- 导入 Json 数据,并通过 Jsonpaths 抽取字段,并指定 Json 文档根节点
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(category, author, price, timestamp, dt=from_unixtime(timestamp, '%Y%m%d'))
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json",
-       "jsonpaths" = "[\"$.category\",\"$.author\",\"$.price\",\"$.timestamp\"]",
-       "json_root" = "$.RECORDS"
-       "strip_outer_array" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- 为 example_db 的 example_tbl 创建一个名为 test1 的 Kafka 例行导入任务。并且使用条件过滤。
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   WITH MERGE
-   COLUMNS(k1, k2, k3, v1, v2, v3),
-   WHERE k1 > 100 and k2 like "%doris%",
-   DELETE ON v3 >100
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- 导入数据到含有 sequence 列的 Unique Key 模型表中
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1,k2,source_sequence,v1,v2),
-   ORDER BY source_sequence
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- 从指定的时间点开始消费
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092",
-       "kafka_topic" = "my_topic",
-       "property.kafka_default_offsets" = "2021-05-21 10:00:00"
-   );
-   ```
\ No newline at end of file
+完整示例(包括 Kafka CSV/JSON 导入、SSL 认证、Merge 模式、Sequence 列等)请参考数据导入指南中的 [Routine Load](../../../../data-operate/import/import-way/routine-load-manual.md)。
\ No newline at end of file
diff --git a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
index dad8c6b089f..741592a858d 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
@@ -90,87 +90,4 @@ INTO TABLE "<tbl_name>"
 
 ## 举例
 
-1. 将客户端本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表。指定超时时间为 100 秒
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-2. 将服务端本地文件'/root/testData'(需设置 FE 配置`mysql_load_server_secure_path`为`/root`) 中的数据导入到数据库'testDb'中'testTbl'的表。指定超时时间为 100 秒
-
-    ```sql
-    LOAD DATA
-    INFILE '/root/testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-3. 将客户端本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表,允许 20% 的错误率
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-4. 将客户端本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表,允许 20% 的错误率,并且指定文件的列名
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    (k2, k1, v1)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-5. 将本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表中的 p1, p2 分区,允许 20% 的错误率。
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-6. 将本地行分隔符为`0102`,列分隔符为`0304`的 CSV 文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表中。
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    COLUMNS TERMINATED BY '0304'
-    LINES TERMINATED BY '0102'
-    ```
-
-7. 将本地文件'testData'中的数据导入到数据库'testDb'中'testTbl'的表中的 p1, p2 分区,并跳过前面 3 行。
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    IGNORE 1 LINES
-    ```
-
-8. 导入数据进行严格模式过滤,并设置时区为 Africa/Abidjan
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("strict_mode"="true", "timezone"="Africa/Abidjan")
-    ```
-
-9. 导入数据进行限制导入内存为 10GB, 并在 10 分钟超时
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("exec_mem_limit"="10737418240", "timeout"="600")
-    ```
\ No newline at end of file
+完整示例(包括本地文件导入、分区选择、列映射、严格模式等)请参考数据导入指南中的 [MySQL Load](../../../../data-operate/import/import-way/mysql-load-manual.md)。
\ No newline at end of file
diff --git a/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md b/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
index 4d7bf54e296..0949cd9f219 100644
--- a/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
+++ b/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/BROKER-LOAD.md
@@ -152,296 +152,4 @@ Users executing this SQL command must have at least the following permissions:
 
 ## Examples
 
-1. Import a batch of data from HDFS. The imported file is `file.txt`, separated by commas, and imported into the table `my_table`.
-
-    ```sql
-    LOAD LABEL example_db.label1
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file.txt")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY ","
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-2. Import data from HDFS using wildcards to match two batches of files and import them into two tables respectively. Use wildcards to match two batches of files, `file - 10*` and `file - 20*`, and import them into the tables `my_table1` and `my_table2` respectively. For `my_table1`, specify to import into partition `p1`, and import the values of the second and third columns in the source file after adding 1.
-
-    ```sql
-    LOAD LABEL example_db.label2
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-10*")
-        INTO TABLE `my_table1`
-        PARTITION (p1)
-        COLUMNS TERMINATED BY ","
-        (k1, tmp_k2, tmp_k3)
-        SET (
-            k2 = tmp_k2 + 1,
-            k3 = tmp_k3 + 1
-        ),
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file-20*")
-        INTO TABLE `my_table2`
-        COLUMNS TERMINATED BY ","
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-3. Import a batch of data from HDFS. Specify the separator as the default Hive separator `\\x01`, and use the wildcard `*` to specify all files in all directories under the `data` directory. Use simple authentication and configure namenode HA at the same time.
-
-    ```sql
-    LOAD LABEL example_db.label3
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/user/doris/data/*/*")
-        INTO TABLE `my_table`
-        COLUMNS TERMINATED BY "\\x01"
-    )
-    WITH BROKER my_hdfs_broker
-    (
-        "username" = "",
-        "password" = "",
-        "fs.defaultFS" = "hdfs://my_ha",
-        "dfs.nameservices" = "my_ha",
-        "dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
-        "dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
-        "dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
-        "dfs.client.failover.proxy.provider.my_ha" = 
"org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
-    );
-    ```
-
-4. Import data in Parquet format and specify the `FORMAT` as `parquet`. By default, it is determined by the file suffix.
-
-    ```sql
-    LOAD LABEL example_db.label4
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/file")
-        INTO TABLE `my_table`
-        FORMAT AS "parquet"
-        (k1, k2, k3)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-5. Import data and extract partition fields from the file path. The columns in the `my_table` are `k1, k2, k3, city, utc_date`. The directory `hdfs://hdfs_host:hdfs_port/user/doris/data/input/dir/city = beijing` contains the following files:
-    ```text
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-01/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=beijing/utc_date=2020-10-02/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-03/0000.csv
-    hdfs://hdfs_host:hdfs_port/input/city=tianji/utc_date=2020-10-04/0000.csv
-    ```
-    The files only contain three columns of data, `k1, k2, k3`, and the two columns of data, `city` and `utc_date`, will be extracted from the file path.
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("hdfs://hdfs_host:hdfs_port/input/city=beijing/*/*")
-        INTO TABLE `my_table`
-        FORMAT AS "csv"
-        (k1, k2, k3)
-        COLUMNS FROM PATH AS (city, utc_date)
-    )
-    WITH BROKER hdfs
-    (
-        "username"="hdfs_user",
-        "password"="hdfs_password"
-    );
-    ```
-
-6. Filter the data to be imported. Only rows where `k1 = 1` in the original data and `k1 > k2` after conversion will be imported.
-
-    ```sql
-    LOAD LABEL example_db.label6
-    (
-        DATA INFILE("hdfs://host:port/input/file")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-        SET (
-            k2 = k2 + 1
-        )
-        PRECEDING FILTER k1 = 1
-        WHERE k1 > k2
-    )
-    WITH BROKER hdfs
-    (
-        "username"="user",
-        "password"="pass"
-    );
-    ```
-
-7. Import data, extract the time partition field from the file path, and the time contains `%3A` (in the HDFS path, `:` is not allowed, so all `:` will be replaced by `%3A`).
-
-   ```sql
-   LOAD LABEL example_db.label7
-   (
-       DATA INFILE("hdfs://host:port/user/data/*/test.txt") 
-       INTO TABLE `tbl12`
-       COLUMNS TERMINATED BY ","
-       (k2,k3)
-       COLUMNS FROM PATH AS (data_time)
-       SET (
-           data_time=str_to_date(data_time, '%Y-%m-%d %H%%3A%i%%3A%s')
-       )
-   )
-   WITH BROKER hdfs
-   (
-       "username"="user",
-       "password"="pass"
-   );
-   ```
-
-   The directory contains the following files:
-
-   ```text
-   /user/data/data_time=2020-02-17 00%3A00%3A00/test.txt
-   /user/data/data_time=2020-02-18 00%3A00%3A00/test.txt
-   ```
-
-   The table structure is:
-
-   ```text
-   data_time DATETIME,
-   k2        INT,
-   k3        INT
-   ```
-
-8. Import a batch of data from HDFS, specifying the timeout period and the filtering ratio. Use the broker `my_hdfs_broker` with plain - text authentication. Delete the columns in the original data that match the columns where `v2 > 100` in the imported data, and import other columns normally.
-
-   ```sql
-   LOAD LABEL example_db.label8
-   (
-       MERGE DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       (k1, k2, k3, v2, v1)
-       DELETE ON v2 > 100
-   )
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   PROPERTIES
-   (
-       "timeout" = "3600",
-       "max_filter_ratio" = "0.1"
-   );
-   ```
-
-   Use the `MERGE` method for import. `my_table` must be a table with the Unique Key model. When the value of the `v2` column in the imported data is greater than 100, the row will be considered a deletion row.
-
-   The timeout period for the import task is 3600 seconds, and an error rate of up to 10% is allowed.
-
-9. Specify the `source_sequence` column during import to ensure the replacement order in the `UNIQUE_KEYS` table:
-
-   ```sql
-   LOAD LABEL example_db.label9
-   (
-       DATA INFILE("HDFS://test:802/input/file")
-       INTO TABLE `my_table`
-       COLUMNS TERMINATED BY ","
-       (k1,k2,source_sequence,v1,v2)
-       ORDER BY source_sequence
-   ) 
-   WITH HDFS
-   (
-       "hadoop.username"="user",
-       "password"="pass"
-   )
-   ```
-
-   `my_table` must be a table with the Unique Key model and a `Sequence Col` must be specified. The data will be ordered according to the values in the `source_sequence` column of the source data.
-
-10. Import a batch of data from HDFS, specifying the file format as `json` and setting `json_root` and `jsonpaths`:
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.city, $.code]"
-        )       
-    )
-    WITH BROKER HDFS (
-        "hadoop.username" = "user",
-        "password" = ""
-    )
-    PROPERTIES
-    (
-        "timeout"="1200",
-        "max_filter_ratio"="0.1"
-    );
-    ```
-
-    `jsonpaths` can be used in conjunction with `column list` and `SET (column_mapping)`:
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("HDFS://test:port/input/file.json")
-        INTO TABLE `my_table`
-        FORMAT AS "json"
-        (id, code, city)
-        SET (id = id * 10)
-        PROPERTIES(
-          "json_root" = "$.item",
-          "jsonpaths" = "[$.id, $.code, $.city]"
-        )       
-    )
-    WITH BROKER HDFS (
-        "hadoop.username" = "user",
-        "password" = ""
-    )
-    PROPERTIES
-    (
-        "timeout"="1200",
-        "max_filter_ratio"="0.1"
-    );
-    ```
-
-11. Import data in CSV format from Tencent Cloud COS.
-
-    ```sql
-    LOAD LABEL example_db.label10
-    (
-        DATA INFILE("cosn://my_bucket/input/file.csv")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
-
-12. Remove double quotes and skip the first 5 rows when importing CSV data.
-
-    ```sql 
-    LOAD LABEL example_db.label12
-    (
-        DATA INFILE("cosn://my_bucket/input/file.csv")
-        INTO TABLE `my_table`
-        (k1, k2, k3)
-        PROPERTIES("trim_double_quotes" = "true", "skip_lines" = "5")
-    )
-    WITH BROKER "broker_name"
-    (
-        "fs.cosn.userinfo.secretId" = "xxx",
-        "fs.cosn.userinfo.secretKey" = "xxxx",
-        "fs.cosn.bucket.endpoint_suffix" = "cos.xxxxxxxxx.myqcloud.com"
-    )
-    ```
\ No newline at end of file
+For complete examples covering S3, HDFS, JSON format, Merge mode, path-based partition extraction, and more, refer to [Broker Load](../../../../data-operate/import/import-way/broker-load-manual.md) in the Data Import guide.
\ No newline at end of file
diff --git a/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md b/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
index 3f7d86d1849..24a09dcc8cb 100644
--- a/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
+++ b/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/CREATE-ROUTINE-LOAD.md
@@ -383,216 +383,4 @@ Users executing this SQL command must have at least the following privileges:
 
 ## Examples
 
-- Create a Kafka routine load task named test1 for example_tbl in example_db. Specify column separator, group.id and client.id, and automatically consume all partitions by default, starting subscription from where data exists (OFFSET_BEGINNING)
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- Create a Kafka routine dynamic multi-table load task named test1 for example_db. Specify column separator, group.id and client.id, and automatically consume all partitions by default, starting subscription from where data exists (OFFSET_BEGINNING)
-
-  Assuming we need to import data from Kafka into test1 and test2 tables in example_db, we create a routine load task named test1, and write data from test1 and test2 to a Kafka topic named `my_topic`. This way, we can import data from Kafka into two tables through one routine load task.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.group.id" = "xxx",
-       "property.client.id" = "xxx",
-       "property.kafka_default_offsets" = "OFFSET_BEGINNING"
-   );
-   ```
-
-- Create a Kafka routine load task named test1 for example_tbl in example_db. The import task is in strict mode.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   PRECEDING FILTER k1 = 1,
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- Import data from Kafka cluster using SSL authentication. Also set client.id parameter. Import task is in non-strict mode, timezone is Africa/Abidjan
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(k1, k2, k3, v1, v2, v3 = k1 * 100),
-   WHERE k1 > 100 and k2 like "%doris%"
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "timezone" = "Africa/Abidjan"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "property.security.protocol" = "ssl",
-       "property.ssl.ca.location" = "FILE:ca.pem",
-       "property.ssl.certificate.location" = "FILE:client.pem",
-       "property.ssl.key.location" = "FILE:client.key",
-       "property.ssl.key.password" = "abcdefg",
-       "property.client.id" = "my_client_id"
-   );
-   ```
-
-- Import Json format data. Use field names in Json as column name mapping by default. Specify importing partitions 0,1,2, all starting offsets are 0
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_json_label_1 ON table1
-   COLUMNS(category,price,author)
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- Import Json data, extract fields through Jsonpaths, and specify Json document root node
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   COLUMNS(category, author, price, timestamp, dt=from_unixtime(timestamp, '%Y%m%d'))
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false",
-       "format" = "json",
-       "jsonpaths" = "[\"$.category\",\"$.author\",\"$.price\",\"$.timestamp\"]",
-       "json_root" = "$.RECORDS"
-       "strip_outer_array" = "true"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2",
-       "kafka_offsets" = "0,0,0"
-   );
-   ```
-
-- Create a Kafka routine load task named test1 for example_tbl in example_db with condition filtering.
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test1 ON example_tbl
-   WITH MERGE
-   COLUMNS(k1, k2, k3, v1, v2, v3),
-   WHERE k1 > 100 and k2 like "%doris%",
-   DELETE ON v3 >100
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "20",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200",
-       "strict_mode" = "false"
-   )
-   FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- Import data into a Unique Key model table containing sequence columns
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   COLUMNS TERMINATED BY ",",
-   COLUMNS(k1,k2,source_sequence,v1,v2),
-   ORDER BY source_sequence
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092,broker3:9092",
-       "kafka_topic" = "my_topic",
-       "kafka_partitions" = "0,1,2,3",
-       "kafka_offsets" = "101,0,0,200"
-   );
-   ```
-
-- Start consuming from a specified time point
-
-   ```sql
-   CREATE ROUTINE LOAD example_db.test_job ON example_tbl
-   PROPERTIES
-   (
-       "desired_concurrent_number"="3",
-       "max_batch_interval" = "30",
-       "max_batch_rows" = "300000",
-       "max_batch_size" = "209715200"
-   ) FROM KAFKA
-   (
-       "kafka_broker_list" = "broker1:9092,broker2:9092",
-       "kafka_topic" = "my_topic",
-       "property.kafka_default_offsets" = "2021-05-21 10:00:00"
-   );
-   ```
\ No newline at end of file
+For complete examples covering Kafka CSV/JSON import, SSL authentication, Merge mode, sequence columns, and more, refer to [Routine Load](../../../../data-operate/import/import-way/routine-load-manual.md) in the Data Import guide.
\ No newline at end of file
diff --git a/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md b/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
index 090aa0c8c27..68a39cd3ccc 100644
--- a/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
+++ b/versioned_docs/version-4.x/sql-manual/sql-statements/data-modification/load-and-export/MYSQL-LOAD.md
@@ -89,87 +89,4 @@ Users executing this SQL command must have at least the following permissions:
 
 ## Examples
 
-1. Import data from the client's local file `testData` into the table `testTbl` in the database `testDb`. Specify a timeout of 100 seconds.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-2. Import data from the server's local file `/root/testData` (you need to set the FE configuration `mysql_load_server_secure_path` to `/root`) into the table `testTbl` in the database `testDb`. Specify a timeout of 100 seconds.
-
-    ```sql
-    LOAD DATA
-    INFILE '/root/testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("timeout"="100")
-    ```
-
-3. Import data from the client's local file `testData` into the table `testTbl` in the database `testDb`, allowing an error rate of 20%.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-4. Import data from the client's local file `testData` into the table `testTbl` in the database `testDb`, allowing an error rate of 20%, and specify the column names of the file.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    (k2, k1, v1)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-5. Import data from the local file `testData` into partitions `p1` and `p2` of the table `testTbl` in the database `testDb`, allowing an error rate of 20%.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    PROPERTIES ("max_filter_ratio"="0.2")
-    ```
-
-6. Import data from the local CSV file `testData` with a line separator of `0102` and a column separator of `0304` into the table `testTbl` in the database `testDb`.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    COLUMNS TERMINATED BY '0304'
-    LINES TERMINATED BY '0102'
-    ```
-
-7. Import data from the local file `testData` into partitions `p1` and `p2` of the table `testTbl` in the database `testDb` and skip the first 3 lines.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PARTITION (p1, p2)
-    IGNORE 3 LINES
-    ```
-
-8. Import data with strict mode filtering and set the time zone to `Africa/Abidjan`.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("strict_mode"="true", "timezone"="Africa/Abidjan")
-    ```
-
-9. Limit the import memory to 10GB and set a timeout of 10 minutes for the data import.
-
-    ```sql
-    LOAD DATA LOCAL
-    INFILE 'testData'
-    INTO TABLE testDb.testTbl
-    PROPERTIES ("exec_mem_limit"="10737418240", "timeout"="600")
-    ```
\ No newline at end of file
+For complete examples covering local file import, partition selection, column mapping, strict mode, and more, refer to [MySQL Load](../../../../data-operate/import/import-way/mysql-load-manual.md) in the Data Import guide.
\ No newline at end of file


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

