This is an automated email from the ASF dual-hosted git repository.

zhaoxinyi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/iotdb-docs.git


The following commit(s) were added to refs/heads/main by this push:
     new e7074e2a Fix document fragmentation issue (#423)
e7074e2a is described below

commit e7074e2a7213db0bf31efd90bf3c80f674d9f7b0
Author: W1y1r <[email protected]>
AuthorDate: Fri Nov 22 17:17:15 2024 +0800

    Fix document fragmentation issue (#423)
    
    * Fix document fragmentation issue
    
    * Revise the 404 issue
---
 .../Basic-Concept/Cluster-data-partitioning.md     |   2 +-
 src/UserGuide/Master/Reference/UDF-Libraries.md    | 100 ++++++++++++++++-----
 src/UserGuide/Master/SQL-Manual/SQL-Manual.md      |   2 +-
 .../Master/User-Manual/Write-Delete-Data.md        |   4 +-
 src/UserGuide/V1.2.x/QuickStart/QuickStart.md      |   4 +-
 .../Basic-Concept/Cluster-data-partitioning.md     |   2 +-
 src/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md  | 100 ++++++++++++++++-----
 .../Basic-Concept/Cluster-data-partitioning.md     |   2 +-
 src/UserGuide/latest/Reference/UDF-Libraries.md    | 100 ++++++++++++++++-----
 src/UserGuide/latest/SQL-Manual/SQL-Manual.md      |   2 +-
 .../latest/User-Manual/Write-Delete-Data.md        |   4 +-
 .../Basic-Concept/Cluster-data-partitioning.md     |   2 +-
 src/zh/UserGuide/Master/Reference/UDF-Libraries.md |  44 ++++-----
 .../Master/User-Manual/Write-Delete-Data.md        |   4 +-
 .../Basic-Concept/Cluster-data-partitioning.md     |   2 +-
 .../UserGuide/V1.3.0-2/Reference/UDF-Libraries.md  |  39 +++-----
 .../Basic-Concept/Cluster-data-partitioning.md     |   2 +-
 src/zh/UserGuide/latest/Reference/UDF-Libraries.md |  43 ++++-----
 .../latest/User-Manual/Write-Delete-Data.md        |   4 +-
 19 files changed, 301 insertions(+), 161 deletions(-)

diff --git a/src/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md 
b/src/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md
index a9c6e9b6..c10a6e6a 100644
--- a/src/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md
+++ b/src/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md
@@ -45,7 +45,7 @@ The time partitioning algorithm converts a given timestamp to 
the corresponding
 
 
$$\left\lfloor\frac{\text{Timestamp}-\text{StartTimestamp}}{\text{TimePartitionInterval}}\right\rfloor.$$
 
-In this equation, both $\text{StartTimestamp}$ and 
$\text{TimePartitionInterval}$ are configurable parameters to accommodate 
various production environments. The $\text{StartTimestamp}$ represents the 
starting time of the first time partition, while the 
$\text{TimePartitionInterval}$ defines the duration of each time partition. By 
default, the $\text{TimePartitionInterval}$ is set to one day.
+In this equation, both $\text{StartTimestamp}$ and 
$\text{TimePartitionInterval}$ are configurable parameters to accommodate 
various production environments. The $\text{StartTimestamp}$ represents the 
starting time of the first time partition, while the 
$\text{TimePartitionInterval}$ defines the duration of each time partition. By 
default, the $\text{TimePartitionInterval}$ is set to seven days.
 
 #### Schema Partitioning
 Since the series partitioning algorithm evenly partitions the time series, 
each series partition corresponds to a schema partition. These schema 
partitions are then evenly allocated across the SchemaRegionGroups to achieve a 
balanced schema distribution.
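The partition-slot formula in the hunk above can be sketched as follows (a minimal Python illustration, not IoTDB's internal code; the millisecond units and the default values are assumptions based on the surrounding text):

```python
def time_partition_slot(timestamp_ms, start_timestamp_ms=0,
                        partition_interval_ms=7 * 24 * 60 * 60 * 1000):
    """floor((Timestamp - StartTimestamp) / TimePartitionInterval).

    Defaults assume StartTimestamp = 0 and the seven-day default
    interval described above.
    """
    # Python's floor division matches the floor in the formula,
    # including for timestamps before StartTimestamp.
    return (timestamp_ms - start_timestamp_ms) // partition_interval_ms

day_ms = 24 * 60 * 60 * 1000
print(time_partition_slot(0))            # first partition: 0
print(time_partition_slot(8 * day_ms))   # next partition: 1
print(time_partition_slot(-1))           # before StartTimestamp: -1
```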
diff --git a/src/UserGuide/Master/Reference/UDF-Libraries.md 
b/src/UserGuide/Master/Reference/UDF-Libraries.md
index de7ac2c7..de0ac8b2 100644
--- a/src/UserGuide/Master/Reference/UDF-Libraries.md
+++ b/src/UserGuide/Master/Reference/UDF-Libraries.md
@@ -34,11 +34,11 @@ Based on the ability of user-defined functions, IoTDB 
provides a series of funct
     | UDF-1.3.3.zip | V1.3.3 and above      | 
[UDF.zip](https://alioss.timecho.com/upload/UDF-1.3.3.zip)   |
     | UDF-1.3.2.zip | V1.0.0~V1.3.2  | 
[UDF.zip](https://alioss.timecho.com/upload/UDF-1.3.2.zip) |
     
-2. Place the library-udf.jar file in the compressed file obtained in the 
directory `/ext/udf ` of all nodes in the IoTDB cluster
+2. Place the `library-udf.jar` file from the compressed package obtained above into the `/ext/udf` directory of all nodes in the IoTDB cluster
 3. In the SQL command line terminal (CLI) or visualization console (Workbench) 
SQL operation interface of IoTDB, execute the corresponding function 
registration statement as follows.
 4.  Batch registration: Two registration methods: registration script or SQL 
full statement
 - Register Script 
-    - Copy the registration script (register-UDF.sh or register-UDF.bat) from 
the compressed package to the `tools` directory of IoTDB as needed, and modify 
the parameters in the script (default is host=127.0.0.1, rpcPort=6667, 
user=root, pass=root);
+    - Copy the registration script (`register-UDF.sh` or `register-UDF.bat`) 
from the compressed package to the `tools` directory of IoTDB as needed, and 
modify the parameters in the script (default is host=127.0.0.1, rpcPort=6667, 
user=root, pass=root);
     - Start IoTDB service, run registration script to batch register UDF
 
 - All SQL statements
@@ -3934,26 +3934,86 @@ Output series:
 
 Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, 
the output is $y=2sin(2\pi t/5)$ after low-pass filtering.
 
-<!--
 
-​    Licensed to the Apache Software Foundation (ASF) under one
-​    or more contributor license agreements.  See the NOTICE file
-​    distributed with this work for additional information
-​    regarding copyright ownership.  The ASF licenses this file
-​    to you under the Apache License, Version 2.0 (the
-​    "License"); you may not use this file except in compliance
-​    with the License.  You may obtain a copy of the License at
-​
-​        http://www.apache.org/licenses/LICENSE-2.0
-​
-​    Unless required by applicable law or agreed to in writing,
-​    software distributed under the License is distributed on an
-​    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-​    KIND, either express or implied.  See the License for the
-​    specific language governing permissions and limitations
-​    under the License.
+### Envelope
+
+#### Registration statement
+
+```sql
+create function envelope as 
'org.apache.iotdb.library.frequency.UDFEnvelopeAnalysis'
+```
+
+#### Usage
+
+This function performs signal demodulation and envelope extraction: it takes a one-dimensional floating-point array and a user-specified modulation frequency. The goal of demodulation is to extract the parts of interest from a complex signal, making it easier to understand; for example, demodulation can be used to find the envelope of the signal, that is, the trend of its amplitude.
+
+**Name:** Envelope
+
+**Input:** Only supports a single input sequence, with types 
INT32/INT64/FLOAT/DOUBLE
+
+
+**Parameters:**
+
++ `frequency`: Frequency (optional, positive number; if not specified, the system infers the frequency from the time interval of the sequence).
++ `amplification`: Amplification factor (optional, positive integer; the Time column outputs only positive integers, with no decimals, so when the frequency is less than 1 this parameter can be used to amplify it until the results display correctly).
+
+**Output:**
++ `Time`: The values returned in this column represent frequency rather than time. If they are displayed in time format (e.g. 1970-01-01T08:00:19.000+08:00), please convert them to timestamp values.
+
+
++ `Envelope(Path, 'frequency'='{frequency}')`: Outputs a single sequence of type DOUBLE, which is the result of the envelope analysis.
+
+**Note:** When the values of the original sequence being demodulated are discontinuous, this function treats them as continuous. It is recommended that the analyzed time series be a complete series of values, and that a start time and an end time be specified.
+
+#### Examples
+
+Input series:
+
+
+```
++-----------------------------+---------------+
+|                         Time|root.test.d1.s1|
++-----------------------------+---------------+
+|1970-01-01T08:00:01.000+08:00|       1.0     |
+|1970-01-01T08:00:02.000+08:00|       2.0     |
+|1970-01-01T08:00:03.000+08:00|       3.0     |
+|1970-01-01T08:00:04.000+08:00|       4.0     |
+|1970-01-01T08:00:05.000+08:00|       5.0     |
+|1970-01-01T08:00:06.000+08:00|       6.0     |
+|1970-01-01T08:00:07.000+08:00|       7.0     |
+|1970-01-01T08:00:08.000+08:00|       8.0     |
+|1970-01-01T08:00:09.000+08:00|       9.0     |
+|1970-01-01T08:00:10.000+08:00|       10.0    |
++-----------------------------+---------------+
+```
+
+SQL for query:
+
+```sql
+set time_display_type=long;
+select 
envelope(s1),envelope(s1,'frequency'='1000'),envelope(s1,'amplification'='10') 
from root.test.d1;
+```
+
+Output series:
+
+
+```
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+|Time|envelope(root.test.d1.s1)|envelope(root.test.d1.s1, 
"frequency"="1000")|envelope(root.test.d1.s1, "amplification"="10")|
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+|   0|        6.284350808484124|                            6.284350808484124| 
                             6.284350808484124|
+| 100|       1.5581923657404393|                           1.5581923657404393| 
                                          null|
+| 200|       0.8503211038340728|                           0.8503211038340728| 
                                          null|
+| 300|        0.512808785945551|                            0.512808785945551| 
                                          null|
+| 400|      0.26361156774506744|                          0.26361156774506744| 
                                          null|
+|1000|                     null|                                         null| 
                            1.5581923657404393|
+|2000|                     null|                                         null| 
                            0.8503211038340728|
+|3000|                     null|                                         null| 
                             0.512808785945551|
+|4000|                     null|                                         null| 
                           0.26361156774506744|
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+
+```
 
--->
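The diff above documents the Envelope UDF's inputs and outputs but not its implementation. As a rough sketch only: envelope extraction of this kind is commonly done with an FFT-based analytic signal (Hilbert transform), illustrated below with NumPy. This is an assumption about the technique, not the code behind `UDFEnvelopeAnalysis`, and it does not reproduce the UDF's frequency-axis output shown in the example table.

```python
import numpy as np

def envelope(values):
    """Amplitude envelope |x + j*H(x)| via an FFT-based Hilbert
    transform (the standard analytic-signal construction)."""
    x = np.asarray(values, dtype=float)
    n = len(x)
    spectrum = np.fft.fft(x)
    # Keep DC (and Nyquist for even n), double positive frequencies,
    # zero negative frequencies.
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    return np.abs(analytic)

# A sine of amplitude 2 sampled over whole periods has a flat
# envelope of ~2 everywhere.
t = np.arange(0, 10, 0.01)
env = envelope(2.0 * np.sin(2 * np.pi * t))
print(round(float(env[len(env) // 2]), 3))  # 2.0
```

Run on the 1-to-10 ramp from the example input, the exact numbers would depend on the UDF's windowing and frequency handling, which this sketch does not model.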
 
 ## Data Matching
 
diff --git a/src/UserGuide/Master/SQL-Manual/SQL-Manual.md 
b/src/UserGuide/Master/SQL-Manual/SQL-Manual.md
index 19b8b79e..62eb67e9 100644
--- a/src/UserGuide/Master/SQL-Manual/SQL-Manual.md
+++ b/src/UserGuide/Master/SQL-Manual/SQL-Manual.md
@@ -444,7 +444,7 @@ IoTDB > select * from root.sg1.d1
 
 ### Load External TsFile Tool
 
-For more details, see document 
[Import-Export-Tool](../Tools-System/TsFile-Import-Export-Tool.md).
+For more details, see document [Data 
Import](../Tools-System/Data-Import-Tool.md).
 
 #### Load with SQL
 
diff --git a/src/UserGuide/Master/User-Manual/Write-Delete-Data.md 
b/src/UserGuide/Master/User-Manual/Write-Delete-Data.md
index f0285dba..b5600b99 100644
--- a/src/UserGuide/Master/User-Manual/Write-Delete-Data.md
+++ b/src/UserGuide/Master/User-Manual/Write-Delete-Data.md
@@ -185,11 +185,11 @@ In different scenarios, the IoTDB provides a variety of 
methods for importing da
 
 ### TsFile Batch Load
 
-TsFile is the file format of time series used in IoTDB. You can directly 
import one or more TsFile files with time series into another running IoTDB 
instance through tools such as CLI. For details, see 
[Import-Export-Tool](../Tools-System/TsFile-Import-Export-Tool.md).
+TsFile is the file format of time series used in IoTDB. You can directly 
import one or more TsFile files with time series into another running IoTDB 
instance through tools such as CLI. For details, see [Data 
Import](../Tools-System/Data-Import-Tool.md).
 
 ### CSV Batch Load
 
-CSV stores table data in plain text. You can write multiple formatted data 
into a CSV file and import the data into the IoTDB in batches. Before importing 
data, you are advised to create the corresponding metadata in the IoTDB. Don't 
worry if you forget to create one, the IoTDB can automatically infer the data 
in the CSV to its corresponding data type, as long as you have a unique data 
type for each column. In addition to a single file, the tool supports importing 
multiple CSV files as f [...]
+CSV stores table data in plain text. You can write multiple formatted data 
into a CSV file and import the data into the IoTDB in batches. Before importing 
data, you are advised to create the corresponding metadata in the IoTDB. Don't 
worry if you forget to create one, the IoTDB can automatically infer the data 
in the CSV to its corresponding data type, as long as you have a unique data 
type for each column. In addition to a single file, the tool supports importing 
multiple CSV files as f [...]
 
 ## DELETE
 
diff --git a/src/UserGuide/V1.2.x/QuickStart/QuickStart.md 
b/src/UserGuide/V1.2.x/QuickStart/QuickStart.md
index db84ed9f..cd86ffde 100644
--- a/src/UserGuide/V1.2.x/QuickStart/QuickStart.md
+++ b/src/UserGuide/V1.2.x/QuickStart/QuickStart.md
@@ -51,7 +51,7 @@ Configuration files are located in the `conf` folder
   * system config module (`iotdb-datanode.properties`)
   * log config module (`logback.xml`). 
 
-For more information, please go to 
[Config](../stage/DataNode-Config-Manual.md).
+For more information, please go to 
[Config](../Reference/DataNode-Config-Manual.md).
 
 ## Start
 
@@ -244,7 +244,7 @@ The server can be stopped using `ctrl-C` or by running the 
following script:
 ```
 Note: In Linux, please add the `sudo` as far as possible, or else the stopping 
process may fail. <!-- TODO: Actually running things as `root` is considered a 
bad practice from security perspective. Is there a reson for requiring root? I 
don't think we're using any privileged ports or resources. -->
 
-More explanations on running IoTDB in a clustered environment are available at 
[Cluster-Setup](../stage/Cluster/Cluster-Setup.md).
+More explanations on running IoTDB in a clustered environment are available at 
[Cluster-Setup](../Deployment-and-Maintenance/Deployment-Guide_timecho.md).
 
 ### Administration
 
diff --git a/src/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md 
b/src/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md
index a9c6e9b6..c10a6e6a 100644
--- a/src/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md
+++ b/src/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md
@@ -45,7 +45,7 @@ The time partitioning algorithm converts a given timestamp to 
the corresponding
 
 
$$\left\lfloor\frac{\text{Timestamp}-\text{StartTimestamp}}{\text{TimePartitionInterval}}\right\rfloor.$$
 
-In this equation, both $\text{StartTimestamp}$ and 
$\text{TimePartitionInterval}$ are configurable parameters to accommodate 
various production environments. The $\text{StartTimestamp}$ represents the 
starting time of the first time partition, while the 
$\text{TimePartitionInterval}$ defines the duration of each time partition. By 
default, the $\text{TimePartitionInterval}$ is set to one day.
+In this equation, both $\text{StartTimestamp}$ and 
$\text{TimePartitionInterval}$ are configurable parameters to accommodate 
various production environments. The $\text{StartTimestamp}$ represents the 
starting time of the first time partition, while the 
$\text{TimePartitionInterval}$ defines the duration of each time partition. By 
default, the $\text{TimePartitionInterval}$ is set to seven days.
 
 #### Schema Partitioning
 Since the series partitioning algorithm evenly partitions the time series, 
each series partition corresponds to a schema partition. These schema 
partitions are then evenly allocated across the SchemaRegionGroups to achieve a 
balanced schema distribution.
diff --git a/src/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md 
b/src/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md
index 0a4b7dfc..ab36b65a 100644
--- a/src/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md
+++ b/src/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md
@@ -32,11 +32,11 @@ Based on the ability of user-defined functions, IoTDB 
provides a series of funct
     | UDF-1.3.3.zip | V1.3.3 and above      | 
[UDF.zip](https://alioss.timecho.com/upload/UDF-1.3.3.zip)   |
     | UDF-1.3.2.zip | V1.0.0~V1.3.2  | 
[UDF.zip](https://alioss.timecho.com/upload/UDF-1.3.2.zip) |
     
-2. Place the library-udf.jar file in the compressed file obtained in the 
directory `/ext/udf ` of all nodes in the IoTDB cluster
+2. Place the `library-udf.jar` file from the compressed package obtained above into the `/ext/udf` directory of all nodes in the IoTDB cluster
 3. In the SQL command line terminal (CLI) or visualization console (Workbench) 
SQL operation interface of IoTDB, execute the corresponding function 
registration statement as follows.
 4.  Batch registration: Two registration methods: registration script or SQL 
full statement
 - Register Script 
-    - Copy the registration script (register-UDF.sh or register-UDF.bat) from 
the compressed package to the `tools` directory of IoTDB as needed, and modify 
the parameters in the script (default is host=127.0.0.1, rpcPort=6667, 
user=root, pass=root);
+    - Copy the registration script (`register-UDF.sh` or `register-UDF.bat`) 
from the compressed package to the `tools` directory of IoTDB as needed, and 
modify the parameters in the script (default is host=127.0.0.1, rpcPort=6667, 
user=root, pass=root);
     - Start IoTDB service, run registration script to batch register UDF
 
 - All SQL statements
@@ -3933,26 +3933,86 @@ Output series:
 
 Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, 
the output is $y=2sin(2\pi t/5)$ after low-pass filtering.
 
-<!--
 
-​    Licensed to the Apache Software Foundation (ASF) under one
-​    or more contributor license agreements.  See the NOTICE file
-​    distributed with this work for additional information
-​    regarding copyright ownership.  The ASF licenses this file
-​    to you under the Apache License, Version 2.0 (the
-​    "License"); you may not use this file except in compliance
-​    with the License.  You may obtain a copy of the License at
-​
-​        http://www.apache.org/licenses/LICENSE-2.0
-​
-​    Unless required by applicable law or agreed to in writing,
-​    software distributed under the License is distributed on an
-​    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-​    KIND, either express or implied.  See the License for the
-​    specific language governing permissions and limitations
-​    under the License.
+### Envelope
+
+#### Registration statement
+
+```sql
+create function envelope as 
'org.apache.iotdb.library.frequency.UDFEnvelopeAnalysis'
+```
+
+#### Usage
+
+This function performs signal demodulation and envelope extraction: it takes a one-dimensional floating-point array and a user-specified modulation frequency. The goal of demodulation is to extract the parts of interest from a complex signal, making it easier to understand; for example, demodulation can be used to find the envelope of the signal, that is, the trend of its amplitude.
+
+**Name:** Envelope
+
+**Input:** Only supports a single input sequence, with types 
INT32/INT64/FLOAT/DOUBLE
+
+
+**Parameters:**
+
++ `frequency`: Frequency (optional, positive number; if not specified, the system infers the frequency from the time interval of the sequence).
++ `amplification`: Amplification factor (optional, positive integer; the Time column outputs only positive integers, with no decimals, so when the frequency is less than 1 this parameter can be used to amplify it until the results display correctly).
+
+**Output:**
++ `Time`: The values returned in this column represent frequency rather than time. If they are displayed in time format (e.g. 1970-01-01T08:00:19.000+08:00), please convert them to timestamp values.
+
+
++ `Envelope(Path, 'frequency'='{frequency}')`: Outputs a single sequence of type DOUBLE, which is the result of the envelope analysis.
+
+**Note:** When the values of the original sequence being demodulated are discontinuous, this function treats them as continuous. It is recommended that the analyzed time series be a complete series of values, and that a start time and an end time be specified.
+
+#### Examples
+
+Input series:
+
+
+```
++-----------------------------+---------------+
+|                         Time|root.test.d1.s1|
++-----------------------------+---------------+
+|1970-01-01T08:00:01.000+08:00|       1.0     |
+|1970-01-01T08:00:02.000+08:00|       2.0     |
+|1970-01-01T08:00:03.000+08:00|       3.0     |
+|1970-01-01T08:00:04.000+08:00|       4.0     |
+|1970-01-01T08:00:05.000+08:00|       5.0     |
+|1970-01-01T08:00:06.000+08:00|       6.0     |
+|1970-01-01T08:00:07.000+08:00|       7.0     |
+|1970-01-01T08:00:08.000+08:00|       8.0     |
+|1970-01-01T08:00:09.000+08:00|       9.0     |
+|1970-01-01T08:00:10.000+08:00|       10.0    |
++-----------------------------+---------------+
+```
+
+SQL for query:
+
+```sql
+set time_display_type=long;
+select 
envelope(s1),envelope(s1,'frequency'='1000'),envelope(s1,'amplification'='10') 
from root.test.d1;
+```
+
+Output series:
+
+
+```
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+|Time|envelope(root.test.d1.s1)|envelope(root.test.d1.s1, 
"frequency"="1000")|envelope(root.test.d1.s1, "amplification"="10")|
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+|   0|        6.284350808484124|                            6.284350808484124| 
                             6.284350808484124|
+| 100|       1.5581923657404393|                           1.5581923657404393| 
                                          null|
+| 200|       0.8503211038340728|                           0.8503211038340728| 
                                          null|
+| 300|        0.512808785945551|                            0.512808785945551| 
                                          null|
+| 400|      0.26361156774506744|                          0.26361156774506744| 
                                          null|
+|1000|                     null|                                         null| 
                            1.5581923657404393|
+|2000|                     null|                                         null| 
                            0.8503211038340728|
+|3000|                     null|                                         null| 
                             0.512808785945551|
+|4000|                     null|                                         null| 
                           0.26361156774506744|
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+
+```
 
--->
 
 ## Data Matching
 
diff --git a/src/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md 
b/src/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md
index a9c6e9b6..c10a6e6a 100644
--- a/src/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md
+++ b/src/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md
@@ -45,7 +45,7 @@ The time partitioning algorithm converts a given timestamp to 
the corresponding
 
 
$$\left\lfloor\frac{\text{Timestamp}-\text{StartTimestamp}}{\text{TimePartitionInterval}}\right\rfloor.$$
 
-In this equation, both $\text{StartTimestamp}$ and 
$\text{TimePartitionInterval}$ are configurable parameters to accommodate 
various production environments. The $\text{StartTimestamp}$ represents the 
starting time of the first time partition, while the 
$\text{TimePartitionInterval}$ defines the duration of each time partition. By 
default, the $\text{TimePartitionInterval}$ is set to one day.
+In this equation, both $\text{StartTimestamp}$ and 
$\text{TimePartitionInterval}$ are configurable parameters to accommodate 
various production environments. The $\text{StartTimestamp}$ represents the 
starting time of the first time partition, while the 
$\text{TimePartitionInterval}$ defines the duration of each time partition. By 
default, the $\text{TimePartitionInterval}$ is set to seven days.
 
 #### Schema Partitioning
 Since the series partitioning algorithm evenly partitions the time series, 
each series partition corresponds to a schema partition. These schema 
partitions are then evenly allocated across the SchemaRegionGroups to achieve a 
balanced schema distribution.
diff --git a/src/UserGuide/latest/Reference/UDF-Libraries.md 
b/src/UserGuide/latest/Reference/UDF-Libraries.md
index de7ac2c7..de0ac8b2 100644
--- a/src/UserGuide/latest/Reference/UDF-Libraries.md
+++ b/src/UserGuide/latest/Reference/UDF-Libraries.md
@@ -34,11 +34,11 @@ Based on the ability of user-defined functions, IoTDB 
provides a series of funct
     | UDF-1.3.3.zip | V1.3.3 and above      | 
[UDF.zip](https://alioss.timecho.com/upload/UDF-1.3.3.zip)   |
     | UDF-1.3.2.zip | V1.0.0~V1.3.2  | 
[UDF.zip](https://alioss.timecho.com/upload/UDF-1.3.2.zip) |
     
-2. Place the library-udf.jar file in the compressed file obtained in the 
directory `/ext/udf ` of all nodes in the IoTDB cluster
+2. Place the `library-udf.jar` file from the compressed package obtained above into the `/ext/udf` directory of all nodes in the IoTDB cluster
 3. In the SQL command line terminal (CLI) or visualization console (Workbench) 
SQL operation interface of IoTDB, execute the corresponding function 
registration statement as follows.
 4.  Batch registration: Two registration methods: registration script or SQL 
full statement
 - Register Script 
-    - Copy the registration script (register-UDF.sh or register-UDF.bat) from 
the compressed package to the `tools` directory of IoTDB as needed, and modify 
the parameters in the script (default is host=127.0.0.1, rpcPort=6667, 
user=root, pass=root);
+    - Copy the registration script (`register-UDF.sh` or `register-UDF.bat`) 
from the compressed package to the `tools` directory of IoTDB as needed, and 
modify the parameters in the script (default is host=127.0.0.1, rpcPort=6667, 
user=root, pass=root);
     - Start IoTDB service, run registration script to batch register UDF
 
 - All SQL statements
@@ -3934,26 +3934,86 @@ Output series:
 
 Note: The input is $y=sin(2\pi t/4)+2sin(2\pi t/5)$ with a length of 20. Thus, 
the output is $y=2sin(2\pi t/5)$ after low-pass filtering.
 
-<!--
 
-​    Licensed to the Apache Software Foundation (ASF) under one
-​    or more contributor license agreements.  See the NOTICE file
-​    distributed with this work for additional information
-​    regarding copyright ownership.  The ASF licenses this file
-​    to you under the Apache License, Version 2.0 (the
-​    "License"); you may not use this file except in compliance
-​    with the License.  You may obtain a copy of the License at
-​
-​        http://www.apache.org/licenses/LICENSE-2.0
-​
-​    Unless required by applicable law or agreed to in writing,
-​    software distributed under the License is distributed on an
-​    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-​    KIND, either express or implied.  See the License for the
-​    specific language governing permissions and limitations
-​    under the License.
+### Envelope
+
+#### Registration statement
+
+```sql
+create function envelope as 
'org.apache.iotdb.library.frequency.UDFEnvelopeAnalysis'
+```
+
+#### Usage
+
+This function performs signal demodulation and envelope extraction: it takes a one-dimensional floating-point array and a user-specified modulation frequency. The goal of demodulation is to extract the parts of interest from a complex signal, making it easier to understand; for example, demodulation can be used to find the envelope of the signal, that is, the trend of its amplitude.
+
+**Name:** Envelope
+
+**Input:** Only supports a single input sequence, with types 
INT32/INT64/FLOAT/DOUBLE
+
+
+**Parameters:**
+
++ `frequency`: Frequency (optional, positive number; if not specified, the system infers the frequency from the time interval of the sequence).
++ `amplification`: Amplification factor (optional, positive integer; the Time column outputs only positive integers, with no decimals, so when the frequency is less than 1 this parameter can be used to amplify it until the results display correctly).
+
+**Output:**
++ `Time`: The values returned in this column represent frequency rather than time. If they are displayed in time format (e.g. 1970-01-01T08:00:19.000+08:00), please convert them to timestamp values.
+
+
++ `Envelope(Path, 'frequency'='{frequency}')`: Outputs a single sequence of type DOUBLE, which is the result of the envelope analysis.
+
+**Note:** When the values of the original sequence being demodulated are discontinuous, this function treats them as continuous. It is recommended that the analyzed time series be a complete series of values, and that a start time and an end time be specified.
+
+#### Examples
+
+Input series:
+
+
+```
++-----------------------------+---------------+
+|                         Time|root.test.d1.s1|
++-----------------------------+---------------+
+|1970-01-01T08:00:01.000+08:00|       1.0     |
+|1970-01-01T08:00:02.000+08:00|       2.0     |
+|1970-01-01T08:00:03.000+08:00|       3.0     |
+|1970-01-01T08:00:04.000+08:00|       4.0     |
+|1970-01-01T08:00:05.000+08:00|       5.0     |
+|1970-01-01T08:00:06.000+08:00|       6.0     |
+|1970-01-01T08:00:07.000+08:00|       7.0     |
+|1970-01-01T08:00:08.000+08:00|       8.0     |
+|1970-01-01T08:00:09.000+08:00|       9.0     |
+|1970-01-01T08:00:10.000+08:00|       10.0    |
++-----------------------------+---------------+
+```
+
+SQL for query:
+
+```sql
+set time_display_type=long;
+select 
envelope(s1),envelope(s1,'frequency'='1000'),envelope(s1,'amplification'='10') 
from root.test.d1;
+```
+
+Output series:
+
+
+```
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+|Time|envelope(root.test.d1.s1)|envelope(root.test.d1.s1, "frequency"="1000")|envelope(root.test.d1.s1, "amplification"="10")|
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+|   0|        6.284350808484124|                            6.284350808484124|                              6.284350808484124|
+| 100|       1.5581923657404393|                           1.5581923657404393|                                           null|
+| 200|       0.8503211038340728|                           0.8503211038340728|                                           null|
+| 300|        0.512808785945551|                            0.512808785945551|                                           null|
+| 400|      0.26361156774506744|                          0.26361156774506744|                                           null|
+|1000|                     null|                                         null|                             1.5581923657404393|
+|2000|                     null|                                         null|                             0.8503211038340728|
+|3000|                     null|                                         null|                              0.512808785945551|
+|4000|                     null|                                         null|                            0.26361156774506744|
++----+-------------------------+---------------------------------------------+-----------------------------------------------+
+
+```
 
--->
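The envelope values in the table above come from IoTDB's UDF, but the general technique is easy to sanity-check outside the database. Below is a minimal NumPy sketch of envelope extraction via an FFT-based Hilbert transform; it is not the UDF's actual implementation, and the 20 Hz test signal is invented for illustration:

```python
import numpy as np

def envelope(signal):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    weights = np.zeros(n)
    weights[0] = 1.0
    if n % 2 == 0:
        weights[n // 2] = 1.0        # keep the Nyquist bin once for even n
        weights[1:n // 2] = 2.0      # double the positive frequencies
    else:
        weights[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * weights)
    return np.abs(analytic)          # magnitude = instantaneous amplitude

# 2 s of a 20 Hz carrier whose amplitude varies at 2 Hz, sampled at 100 Hz
t = np.arange(0, 2, 0.01)
modulator = 1.5 + np.cos(2 * np.pi * 2 * t)
env = envelope(modulator * np.sin(2 * np.pi * 20 * t))
print(len(env))                      # one envelope sample per input sample
```

Because this test window holds whole periods of both the carrier and the modulation, the recovered envelope matches the modulator almost exactly; on real, aperiodic data there will be edge effects, which is why the doc recommends analyzing a complete, contiguous stretch of values.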
 
 ## Data Matching
 
diff --git a/src/UserGuide/latest/SQL-Manual/SQL-Manual.md b/src/UserGuide/latest/SQL-Manual/SQL-Manual.md
index c9076fd6..ca286e69 100644
--- a/src/UserGuide/latest/SQL-Manual/SQL-Manual.md
+++ b/src/UserGuide/latest/SQL-Manual/SQL-Manual.md
@@ -444,7 +444,7 @@ IoTDB > select * from root.sg1.d1
 
 ### Load External TsFile Tool
 
-For more details, see document [Import-Export-Tool](../Tools-System/TsFile-Import-Export-Tool.md).
+For more details, see document [Data Import](../Tools-System/Data-Import-Tool.md).
 
 #### Load with SQL
 
diff --git a/src/UserGuide/latest/User-Manual/Write-Delete-Data.md b/src/UserGuide/latest/User-Manual/Write-Delete-Data.md
index f0285dba..b5600b99 100644
--- a/src/UserGuide/latest/User-Manual/Write-Delete-Data.md
+++ b/src/UserGuide/latest/User-Manual/Write-Delete-Data.md
@@ -185,11 +185,11 @@ In different scenarios, the IoTDB provides a variety of methods for importing da
 
 ### TsFile Batch Load
 
-TsFile is the file format of time series used in IoTDB. You can directly import one or more TsFile files with time series into another running IoTDB instance through tools such as CLI. For details, see [Import-Export-Tool](../Tools-System/TsFile-Import-Export-Tool.md).
+TsFile is the file format of time series used in IoTDB. You can directly import one or more TsFile files with time series into another running IoTDB instance through tools such as CLI. For details, see [Data Import](../Tools-System/Data-Import-Tool.md).
 
 ### CSV Batch Load
 
-CSV stores table data in plain text. You can write multiple formatted data into a CSV file and import the data into the IoTDB in batches. Before importing data, you are advised to create the corresponding metadata in the IoTDB. Don't worry if you forget to create one, the IoTDB can automatically infer the data in the CSV to its corresponding data type, as long as you have a unique data type for each column. In addition to a single file, the tool supports importing multiple CSV files as f [...]
+CSV stores table data in plain text. You can write multiple formatted data into a CSV file and import the data into the IoTDB in batches. Before importing data, you are advised to create the corresponding metadata in the IoTDB. Don't worry if you forget to create one, the IoTDB can automatically infer the data in the CSV to its corresponding data type, as long as you have a unique data type for each column. In addition to a single file, the tool supports importing multiple CSV files as f [...]
 
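The per-column type inference described above can be illustrated with a small sketch. This is a hypothetical helper, not the import tool's actual code, and the type-widening rule at the end is an assumption; the real importer's rules may differ:

```python
def infer_column_type(values):
    """Guess one IoTDB-style data type for a whole CSV column (illustrative only)."""
    def classify(cell):
        if cell.lower() in ("true", "false"):
            return "BOOLEAN"
        try:
            int(cell)
            return "INT64"
        except ValueError:
            pass
        try:
            float(cell)
            return "DOUBLE"
        except ValueError:
            return "TEXT"

    kinds = {classify(v) for v in values if v != ""}  # ignore empty cells
    if len(kinds) == 1:
        return kinds.pop()
    if kinds == {"INT64", "DOUBLE"}:
        return "DOUBLE"  # assumed widening: integers fit into the float type
    return "TEXT"        # mixed column: only TEXT can hold every cell

print(infer_column_type(["12", "7", "42"]))     # INT64
print(infer_column_type(["1.5", "2", "3.25"]))  # DOUBLE
print(infer_column_type(["true", "False"]))     # BOOLEAN
print(infer_column_type(["3", "n/a"]))          # TEXT
```

This is why the paragraph above asks for a unique data type per column: a column mixing, say, numbers and free text can only fall back to TEXT.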
 ## DELETE
 
diff --git a/src/zh/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md b/src/zh/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md
index fdb0340c..3d188f07 100644
--- a/src/zh/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md
+++ b/src/zh/UserGuide/Master/Basic-Concept/Cluster-data-partitioning.md
@@ -45,7 +45,7 @@ IoTDB 将生产环境中的每个传感器映射为一个时间序列。然后
 
 
$$\left\lfloor\frac{\text{Timestamp}-\text{StartTimestamp}}{\text{TimePartitionInterval}}\right\rfloor\text{。}$$
 
-在此式中,$\text{StartTimestamp}$ 和 $\text{TimePartitionInterval}$ 都是可配置参数,以适应不同的生产环境。$\text{StartTimestamp}$ 表示第一个时间分区的起始时间,而 $\text{TimePartitionInterval}$ 定义了每个时间分区的持续时间。默认情况下,$\text{TimePartitionInterval}$ 设置为一天。
+在此式中,$\text{StartTimestamp}$ 和 $\text{TimePartitionInterval}$ 都是可配置参数,以适应不同的生产环境。$\text{StartTimestamp}$ 表示第一个时间分区的起始时间,而 $\text{TimePartitionInterval}$ 定义了每个时间分区的持续时间。默认情况下,$\text{TimePartitionInterval}$ 设置为七天。
 
 #### 元数据分区
 由于序列分区算法对时间序列进行了均匀分区,每个序列分区对应一个元数据分区。这些元数据分区随后被均匀分配到 元数据分片 中,以实现元数据的均衡分布。
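The partition formula in this hunk (its default interval corrected from one day to seven days) boils down to one integer division. A small sketch, where the parameter names and the 7-day default mirror the doc text rather than actual IoTDB configuration keys:

```python
def time_partition_slot(timestamp_ms, start_timestamp_ms=0,
                        interval_ms=7 * 24 * 3600 * 1000):
    """floor((Timestamp - StartTimestamp) / TimePartitionInterval)."""
    # Python's // is floor division, matching the formula's floor brackets.
    return (timestamp_ms - start_timestamp_ms) // interval_ms

WEEK_MS = 7 * 24 * 3600 * 1000
print(time_partition_slot(0))            # 0: first partition
print(time_partition_slot(WEEK_MS - 1))  # 0: still inside the first week
print(time_partition_slot(WEEK_MS))      # 1: second partition begins
```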
diff --git a/src/zh/UserGuide/Master/Reference/UDF-Libraries.md b/src/zh/UserGuide/Master/Reference/UDF-Libraries.md
index 226c853c..c8a0871a 100644
--- a/src/zh/UserGuide/Master/Reference/UDF-Libraries.md
+++ b/src/zh/UserGuide/Master/Reference/UDF-Libraries.md
@@ -30,11 +30,11 @@
    | UDF-1.3.3.zip | V1.3.3及以上      | [压缩包](https://alioss.timecho.com/upload/UDF-1.3.3.zip)   |
    | UDF-1.3.2.zip | V1.0.0~V1.3.2  | [压缩包](https://alioss.timecho.com/upload/UDF-1.3.2.zip) |
     
-2. 将获取的压缩包中的 library-udf.jar 文件放置在 IoTDB 集群所有节点的 `/ext/udf` 的目录下
+2. 将获取的压缩包中的 `library-udf.jar` 文件放置在 IoTDB 集群所有节点的 `/ext/udf` 的目录下
 3. 在 IoTDB 的 SQL 命令行终端(CLI)或可视化控制台(Workbench)的 SQL 操作界面中,执行下述相应的函数注册语句。
 4. 批量注册:两种注册方式:注册脚本 或 SQL汇总语句
 - 注册脚本 
-    - 将压缩包中的注册脚本(register-UDF.sh 或 register-UDF.bat)按需复制到 IoTDB 的 tools 目录下,修改脚本中的参数(默认为host=127.0.0.1,rpcPort=6667,user=root,pass=root);
+    - 将压缩包中的注册脚本(`register-UDF.sh` 或 `register-UDF.bat`)按需复制到 IoTDB 的 tools 目录下,修改脚本中的参数(默认为host=127.0.0.1,rpcPort=6667,user=root,pass=root);
     - 启动 IoTDB 服务,运行注册脚本批量注册 UDF
 
 - SQL汇总语句
@@ -3946,7 +3946,6 @@ create function lowpass as 'org.apache.iotdb.library.frequency.UDTFLowPass'
 +-----------------------------+---------------+
 ```
 
-
 用于查询的SQL语句:
 
 ```sql
@@ -3981,9 +3980,19 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1
 |1970-01-01T08:00:19.000+08:00|                  -2.664535259100376E-16|
 +-----------------------------+----------------------------------------+
 ```
-## Envelope
 
-### 函数简介
+注:输入序列服从$y=sin(2\pi t/4)+2sin(2\pi t/5)$,长度为20,因此低通滤波之后的输出序列服从$y=2sin(2\pi t/5)$。
+
+
+### Envelope
+
+#### 注册语句
+
+```sql
+create function envelope as 'org.apache.iotdb.library.frequency.UDFEnvelopeAnalysis'
+```
+
+#### 函数简介
 
 
本函数通过输入一维浮点数数组和用户指定的调制频率,实现对信号的解调和包络提取。解调的目标是从复杂的信号中提取感兴趣的部分,使其更易理解。比如通过解调可以找到信号的包络,即振幅的变化趋势。
 
@@ -4003,7 +4012,7 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1
 
 **提示:** 当解调的原始序列的值不连续时,本函数会视为连续处理,建议被分析的时间序列是一段值完整的时间序列。同时建议指定开始时间与结束时间。
 
-### 使用示例
+#### 使用示例
 
 输入序列:
 
@@ -4048,29 +4057,6 @@ select envelope(s1),envelope(s1,'frequency'='1000'),envelope(s1,'amplification'=
 
 ```
 
-注:输入序列服从$y=sin(2\pi t/4)+2sin(2\pi t/5)$,长度为20,因此低通滤波之后的输出序列服从$y=2sin(2\pi t/5)$。
-
-<!--
-
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-    
-        http://www.apache.org/licenses/LICENSE-2.0
-    
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
-
--->
-
 ## 数据匹配
 
 ### Cov
diff --git a/src/zh/UserGuide/Master/User-Manual/Write-Delete-Data.md b/src/zh/UserGuide/Master/User-Manual/Write-Delete-Data.md
index 22cd2736..f7c7bcc5 100644
--- a/src/zh/UserGuide/Master/User-Manual/Write-Delete-Data.md
+++ b/src/zh/UserGuide/Master/User-Manual/Write-Delete-Data.md
@@ -176,11 +176,11 @@ It costs 0.004s
 
 ### TsFile批量导入
 
-TsFile 是在 IoTDB 中使用的时间序列的文件格式,您可以通过CLI等工具直接将存有时间序列的一个或多个 TsFile 文件导入到另外一个正在运行的IoTDB实例中。具体操作方式请参考[导入导出工具](../Tools-System/TsFile-Import-Export-Tool.md)。
+TsFile 是在 IoTDB 中使用的时间序列的文件格式,您可以通过CLI等工具直接将存有时间序列的一个或多个 TsFile 文件导入到另外一个正在运行的IoTDB实例中。具体操作方式请参考[数据导入](../Tools-System/Data-Import-Tool.md)。
 
 ### CSV批量导入
 
-CSV 是以纯文本形式存储表格数据,您可以在CSV文件中写入多条格式化的数据,并批量的将这些数据导入到 IoTDB 中,在导入数据之前,建议在IoTDB中创建好对应的元数据信息。如果忘记创建元数据也不要担心,IoTDB 可以自动将CSV中数据推断为其对应的数据类型,前提是你每一列的数据类型必须唯一。除单个文件外,此工具还支持以文件夹的形式导入多个 CSV 文件,并且支持设置如时间精度等优化参数。具体操作方式请参考[导入导出工具](../Tools-System/Data-Import-Export-Tool.md)。
+CSV 是以纯文本形式存储表格数据,您可以在CSV文件中写入多条格式化的数据,并批量的将这些数据导入到 IoTDB 中,在导入数据之前,建议在IoTDB中创建好对应的元数据信息。如果忘记创建元数据也不要担心,IoTDB 可以自动将CSV中数据推断为其对应的数据类型,前提是你每一列的数据类型必须唯一。除单个文件外,此工具还支持以文件夹的形式导入多个 CSV 文件,并且支持设置如时间精度等优化参数。具体操作方式请参考[数据导入](../Tools-System/Data-Import-Tool.md)。
 
 ## 删除数据
 
diff --git a/src/zh/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md b/src/zh/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md
index fdb0340c..3d188f07 100644
--- a/src/zh/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md
+++ b/src/zh/UserGuide/V1.3.0-2/Basic-Concept/Cluster-data-partitioning.md
@@ -45,7 +45,7 @@ IoTDB 将生产环境中的每个传感器映射为一个时间序列。然后
 
 
$$\left\lfloor\frac{\text{Timestamp}-\text{StartTimestamp}}{\text{TimePartitionInterval}}\right\rfloor\text{。}$$
 
-在此式中,$\text{StartTimestamp}$ 和 $\text{TimePartitionInterval}$ 都是可配置参数,以适应不同的生产环境。$\text{StartTimestamp}$ 表示第一个时间分区的起始时间,而 $\text{TimePartitionInterval}$ 定义了每个时间分区的持续时间。默认情况下,$\text{TimePartitionInterval}$ 设置为一天。
+在此式中,$\text{StartTimestamp}$ 和 $\text{TimePartitionInterval}$ 都是可配置参数,以适应不同的生产环境。$\text{StartTimestamp}$ 表示第一个时间分区的起始时间,而 $\text{TimePartitionInterval}$ 定义了每个时间分区的持续时间。默认情况下,$\text{TimePartitionInterval}$ 设置为七天。
 
 #### 元数据分区
 由于序列分区算法对时间序列进行了均匀分区,每个序列分区对应一个元数据分区。这些元数据分区随后被均匀分配到 元数据分片 中,以实现元数据的均衡分布。
diff --git a/src/zh/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md b/src/zh/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md
index 6d0bab5d..9892cb54 100644
--- a/src/zh/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md
+++ b/src/zh/UserGuide/V1.3.0-2/Reference/UDF-Libraries.md
@@ -30,11 +30,11 @@
    | UDF-1.3.3.zip | V1.3.3及以上      | [压缩包](https://alioss.timecho.com/upload/UDF-1.3.3.zip)   |
    | UDF-1.3.2.zip | V1.0.0~V1.3.2  | [压缩包](https://alioss.timecho.com/upload/UDF-1.3.2.zip) |
     
-2. 将获取的压缩包中的 library-udf.jar 文件放置在 IoTDB 集群所有节点的 `/ext/udf` 的目录下
+2. 将获取的压缩包中的 `library-udf.jar` 文件放置在 IoTDB 集群所有节点的 `/ext/udf` 的目录下
 3. 在 IoTDB 的 SQL 命令行终端(CLI)或可视化控制台(Workbench)的 SQL 操作界面中,执行下述相应的函数注册语句。
 4. 批量注册:两种注册方式:注册脚本 或 SQL汇总语句
 - 注册脚本 
-    - 将压缩包中的注册脚本(register-UDF.sh 或 register-UDF.bat)按需复制到 IoTDB 的 tools 目录下,修改脚本中的参数(默认为host=127.0.0.1,rpcPort=6667,user=root,pass=root);
+    - 将压缩包中的注册脚本(`register-UDF.sh` 或 `register-UDF.bat`)按需复制到 IoTDB 的 tools 目录下,修改脚本中的参数(默认为host=127.0.0.1,rpcPort=6667,user=root,pass=root);
     - 启动 IoTDB 服务,运行注册脚本批量注册 UDF
 
 - SQL汇总语句
@@ -3932,9 +3932,16 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1
 
 注:输入序列服从$y=sin(2\pi t/4)+2sin(2\pi t/5)$,长度为20,因此低通滤波之后的输出序列服从$y=2sin(2\pi t/5)$。
 
-## Envelope
 
-### 函数简介
+### Envelope
+
+#### 注册语句
+
+```sql
+create function envelope as 'org.apache.iotdb.library.frequency.UDFEnvelopeAnalysis'
+```
+
+#### 函数简介
 
 
本函数通过输入一维浮点数数组和用户指定的调制频率,实现对信号的解调和包络提取。解调的目标是从复杂的信号中提取感兴趣的部分,使其更易理解。比如通过解调可以找到信号的包络,即振幅的变化趋势。
 
@@ -3954,7 +3961,7 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1
 
 **提示:** 当解调的原始序列的值不连续时,本函数会视为连续处理,建议被分析的时间序列是一段值完整的时间序列。同时建议指定开始时间与结束时间。
 
-### 使用示例
+#### 使用示例
 
 输入序列:
 
@@ -3996,28 +4003,8 @@ select envelope(s1),envelope(s1,'frequency'='1000'),envelope(s1,'amplification'=
 |3000|                     null|                                         null|                              0.512808785945551|
 |4000|                     null|                                         null|                            0.26361156774506744|
 
+----+-------------------------+---------------------------------------------+-----------------------------------------------+
-```
-
-<!--
-
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-    
-        http://www.apache.org/licenses/LICENSE-2.0
-    
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
 
--->
+```
 
 ## 数据匹配
 
diff --git a/src/zh/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md b/src/zh/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md
index fdb0340c..3d188f07 100644
--- a/src/zh/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md
+++ b/src/zh/UserGuide/latest/Basic-Concept/Cluster-data-partitioning.md
@@ -45,7 +45,7 @@ IoTDB 将生产环境中的每个传感器映射为一个时间序列。然后
 
 
$$\left\lfloor\frac{\text{Timestamp}-\text{StartTimestamp}}{\text{TimePartitionInterval}}\right\rfloor\text{。}$$
 
-在此式中,$\text{StartTimestamp}$ 和 $\text{TimePartitionInterval}$ 都是可配置参数,以适应不同的生产环境。$\text{StartTimestamp}$ 表示第一个时间分区的起始时间,而 $\text{TimePartitionInterval}$ 定义了每个时间分区的持续时间。默认情况下,$\text{TimePartitionInterval}$ 设置为一天。
+在此式中,$\text{StartTimestamp}$ 和 $\text{TimePartitionInterval}$ 都是可配置参数,以适应不同的生产环境。$\text{StartTimestamp}$ 表示第一个时间分区的起始时间,而 $\text{TimePartitionInterval}$ 定义了每个时间分区的持续时间。默认情况下,$\text{TimePartitionInterval}$ 设置为七天。
 
 #### 元数据分区
 由于序列分区算法对时间序列进行了均匀分区,每个序列分区对应一个元数据分区。这些元数据分区随后被均匀分配到 元数据分片 中,以实现元数据的均衡分布。
diff --git a/src/zh/UserGuide/latest/Reference/UDF-Libraries.md b/src/zh/UserGuide/latest/Reference/UDF-Libraries.md
index 226c853c..c7dfbd33 100644
--- a/src/zh/UserGuide/latest/Reference/UDF-Libraries.md
+++ b/src/zh/UserGuide/latest/Reference/UDF-Libraries.md
@@ -30,11 +30,11 @@
    | UDF-1.3.3.zip | V1.3.3及以上      | [压缩包](https://alioss.timecho.com/upload/UDF-1.3.3.zip)   |
    | UDF-1.3.2.zip | V1.0.0~V1.3.2  | [压缩包](https://alioss.timecho.com/upload/UDF-1.3.2.zip) |
     
-2. 将获取的压缩包中的 library-udf.jar 文件放置在 IoTDB 集群所有节点的 `/ext/udf` 的目录下
+2. 将获取的压缩包中的 `library-udf.jar` 文件放置在 IoTDB 集群所有节点的 `/ext/udf` 的目录下
 3. 在 IoTDB 的 SQL 命令行终端(CLI)或可视化控制台(Workbench)的 SQL 操作界面中,执行下述相应的函数注册语句。
 4. 批量注册:两种注册方式:注册脚本 或 SQL汇总语句
 - 注册脚本 
-    - 将压缩包中的注册脚本(register-UDF.sh 或 register-UDF.bat)按需复制到 IoTDB 的 tools 目录下,修改脚本中的参数(默认为host=127.0.0.1,rpcPort=6667,user=root,pass=root);
+    - 将压缩包中的注册脚本(`register-UDF.sh` 或 `register-UDF.bat`)按需复制到 IoTDB 的 tools 目录下,修改脚本中的参数(默认为host=127.0.0.1,rpcPort=6667,user=root,pass=root);
     - 启动 IoTDB 服务,运行注册脚本批量注册 UDF
 
 - SQL汇总语句
@@ -3981,9 +3981,19 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1
 |1970-01-01T08:00:19.000+08:00|                  -2.664535259100376E-16|
 +-----------------------------+----------------------------------------+
 ```
-## Envelope
 
-### 函数简介
+注:输入序列服从$y=sin(2\pi t/4)+2sin(2\pi t/5)$,长度为20,因此低通滤波之后的输出序列服从$y=2sin(2\pi t/5)$。
+
+
+### Envelope
+
+#### 注册语句
+
+```sql
+create function envelope as 'org.apache.iotdb.library.frequency.UDFEnvelopeAnalysis'
+```
+
+#### 函数简介
 
 
本函数通过输入一维浮点数数组和用户指定的调制频率,实现对信号的解调和包络提取。解调的目标是从复杂的信号中提取感兴趣的部分,使其更易理解。比如通过解调可以找到信号的包络,即振幅的变化趋势。
 
@@ -4003,7 +4013,7 @@ select lowpass(s1,'wpass'='0.45') from root.test.d1
 
 **提示:** 当解调的原始序列的值不连续时,本函数会视为连续处理,建议被分析的时间序列是一段值完整的时间序列。同时建议指定开始时间与结束时间。
 
-### 使用示例
+#### 使用示例
 
 输入序列:
 
@@ -4048,29 +4058,6 @@ select envelope(s1),envelope(s1,'frequency'='1000'),envelope(s1,'amplification'=
 
 ```
 
-注:输入序列服从$y=sin(2\pi t/4)+2sin(2\pi t/5)$,长度为20,因此低通滤波之后的输出序列服从$y=2sin(2\pi t/5)$。
-
-<!--
-
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-    
-        http://www.apache.org/licenses/LICENSE-2.0
-    
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
-
--->
-
 ## 数据匹配
 
 ### Cov
diff --git a/src/zh/UserGuide/latest/User-Manual/Write-Delete-Data.md b/src/zh/UserGuide/latest/User-Manual/Write-Delete-Data.md
index 22cd2736..f7c7bcc5 100644
--- a/src/zh/UserGuide/latest/User-Manual/Write-Delete-Data.md
+++ b/src/zh/UserGuide/latest/User-Manual/Write-Delete-Data.md
@@ -176,11 +176,11 @@ It costs 0.004s
 
 ### TsFile批量导入
 
-TsFile 是在 IoTDB 中使用的时间序列的文件格式,您可以通过CLI等工具直接将存有时间序列的一个或多个 TsFile 文件导入到另外一个正在运行的IoTDB实例中。具体操作方式请参考[导入导出工具](../Tools-System/TsFile-Import-Export-Tool.md)。
+TsFile 是在 IoTDB 中使用的时间序列的文件格式,您可以通过CLI等工具直接将存有时间序列的一个或多个 TsFile 文件导入到另外一个正在运行的IoTDB实例中。具体操作方式请参考[数据导入](../Tools-System/Data-Import-Tool.md)。
 
 ### CSV批量导入
 
-CSV 是以纯文本形式存储表格数据,您可以在CSV文件中写入多条格式化的数据,并批量的将这些数据导入到 IoTDB 中,在导入数据之前,建议在IoTDB中创建好对应的元数据信息。如果忘记创建元数据也不要担心,IoTDB 可以自动将CSV中数据推断为其对应的数据类型,前提是你每一列的数据类型必须唯一。除单个文件外,此工具还支持以文件夹的形式导入多个 CSV 文件,并且支持设置如时间精度等优化参数。具体操作方式请参考[导入导出工具](../Tools-System/Data-Import-Export-Tool.md)。
+CSV 是以纯文本形式存储表格数据,您可以在CSV文件中写入多条格式化的数据,并批量的将这些数据导入到 IoTDB 中,在导入数据之前,建议在IoTDB中创建好对应的元数据信息。如果忘记创建元数据也不要担心,IoTDB 可以自动将CSV中数据推断为其对应的数据类型,前提是你每一列的数据类型必须唯一。除单个文件外,此工具还支持以文件夹的形式导入多个 CSV 文件,并且支持设置如时间精度等优化参数。具体操作方式请参考[数据导入](../Tools-System/Data-Import-Tool.md)。
 
 ## 删除数据
 
