qiaojialin commented on a change in pull request #826: Update chinese design doc
URL: https://github.com/apache/incubator-iotdb/pull/826#discussion_r382360206
 
 

 ##########
 File path: docs/Documentation-CHN/SystemDesign/4-StorageEngine/5-DataPartition.md
 ##########
 @@ -0,0 +1,66 @@
+<!--
+
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+-->
+
+# Data Partitioning
+
+Time series data is partitioned at two levels: storage group and time range.
+
+## Storage Group
+
+Storage groups are explicitly specified by the user with the "SET STORAGE GROUP TO" statement; each storage group has a corresponding StorageGroupProcessor.
+
+### Related Code
+* src/main/java/org/apache/iotdb/db/engine/StorageEngine.java
+
+
+## Time Range
+
+Data within the same storage group is further partitioned by a user-specified time range. The relevant parameter is partition_interval, which defaults to one week, so data from different weeks is placed in different partitions.
+
+### Implementation Logic
+
+StoragetGroupProcessor computes the partition for inserted data and locates the specified TsfileProcessor; the tsfile corresponding to each TsfileProcessor is placed in a different partition folder.
 
 Review comment:
   ```suggestion
   StoragetGroupProcessor computes the partition for inserted data and locates the specified TsFileProcessor; the TsFile corresponding to each TsFileProcessor is placed in a different partition folder.
   ```
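
The time-range partitioning described in the diff can be sketched as follows. This is a minimal illustration in Java, not IoTDB's actual implementation: the class, method, and constant names are hypothetical, and the partition interval is assumed to be expressed in milliseconds.

```java
// Minimal sketch of time-range partitioning: map a timestamp to a partition id.
// Names and units are illustrative assumptions, not IoTDB's real API.
public class PartitionDemo {

    // Assumed default interval: one week, in milliseconds.
    static final long WEEK_MS = 7L * 24 * 60 * 60 * 1000;

    // Compute the partition id for an event timestamp by integer division
    // on the configured interval; timestamps in the same interval share an id,
    // so their data would land in the same partition folder.
    static long partitionId(long timestampMs, long partitionIntervalMs) {
        return timestampMs / partitionIntervalMs;
    }

    public static void main(String[] args) {
        System.out.println(partitionId(0L, WEEK_MS));          // first week -> 0
        System.out.println(partitionId(WEEK_MS + 1, WEEK_MS)); // next week  -> 1
    }
}
```

Under this scheme, each distinct partition id corresponds to one partition folder, and each TsFileProcessor writes its TsFile into the folder for its id.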

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
