This is an automated email from the ASF dual-hosted git repository.
qiaojialin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-iotdb.git
The following commit(s) were added to refs/heads/master by this push:
new bacbbe7 [IOTDB-726] CheckPoint of MTree (#1384)
bacbbe7 is described below
commit bacbbe77ed257d08e9651e8c441d896320baf537
Author: Zesong Sun <[email protected]>
AuthorDate: Tue Jun 23 11:39:55 2020 +0800
[IOTDB-726] CheckPoint of MTree (#1384)
* MTree checkpoint
* Add config of MtreeSnapshotThresholdTime
---
docs/SystemDesign/SchemaManager/SchemaManager.md | 10 ++
docs/UserGuide/Concept/Encoding.md | 2 +
docs/UserGuide/Server/Config Manual.md | 9 +
.../zh/SystemDesign/SchemaManager/SchemaManager.md | 12 +-
docs/zh/UserGuide/Concept/Encoding.md | 2 +
docs/zh/UserGuide/Server/Config Manual.md | 19 ++-
.../resources/conf/iotdb-engine.properties | 7 +
.../java/org/apache/iotdb/db/conf/IoTDBConfig.java | 37 +++-
.../org/apache/iotdb/db/conf/IoTDBConfigCheck.java | 70 ++++----
.../org/apache/iotdb/db/conf/IoTDBDescriptor.java | 7 +-
.../org/apache/iotdb/db/metadata/MLogWriter.java | 48 +++---
.../org/apache/iotdb/db/metadata/MManager.java | 186 ++++++++++++++++-----
.../java/org/apache/iotdb/db/metadata/MTree.java | 120 +++++++++++--
.../apache/iotdb/db/metadata/MetadataConstant.java | 2 +
.../org/apache/iotdb/db/metadata/mnode/MNode.java | 2 +-
15 files changed, 400 insertions(+), 133 deletions(-)
diff --git a/docs/SystemDesign/SchemaManager/SchemaManager.md
b/docs/SystemDesign/SchemaManager/SchemaManager.md
index 0734672..bb0f152 100644
--- a/docs/SystemDesign/SchemaManager/SchemaManager.md
+++ b/docs/SystemDesign/SchemaManager/SchemaManager.md
@@ -175,6 +175,16 @@ The root node exists by default. Creating storage groups,
deleting storage group
* Deleting a storage group is similar to deleting a time series: the storage
group or time series node is removed from its parent node, and a time series
node also removes its alias from its parent node; if, during deletion, a node
is found to have no remaining children, that node is deleted recursively as
well.
+## MTree checkpoint
+
+To speed up the restart of IoTDB, we create checkpoints for the MTree. Every 10
minutes, a background thread checks the last modified time of the MTree. If
users have not modified the MTree for more than 1 hour (which means `mlog.txt`
has not been updated for more than 1 hour), and `mlog.txt` has grown past the
user-configured threshold number of lines, an MTree snapshot is created. In
this way, we avoid reading `mlog.txt` and executing its commands line by line
at restart.
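The trigger condition above — checked every 10 minutes, requiring both an idle MTree and enough new mlog lines — can be sketched as a small predicate. This is an illustrative sketch; `SnapshotPolicy` and its field names are hypothetical, not the actual IoTDB classes:

```java
// Sketch of the snapshot trigger check, under assumed names.
public class SnapshotPolicy {
    private final long thresholdMillis;      // e.g. 3600 s -> 3_600_000 ms
    private final int snapshotIntervalLines; // e.g. 100000 mlog lines

    public SnapshotPolicy(long thresholdMillis, int snapshotIntervalLines) {
        this.thresholdMillis = thresholdMillis;
        this.snapshotIntervalLines = snapshotIntervalLines;
    }

    // Called by the periodic background check (every 10 minutes).
    public boolean shouldCreateSnapshot(long nowMillis, long lastMTreeModifiedMillis,
            int mlogLinesSinceLastSnapshot) {
        boolean idleLongEnough = nowMillis - lastMTreeModifiedMillis >= thresholdMillis;
        boolean enoughNewLines = mlogLinesSinceLastSnapshot >= snapshotIntervalLines;
        return idleLongEnough && enoughNewLines;
    }
}
```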
+
+The MTree is serialized depth-first, children before parents. The information
of each node is converted into a String according to its node type, which is
convenient for deserialization.
+
+* MNode: 0,name,children size
+* StorageGroupMNode: 1,name,TTL,children size
+* MeasurementMNode: 2,name,alias,TSDataType,TSEncoding,CompressionType,props,offset,children size
+
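The three line formats can be illustrated with a tiny formatter. This is a sketch only — `NodeLineFormat` is hypothetical; the real serialization lives in `MTree.java` and operates on the actual MNode classes:

```java
// Sketch: produce the snapshot line for each node type described above.
public class NodeLineFormat {
    // MNode: 0,name,children size
    public static String internalNode(String name, int childrenSize) {
        return String.format("0,%s,%d", name, childrenSize);
    }

    // StorageGroupMNode: 1,name,TTL,children size
    public static String storageGroupNode(String name, long ttl, int childrenSize) {
        return String.format("1,%s,%d,%d", name, ttl, childrenSize);
    }

    // MeasurementMNode: 2,name,alias,dataType,encoding,compression,props,offset,children size
    public static String measurementNode(String name, String alias, String dataType,
            String encoding, String compression, String props, long offset, int childrenSize) {
        return String.format("2,%s,%s,%s,%s,%s,%s,%d,%d",
                name, alias, dataType, encoding, compression, props, offset, childrenSize);
    }
}
```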
## Log management of metadata
* org.apache.iotdb.db.metadata.MLogWriter
diff --git a/docs/UserGuide/Concept/Encoding.md
b/docs/UserGuide/Concept/Encoding.md
index 396844f..9f416a7 100644
--- a/docs/UserGuide/Concept/Encoding.md
+++ b/docs/UserGuide/Concept/Encoding.md
@@ -37,6 +37,8 @@ Run-length encoding is more suitable for storing sequence
with continuous intege
Run-length encoding can also be used to encode floating-point numbers, but it
is necessary to specify reserved decimal digits (MAX\_POINT\_NUMBER, see [this
page](../Operation%20Manual/SQL%20Reference.html) for more information on how
to specify) when creating time series. It is more suitable for storing sequence
data where floating-point values appear continuously, monotonously increasing
or decreasing, and it is not suitable for storing sequence data with high
precision requirements af [...]
+> TS_2DIFF and RLE have a precision limit for float and double data types. By
default, two decimal places are reserved. GORILLA is recommended instead.
+
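The effect of the two-decimal default can be pictured as rounding at write time, roughly like this (an illustrative sketch, not IoTDB's actual encoder code; it assumes MAX_POINT_NUMBER = 2 and round-to-nearest behavior):

```java
// Sketch: what a 2-decimal precision limit does to a stored value.
public class PrecisionLimit {
    // Keep only maxPointNumber digits after the decimal point.
    public static double truncate(double value, int maxPointNumber) {
        double scale = Math.pow(10, maxPointNumber);
        return Math.round(value * scale) / scale;
    }
}
```

Values like 3.14159 would come back as 3.14, which is why GORILLA is the safer choice when full precision matters.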
* GORILLA
GORILLA encoding is more suitable for floating-point sequence with similar
values and is not recommended for sequence data with large fluctuations.
diff --git a/docs/UserGuide/Server/Config Manual.md
b/docs/UserGuide/Server/Config Manual.md
index 6046275..a28e212 100644
--- a/docs/UserGuide/Server/Config Manual.md
+++ b/docs/UserGuide/Server/Config Manual.md
@@ -372,6 +372,15 @@ The permission definitions are in
${IOTDB\_CONF}/conf/jmx.access.
|Default| true |
|Effective|After restart system|
+* mtree\_snapshot\_interval
+
+|Name| mtree\_snapshot\_interval |
+|:---:|:---|
+|Description| The minimum number of mlog.txt lines accumulated before a checkpoint is created and an MTree snapshot is saved. Unit: lines|
+|Type| Int32 |
+|Default| 100000 |
+|Effective|After restart system|
+
* flush\_wal\_threshold
|Name| flush\_wal\_threshold |
diff --git a/docs/zh/SystemDesign/SchemaManager/SchemaManager.md
b/docs/zh/SystemDesign/SchemaManager/SchemaManager.md
index 493b8d4..983bb71 100644
--- a/docs/zh/SystemDesign/SchemaManager/SchemaManager.md
+++ b/docs/zh/SystemDesign/SchemaManager/SchemaManager.md
@@ -171,7 +171,17 @@ IoTDB manages metadata in the form of a directory tree, in which the second-to-last level is the device level
* If there is an alias, an extra link is also created under the device node, pointing to the leaf node
*
Deleting a storage group is similar to deleting a time series: the storage group or time series node is removed from its parent node, and a time series node also removes its alias from its parent node; if during deletion a node is found to have no children left, that node is deleted recursively as well.
-
+
+## MTree checkpoint
+
+To speed up the restart of IoTDB, we set a checkpoint for the MTree: every 10
minutes, a background thread checks the last modified time of the MTree. If the
user has not modified the MTree for more than 1 hour (i.e., `mlog.txt` has not
been modified for more than 1 hour), and `mlog.txt` has accumulated the
user-configured number of log lines, an MTree snapshot is created. This avoids
reading and replaying `mlog.txt` line by line at restart.
+
+The MTree is serialized depth-first, children before parents. The information of each node is converted into a string of the corresponding format according to its type, making it easy to read and reassemble the MTree during deserialization.
+
+* Plain node (MNode): 0,name,children size
+* Storage group node (StorageGroupMNode): 1,name,TTL,children size
+* Measurement node (MeasurementMNode): 2,name,alias,data type,encoding,compression,props,offset,children size
+
## Log management of metadata
* org.apache.iotdb.db.metadata.MLogWriter
diff --git a/docs/zh/UserGuide/Concept/Encoding.md
b/docs/zh/UserGuide/Concept/Encoding.md
index e35460d..397a97a 100644
--- a/docs/zh/UserGuide/Concept/Encoding.md
+++ b/docs/zh/UserGuide/Concept/Encoding.md
@@ -37,6 +37,8 @@ PLAIN encoding, the default encoding method, i.e., no encoding; it supports multiple data types,
Run-length encoding can also be used to encode floating-point numbers, but the number of reserved decimal digits must be specified when creating the time series (MAX_POINT_NUMBER; see [Section 5.4](../Operation%20Manual/SQL%20Reference.html) of this document for how to specify it). It is more suitable for sequence data in which certain floating-point values appear continuously, and not suitable for sequence data that requires high decimal precision or fluctuates greatly.
+> Run-length encoding (RLE) and second-order differential encoding (TS_2DIFF) have a precision limit when encoding float and double; by default, two decimal places are reserved. GORILLA is recommended instead.
+
* GORILLA encoding (GORILLA)
GORILLA encoding is more suitable for floating-point sequences whose adjacent values are close to each other, and is not suitable for data that fluctuates greatly.
diff --git a/docs/zh/UserGuide/Server/Config Manual.md
b/docs/zh/UserGuide/Server/Config Manual.md
index 6c53b6c..f01b4f0 100644
--- a/docs/zh/UserGuide/Server/Config Manual.md
+++ b/docs/zh/UserGuide/Server/Config Manual.md
@@ -219,12 +219,21 @@
* enable\_partial\_insert
-|Name| enable\_partial\_insert |
+|名字| enable\_partial\_insert |
|:---:|:---|
-|Description| 在一次insert请求中,如果部分测点写入失败,是否继续写入其他测点|
-|Type| Bool |
-|Default| true |
-|Effective|重启服务器生效|
+|描述| 在一次insert请求中,如果部分测点写入失败,是否继续写入其他测点|
+|类型| Bool |
+|默认值| true |
+|改后生效方式|重启服务器生效|
+
+* mtree\_snapshot\_interval
+
+|Name| mtree\_snapshot\_interval |
+|:---:|:---|
+|Description| The minimum number of mlog lines accumulated before an MTree snapshot is created. Unit: lines|
+|Type| Int32 |
+|Default| 100000 |
+|Effective|After restarting the server|
* fetch\_size
diff --git a/server/src/assembly/resources/conf/iotdb-engine.properties
b/server/src/assembly/resources/conf/iotdb-engine.properties
index 846228c..8f3d59a 100644
--- a/server/src/assembly/resources/conf/iotdb-engine.properties
+++ b/server/src/assembly/resources/conf/iotdb-engine.properties
@@ -204,6 +204,13 @@ tag_attribute_total_size=700
# if enable partial insert, one measurement failure will not impact other
measurements
enable_partial_insert=true
+# The minimum number of mlog.txt lines accumulated before a checkpoint is created and an MTree snapshot is saved. Unit: lines
+mtree_snapshot_interval=100000
+
+# Minimum idle time since the last MTree modification before a snapshot may be created. Unit: second. Default: 1 hour (3600 seconds)
+# If the time since the last MTree modification is shorter than this threshold, no MTree snapshot will be created
+mtree_snapshot_threshold_time=3600
+
####################
### Memory Control Configuration
####################
diff --git a/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfig.java
b/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfig.java
index 23c111a..5c2e085 100644
--- a/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfig.java
+++ b/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfig.java
@@ -568,12 +568,22 @@ public class IoTDBConfig {
private int primitiveArraySize = 64;
/**
- * whether enable data partition
- * if disabled, all data belongs to partition 0
+ * whether enable data partition. If disabled, all data belongs to partition 0
*/
private boolean enablePartition = false;
/**
+ * Minimum number of mlog.txt lines accumulated before a checkpoint is created and a snapshot of the MTree is saved
+ */
+ private int mtreeSnapshotInterval = 100000;
+
+ /**
+ * Minimum idle time since the last MTree modification before a snapshot may be created. If the
+ * time since the last modification is shorter than this threshold, no MTree snapshot is created.
+ * Unit: second. Default: 1 hour (3600 seconds)
+ */
+ private int mtreeSnapshotThresholdTime = 3600;
+
+ /**
* Time range for partitioning data inside each storage group, the unit is
second
*/
private long partitionInterval = 604800;
@@ -628,6 +638,22 @@ public class IoTDBConfig {
this.enablePartition = enablePartition;
}
+ public int getMtreeSnapshotInterval() {
+ return mtreeSnapshotInterval;
+ }
+
+ public void setMtreeSnapshotInterval(int mtreeSnapshotInterval) {
+ this.mtreeSnapshotInterval = mtreeSnapshotInterval;
+ }
+
+ public int getMtreeSnapshotThresholdTime() {
+ return mtreeSnapshotThresholdTime;
+ }
+
+ public void setMtreeSnapshotThresholdTime(int mtreeSnapshotThresholdTime) {
+ this.mtreeSnapshotThresholdTime = mtreeSnapshotThresholdTime;
+ }
+
public long getPartitionInterval() {
return partitionInterval;
}
@@ -1211,7 +1237,8 @@ public class IoTDBConfig {
return allocateMemoryForTimeSeriesMetaDataCache;
}
- public void setAllocateMemoryForTimeSeriesMetaDataCache(long
allocateMemoryForTimeSeriesMetaDataCache) {
+ public void setAllocateMemoryForTimeSeriesMetaDataCache(
+ long allocateMemoryForTimeSeriesMetaDataCache) {
this.allocateMemoryForTimeSeriesMetaDataCache =
allocateMemoryForTimeSeriesMetaDataCache;
}
@@ -1336,7 +1363,9 @@ public class IoTDBConfig {
if (nanStringInferType != TSDataType.DOUBLE &&
nanStringInferType != TSDataType.FLOAT &&
nanStringInferType != TSDataType.TEXT) {
- throw new IllegalArgumentException("Config Property
nan_string_infer_type can only be FLOAT, DOUBLE or TEXT but is " +
nanStringInferType);
+ throw new IllegalArgumentException(
+ "Config Property nan_string_infer_type can only be FLOAT, DOUBLE or
TEXT but is "
+ + nanStringInferType);
}
this.nanStringInferType = nanStringInferType;
}
diff --git
a/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfigCheck.java
b/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfigCheck.java
index e8831a1..b36fc56 100644
--- a/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfigCheck.java
+++ b/server/src/main/java/org/apache/iotdb/db/conf/IoTDBConfigCheck.java
@@ -18,9 +18,17 @@
*/
package org.apache.iotdb.db.conf;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.nio.file.Files;
import java.util.HashMap;
+import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
+import java.util.Properties;
import org.apache.commons.io.FileUtils;
import org.apache.iotdb.db.conf.directories.DirectoryManager;
import org.apache.iotdb.db.engine.fileSystem.SystemFileFactory;
@@ -34,20 +42,17 @@ import org.apache.iotdb.tsfile.fileSystem.FSFactoryProducer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-import java.io.*;
-import java.nio.file.Files;
-import java.util.List;
-import java.util.Properties;
-
public class IoTDBConfigCheck {
private static final Logger logger = LoggerFactory.getLogger(IoTDBConfigCheck.class);
+ private static final IoTDBConfig config =
IoTDBDescriptor.getInstance().getConfig();
+
// this file is located in data/system/schema/system.properties
// If the user deletes the folder "data", system.properties will be reset.
private static final String PROPERTIES_FILE_NAME = "system.properties";
- private static final String SCHEMA_DIR =
IoTDBDescriptor.getInstance().getConfig().getSchemaDir();
- private static final String WAL_DIR =
IoTDBDescriptor.getInstance().getConfig().getWalFolder();
+ private static final String SCHEMA_DIR = config.getSchemaDir();
+ private static final String WAL_DIR = config.getWalFolder();
private File propertiesFile;
private File tmpPropertiesFile;
@@ -59,32 +64,32 @@ public class IoTDBConfigCheck {
private static final String SYSTEM_PROPERTIES_STRING = "System properties:";
private static final String TIMESTAMP_PRECISION_STRING =
"timestamp_precision";
- private static String timestampPrecision =
IoTDBDescriptor.getInstance().getConfig().getTimestampPrecision();
+ private static String timestampPrecision = config.getTimestampPrecision();
private static final String PARTITION_INTERVAL_STRING = "partition_interval";
- private static long partitionInterval =
IoTDBDescriptor.getInstance().getConfig().getPartitionInterval();
+ private static long partitionInterval = config.getPartitionInterval();
private static final String TSFILE_FILE_SYSTEM_STRING = "tsfile_storage_fs";
- private static String tsfileFileSystem =
IoTDBDescriptor.getInstance().getConfig().getTsFileStorageFs().toString();
+ private static String tsfileFileSystem =
config.getTsFileStorageFs().toString();
private static final String ENABLE_PARTITION_STRING = "enable_partition";
- private static boolean enablePartition =
IoTDBDescriptor.getInstance().getConfig().isEnablePartition();
+ private static boolean enablePartition = config.isEnablePartition();
private static final String TAG_ATTRIBUTE_SIZE_STRING =
"tag_attribute_total_size";
- private static final String tagAttributeTotalSize =
String.valueOf(IoTDBDescriptor.getInstance().getConfig().getTagAttributeTotalSize());
+ private static String tagAttributeTotalSize =
String.valueOf(config.getTagAttributeTotalSize());
private static final String MAX_DEGREE_OF_INDEX_STRING =
"max_degree_of_index_node";
- private static final String maxDegreeOfIndexNode =
String.valueOf(TSFileDescriptor.getInstance().getConfig().getMaxDegreeOfIndexNode());
+ private static String maxDegreeOfIndexNode = String
+ .valueOf(TSFileDescriptor.getInstance().getConfig().getMaxDegreeOfIndexNode());
private static final String IOTDB_VERSION_STRING = "iotdb_version";
- private static final String ERROR_LOG = "Wrong %s, please set as: %s !";
-
public static IoTDBConfigCheck getInstance() {
return IoTDBConfigCheckHolder.INSTANCE;
}
private static class IoTDBConfigCheckHolder {
+
private static final IoTDBConfigCheck INSTANCE = new IoTDBConfigCheck();
}
@@ -105,8 +110,8 @@ public class IoTDBConfigCheck {
// check time stamp precision
if (!(timestampPrecision.equals("ms") || timestampPrecision.equals("us")
|| timestampPrecision.equals("ns"))) {
- logger.error("Wrong " + TIMESTAMP_PRECISION_STRING + ", please set as:
ms, us or ns ! Current is: "
- + timestampPrecision);
+ logger.error("Wrong {}, please set as: ms, us or ns ! Current is: {}",
+ TIMESTAMP_PRECISION_STRING, timestampPrecision);
System.exit(-1);
}
@@ -142,7 +147,7 @@ public class IoTDBConfigCheck {
*/
public void checkConfig() throws IOException {
propertiesFile = SystemFileFactory.INSTANCE
- .getFile(IoTDBConfigCheck.SCHEMA_DIR + File.separator +
PROPERTIES_FILE_NAME);
+ .getFile(IoTDBConfigCheck.SCHEMA_DIR + File.separator +
PROPERTIES_FILE_NAME);
tmpPropertiesFile = new File(propertiesFile.getAbsoluteFile() + ".tmp");
// system init first time, no need to check, write system.properties and
return
@@ -220,7 +225,7 @@ public class IoTDBConfigCheck {
/**
- * repair 0.10 properties
+ * repair 0.10 properties
*/
private void upgradePropertiesFileFromBrokenFile()
throws IOException {
@@ -261,41 +266,36 @@ public class IoTDBConfigCheck {
}
if
(!properties.getProperty(TIMESTAMP_PRECISION_STRING).equals(timestampPrecision))
{
- logger.error(String.format(ERROR_LOG, TIMESTAMP_PRECISION_STRING,
properties
- .getProperty(TIMESTAMP_PRECISION_STRING)));
- System.exit(-1);
+ printErrorLogAndExit(TIMESTAMP_PRECISION_STRING);
}
if (Long.parseLong(properties.getProperty(PARTITION_INTERVAL_STRING)) !=
partitionInterval) {
- logger.error(String.format(ERROR_LOG, PARTITION_INTERVAL_STRING,
properties
- .getProperty(PARTITION_INTERVAL_STRING)));
- System.exit(-1);
+ printErrorLogAndExit(PARTITION_INTERVAL_STRING);
}
if
(!(properties.getProperty(TSFILE_FILE_SYSTEM_STRING).equals(tsfileFileSystem)))
{
- logger.error(String.format(ERROR_LOG, TSFILE_FILE_SYSTEM_STRING,
properties
- .getProperty(TSFILE_FILE_SYSTEM_STRING)));
- System.exit(-1);
+ printErrorLogAndExit(TSFILE_FILE_SYSTEM_STRING);
}
if
(!(properties.getProperty(TAG_ATTRIBUTE_SIZE_STRING).equals(tagAttributeTotalSize)))
{
- logger.error(String.format(ERROR_LOG, TAG_ATTRIBUTE_SIZE_STRING,
properties
- .getProperty(TAG_ATTRIBUTE_SIZE_STRING)));
- System.exit(-1);
+ printErrorLogAndExit(TAG_ATTRIBUTE_SIZE_STRING);
}
if
(!(properties.getProperty(MAX_DEGREE_OF_INDEX_STRING).equals(maxDegreeOfIndexNode)))
{
- logger.error(String.format(ERROR_LOG, MAX_DEGREE_OF_INDEX_STRING,
properties
- .getProperty(MAX_DEGREE_OF_INDEX_STRING)));
- System.exit(-1);
+ printErrorLogAndExit(MAX_DEGREE_OF_INDEX_STRING);
}
}
+ private void printErrorLogAndExit(String property) {
+ logger.error("Wrong {}, please set as: {} !", property,
properties.getProperty(property));
+ System.exit(-1);
+ }
+
/**
* ensure all tsfiles are closed in 0.9 when starting 0.10
*/
private void checkUnClosedTsFileV1() {
- if (SystemFileFactory.INSTANCE.getFile(WAL_DIR).isDirectory()
+ if (SystemFileFactory.INSTANCE.getFile(WAL_DIR).isDirectory()
&& SystemFileFactory.INSTANCE.getFile(WAL_DIR).list().length != 0) {
logger.error("Unclosed Version-1 TsFile detected, please run 'flush' on
V0.9 IoTDB"
+ " before upgrading to V0.10");
diff --git a/server/src/main/java/org/apache/iotdb/db/conf/IoTDBDescriptor.java
b/server/src/main/java/org/apache/iotdb/db/conf/IoTDBDescriptor.java
index 333b2d8..289e38f 100644
--- a/server/src/main/java/org/apache/iotdb/db/conf/IoTDBDescriptor.java
+++ b/server/src/main/java/org/apache/iotdb/db/conf/IoTDBDescriptor.java
@@ -315,6 +315,12 @@ public class IoTDBDescriptor {
Boolean.parseBoolean(properties.getProperty("enable_partial_insert",
String.valueOf(conf.isEnablePartialInsert()))));
+ conf.setMtreeSnapshotInterval(Integer.parseInt(properties.getProperty(
+ "mtree_snapshot_interval",
Integer.toString(conf.getMtreeSnapshotInterval()))));
+
conf.setMtreeSnapshotThresholdTime(Integer.parseInt(properties.getProperty(
+ "mtree_snapshot_threshold_time",
+ Integer.toString(conf.getMtreeSnapshotThresholdTime()))));
+
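The parsing pattern used here — read the property and fall back to the current config value when it is absent — can be captured in a generic helper. `ConfigParse` is a hypothetical illustration, not part of IoTDB:

```java
import java.util.Properties;

public class ConfigParse {
    // Parse an int property, falling back to the current value when absent.
    public static int getInt(Properties props, String key, int currentValue) {
        return Integer.parseInt(props.getProperty(key, Integer.toString(currentValue)));
    }
}
```

This keeps the in-memory default authoritative when a key is missing from iotdb-engine.properties.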
conf.setEnablePerformanceStat(Boolean
.parseBoolean(properties.getProperty("enable_performance_stat",
Boolean.toString(conf.isEnablePerformanceStat())).trim()));
@@ -428,7 +434,6 @@ public class IoTDBDescriptor {
//if using org.apache.iotdb.db.auth.authorizer.OpenIdAuthorizer,
openID_url is needed.
conf.setOpenIdProviderUrl(properties.getProperty("openID_url", ""));
-
// At the same time, set TSFileConfig
TSFileDescriptor.getInstance().getConfig()
.setTSFileStorageFs(FSType.valueOf(
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java
b/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java
index 72ee54b..0b9ab3e 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/MLogWriter.java
@@ -35,6 +35,7 @@ public class MLogWriter {
private static final Logger logger =
LoggerFactory.getLogger(MLogWriter.class);
private BufferedWriter writer;
+ private int lineNumber;
public MLogWriter(String schemaDir, String logFileName) throws IOException {
File metadataDir = SystemFileFactory.INSTANCE.getFile(schemaDir);
@@ -47,21 +48,18 @@ public class MLogWriter {
}
File logFile = SystemFileFactory.INSTANCE.getFile(schemaDir +
File.separator + logFileName);
-
- FileWriter fileWriter;
- fileWriter = new FileWriter(logFile, true);
+ FileWriter fileWriter = new FileWriter(logFile, true);
writer = new BufferedWriter(fileWriter);
}
-
public void close() throws IOException {
writer.close();
}
public void createTimeseries(CreateTimeSeriesPlan plan, long offset) throws
IOException {
writer.write(String.format("%s,%s,%s,%s,%s",
MetadataOperationType.CREATE_TIMESERIES,
- plan.getPath().getFullPath(), plan.getDataType().serialize(),
plan.getEncoding().serialize(),
- plan.getCompressor().serialize()));
+ plan.getPath().getFullPath(), plan.getDataType().serialize(),
+ plan.getEncoding().serialize(), plan.getCompressor().serialize()));
writer.write(",");
if (plan.getProps() != null) {
@@ -85,45 +83,37 @@ public class MLogWriter {
if (offset >= 0) {
writer.write(String.valueOf(offset));
}
-
- writer.newLine();
- writer.flush();
+ newLine();
}
public void deleteTimeseries(String path) throws IOException {
writer.write(MetadataOperationType.DELETE_TIMESERIES + "," + path);
- writer.newLine();
- writer.flush();
+ newLine();
}
public void setStorageGroup(String storageGroup) throws IOException {
writer.write(MetadataOperationType.SET_STORAGE_GROUP + "," + storageGroup);
- writer.newLine();
- writer.flush();
+ newLine();
}
public void deleteStorageGroup(String storageGroup) throws IOException {
writer.write(MetadataOperationType.DELETE_STORAGE_GROUP + "," +
storageGroup);
- writer.newLine();
- writer.flush();
+ newLine();
}
public void setTTL(String storageGroup, long ttl) throws IOException {
writer.write(String.format("%s,%s,%s", MetadataOperationType.SET_TTL,
storageGroup, ttl));
- writer.newLine();
- writer.flush();
+ newLine();
}
public void changeOffset(String path, long offset) throws IOException {
writer.write(String.format("%s,%s,%s",
MetadataOperationType.CHANGE_OFFSET, path, offset));
- writer.newLine();
- writer.flush();
+ newLine();
}
public void changeAlias(String path, String alias) throws IOException {
writer.write(String.format("%s,%s,%s", MetadataOperationType.CHANGE_ALIAS,
path, alias));
- writer.newLine();
- writer.flush();
+ newLine();
}
public static void upgradeMLog(String schemaDir, String logFileName) throws
IOException {
@@ -158,7 +148,6 @@ public class MLogWriter {
writer.write(buf.toString());
writer.newLine();
writer.flush();
-
}
}
@@ -166,9 +155,18 @@ public class MLogWriter {
if (!logFile.delete()) {
throw new IOException("Deleting " + logFile + " failed.");
}
-
+
// rename tmpLogFile to mlog
FSFactoryProducer.getFSFactory().moveFile(tmpLogFile, logFile);
}
-
-}
+
+ private void newLine() throws IOException {
+ writer.newLine();
+ writer.flush();
+ ++lineNumber;
+ }
+
+ int getLineNumber() {
+ return lineNumber;
+ }
+}
\ No newline at end of file
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java
b/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java
index 066b76e..16847ca 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/MManager.java
@@ -18,6 +18,31 @@
*/
package org.apache.iotdb.db.metadata;
+import static java.util.stream.Collectors.toList;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileReader;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Deque;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.iotdb.db.conf.IoTDBConfig;
import org.apache.iotdb.db.conf.IoTDBConstant;
import org.apache.iotdb.db.conf.IoTDBDescriptor;
@@ -26,7 +51,12 @@ import
org.apache.iotdb.db.conf.adapter.IoTDBConfigDynamicAdapter;
import org.apache.iotdb.db.engine.StorageEngine;
import org.apache.iotdb.db.engine.fileSystem.SystemFileFactory;
import org.apache.iotdb.db.exception.ConfigAdjusterException;
-import org.apache.iotdb.db.exception.metadata.*;
+import org.apache.iotdb.db.exception.metadata.DeleteFailedException;
+import org.apache.iotdb.db.exception.metadata.IllegalPathException;
+import org.apache.iotdb.db.exception.metadata.MetadataException;
+import org.apache.iotdb.db.exception.metadata.PathNotExistException;
+import org.apache.iotdb.db.exception.metadata.StorageGroupAlreadySetException;
+import org.apache.iotdb.db.exception.metadata.StorageGroupNotSetException;
import org.apache.iotdb.db.metadata.mnode.MNode;
import org.apache.iotdb.db.metadata.mnode.MeasurementMNode;
import org.apache.iotdb.db.metadata.mnode.StorageGroupMNode;
@@ -48,16 +78,6 @@ import
org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-import java.io.BufferedReader;
-import java.io.File;
-import java.io.FileReader;
-import java.io.IOException;
-import java.util.*;
-import java.util.Map.Entry;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
-
-import static java.util.stream.Collectors.toList;
-
/**
* This class takes the responsibility of serialization of all the metadata
info and persistent it
* into files. This class contains all the interfaces to modify the metadata
for delta system. All
@@ -68,10 +88,17 @@ public class MManager {
private static final Logger logger = LoggerFactory.getLogger(MManager.class);
private static final String TIME_SERIES_TREE_HEADER = "=== Timeseries Tree
===\n\n";
+ /**
+ * Interval at which a background thread checks whether the MTree was modified recently. Unit: second
+ */
+ private static final long MTREE_SNAPSHOT_THREAD_CHECK_TIME = 600L;
+
// the lock for read/insert
private ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
// the log file seriesPath
private String logFilePath;
+ private String mtreeSnapshotPath;
+ private String mtreeSnapshotTmpPath;
private MTree mtree;
private MLogWriter logWriter;
private TagLogFile tagLogFile;
@@ -90,6 +117,12 @@ public class MManager {
private boolean initialized;
private IoTDBConfig config;
+ private File logFile;
+ private final int mtreeSnapshotInterval;
+ private final long mtreeSnapshotThresholdTime;
+ private int lastSnapshotLogLineNumber;
+ private ScheduledExecutorService timedCreateMTreeSnapshotThread;
+
private static class MManagerHolder {
private MManagerHolder() {
@@ -101,6 +134,8 @@ public class MManager {
private MManager() {
config = IoTDBDescriptor.getInstance().getConfig();
+ mtreeSnapshotInterval = config.getMtreeSnapshotInterval();
+ mtreeSnapshotThresholdTime = config.getMtreeSnapshotThresholdTime() *
1000L;
String schemaDir = config.getSchemaDir();
File schemaFolder = SystemFileFactory.INSTANCE.getFile(schemaDir);
if (!schemaFolder.exists()) {
@@ -111,6 +146,8 @@ public class MManager {
}
}
logFilePath = schemaDir + File.separator + MetadataConstant.METADATA_LOG;
+ mtreeSnapshotPath = schemaDir + File.separator +
MetadataConstant.MTREE_SNAPSHOT;
+ mtreeSnapshotTmpPath = schemaDir + File.separator +
MetadataConstant.MTREE_SNAPSHOT_TMP;
// do not write log when recover
isRecovering = true;
@@ -143,6 +180,13 @@ public class MManager {
cache.keySet().removeIf(s -> s.startsWith(key));
}
};
+
+ lastSnapshotLogLineNumber = 0;
+ timedCreateMTreeSnapshotThread =
Executors.newSingleThreadScheduledExecutor(r -> new Thread(r,
+ "timedCreateMTreeSnapshotThread"));
+ timedCreateMTreeSnapshotThread
+ .scheduleAtFixedRate(this::checkMTreeModified,
MTREE_SNAPSHOT_THREAD_CHECK_TIME,
+ MTREE_SNAPSHOT_THREAD_CHECK_TIME, TimeUnit.SECONDS);
}
public static MManager getInstance() {
@@ -155,7 +199,7 @@ public class MManager {
if (initialized) {
return;
}
- File logFile = SystemFileFactory.INSTANCE.getFile(logFilePath);
+ logFile = SystemFileFactory.INSTANCE.getFile(logFilePath);
try {
tagLogFile = new TagLogFile(config.getSchemaDir(),
MetadataConstant.TAG_LOG);
@@ -183,12 +227,28 @@ public class MManager {
}
private void initFromLog(File logFile) throws IOException {
+ File tmpFile = SystemFileFactory.INSTANCE.getFile(mtreeSnapshotTmpPath);
+ if (tmpFile.exists()) {
+ logger.warn("The last MTree snapshot creation did not complete before a crash, deleting the temporary snapshot file");
+ Files.delete(tmpFile.toPath());
+ }
+
+ File mtreeSnapshot = SystemFileFactory.INSTANCE.getFile(mtreeSnapshotPath);
+ if (!mtreeSnapshot.exists()) {
+ mtree = new MTree();
+ } else {
+ mtree = MTree.deserializeFrom(mtreeSnapshot);
+ }
// init the metadata from the operation log
- mtree = new MTree();
if (logFile.exists()) {
try (FileReader fr = new FileReader(logFile);
BufferedReader br = new BufferedReader(fr)) {
String cmd;
+ int idx = 0;
+ while (idx < mtree.getSnapshotLineNumber()) {
+ cmd = br.readLine();
+ idx++;
+ }
while ((cmd = br.readLine()) != null) {
try {
operation(cmd);
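Recovery thus combines the snapshot with the tail of the log: deserialize the snapshot, skip the mlog lines it already covers, then replay only the rest. A minimal sketch of that skip-and-replay loop, with hypothetical names:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class LogReplay {
    // Return only the commands after the first `snapshotLineNumber` lines.
    public static List<String> tailCommands(BufferedReader br, int snapshotLineNumber)
            throws IOException {
        for (int i = 0; i < snapshotLineNumber; i++) {
            br.readLine(); // already reflected in the snapshot, skip
        }
        List<String> cmds = new ArrayList<>();
        String cmd;
        while ((cmd = br.readLine()) != null) {
            cmds.add(cmd);
        }
        return cmds;
    }
}
```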
@@ -220,6 +280,10 @@ public class MManager {
tagLogFile = null;
}
initialized = false;
+ if (timedCreateMTreeSnapshotThread != null) {
+ timedCreateMTreeSnapshotThread.shutdownNow();
+ timedCreateMTreeSnapshotThread = null;
+ }
} catch (IOException e) {
logger.error("Cannot close metadata log writer, because:", e);
} finally {
@@ -357,9 +421,9 @@ public class MManager {
/**
* Add one timeseries to metadata tree, if the timeseries already exists,
throw exception
*
- * @param path the timeseries path
- * @param dataType the dateType {@code DataType} of the timeseries
- * @param encoding the encoding function {@code Encoding} of the timeseries
+ * @param path the timeseries path
+ * @param dataType the dataType {@code DataType} of the timeseries
+ * @param encoding the encoding function {@code Encoding} of the timeseries
* @param compressor the compressor function {@code Compressor} of the time
series
* @return whether the measurement occurs for the first time in this storage
group (if true, the
* measurement should be registered to the StorageEngine too)
@@ -455,7 +519,8 @@ public class MManager {
logger.debug(String.format(
"Delete: TimeSeries %s's tag info has been removed from tag
inverted index before "
+ "deleting it, tag key is %s, tag value is %s, tlog
offset is %d, contains key %b",
- node.getFullPath(), entry.getKey(), entry.getValue(),
node.getOffset(), tagIndex.containsKey(entry.getKey())));
+ node.getFullPath(), entry.getKey(), entry.getValue(),
node.getOffset(),
+ tagIndex.containsKey(entry.getKey())));
}
}
}
@@ -636,7 +701,7 @@ public class MManager {
* Get all devices under given prefixPath.
*
* @param prefixPath a prefix of a full path. if the wildcard is not at the
tail, then each
- * wildcard can only match one level, otherwise it can
match to the tail.
+ * wildcard can only match one level, otherwise it can match to the tail.
* @return A HashSet instance which stores devices names with given
prefixPath.
*/
public Set<String> getDevices(String prefixPath) throws MetadataException {
@@ -652,9 +717,9 @@ public class MManager {
* Get all nodes from the given level
*
* @param prefixPath can be a prefix of a full path. Can not be a full path.
can not have
- * wildcard. But, the level of the prefixPath can be
smaller than the given
- * level, e.g., prefixPath = root.a while the given level
is 5
- * @param nodeLevel the level can not be smaller than the level of the
prefixPath
+ * wildcard. But, the level of the prefixPath can be smaller than the given
level, e.g.,
+ * prefixPath = root.a while the given level is 5
+ * @param nodeLevel the level can not be smaller than the level of the
prefixPath
* @return A List instance which stores all node at given level
*/
public List<String> getNodesList(String prefixPath, int nodeLevel) throws
MetadataException {
@@ -711,7 +776,7 @@ public class MManager {
* expression in this method is formed by the amalgamation of seriesPath and the character '*'.
*
* @param prefixPath can be a prefix or a full path. if the wildcard is not at the tail, then each
- * wildcard can only match one level, otherwise it can match to the tail.
+ * wildcard can only match one level, otherwise it can match to the tail.
*/
public List<String> getAllTimeseriesName(String prefixPath) throws MetadataException {
lock.readLock().lock();
@@ -751,7 +816,7 @@ public class MManager {
* To calculate the count of nodes in the given level for given prefix path.
*
* @param prefixPath a prefix path or a full path, can not contain '*'
- * @param level the level can not be smaller than the level of the prefixPath
+ * @param level the level can not be smaller than the level of the prefixPath
*/
public int getNodesCountInGivenLevel(String prefixPath, int level) throws MetadataException {
lock.readLock().lock();
@@ -902,7 +967,8 @@ public class MManager {
throws MetadataException {
lock.readLock().lock();
try {
- MNode leaf = mtree.getNodeByPath(device).getChild(measurement);
+ MNode node = mtree.getNodeByPath(device);
+ MNode leaf = node.getChild(measurement);
if (leaf != null) {
return ((MeasurementMNode) leaf).getSchema();
} else {
@@ -989,8 +1055,7 @@ public class MManager {
}
/**
- * get device node, if the storage group is not set, create it when autoCreateSchema is true
- * <p>
+ * get device node, if the storage group is not set, create it when autoCreateSchema is true <p>
* (we develop this method as we need to get the node's lock after we get the lock.writeLock())
*
* <p>!!!!!!Attention!!!!! must call the return node's readUnlock() if you call this method.
@@ -1053,7 +1118,7 @@ public class MManager {
public MNode getDeviceNode(String path) throws MetadataException {
lock.readLock().lock();
- MNode node = null;
+ MNode node;
try {
node = mNodeCache.get(path);
return node;
@@ -1065,10 +1130,10 @@ public class MManager {
}
/**
- * To reduce the String number in memory,
- * use the deviceId from MManager instead of the deviceId read from disk
- *
- * @param deviceId read from disk
+ * To reduce the String number in memory, use the deviceId from MManager instead of the deviceId
+ * read from disk
+ *
+ * @param path read from disk
* @return deviceId
*/
public String getDeviceId(String path) {
@@ -1147,7 +1212,7 @@ public class MManager {
* Check whether the given path contains a storage group change or set the new offset of a
* timeseries
*
- * @param path timeseries
+ * @param path timeseries
* @param offset offset in the tag file
*/
public void changeOffset(String path, long offset) throws MetadataException {
@@ -1177,10 +1242,10 @@ public class MManager {
* upsert tags and attributes key-value for the timeseries if the key has existed, just use the
* new value to update it.
*
- * @param alias newly added alias
- * @param tagsMap newly added tags map
+ * @param alias newly added alias
+ * @param tagsMap newly added tags map
* @param attributesMap newly added attributes map
- * @param fullPath timeseries
+ * @param fullPath timeseries
*/
public void upsertTagsAndAttributes(String alias, Map<String, String> tagsMap,
Map<String, String> attributesMap, String fullPath) throws MetadataException, IOException {
@@ -1254,7 +1319,8 @@ public class MManager {
logger.debug(String.format(
"Upsert: TimeSeries %s's tag info has been removed from tag inverted index "
+ "before deleting it, tag key is %s, tag value is %s, tlog offset is %d, contains key %b",
- leafMNode.getFullPath(), key, beforeValue, leafMNode.getOffset(), tagIndex.containsKey(key)));
+ leafMNode.getFullPath(), key, beforeValue, leafMNode.getOffset(),
+ tagIndex.containsKey(key)));
}
}
}
@@ -1282,7 +1348,7 @@ public class MManager {
* add new attributes key-value for the timeseries
*
* @param attributesMap newly added attributes map
- * @param fullPath timeseries
+ * @param fullPath timeseries
*/
public void addAttributes(Map<String, String> attributesMap, String fullPath)
throws MetadataException, IOException {
@@ -1324,7 +1390,7 @@ public class MManager {
/**
* add new tags key-value for the timeseries
*
- * @param tagsMap newly added tags map
+ * @param tagsMap newly added tags map
* @param fullPath timeseries
*/
public void addTags(Map<String, String> tagsMap, String fullPath)
@@ -1377,7 +1443,7 @@ public class MManager {
/**
* drop tags or attributes of the timeseries
*
- * @param keySet tags key or attributes key
+ * @param keySet tags key or attributes key
* @param fullPath timeseries path
*/
public void dropTagsOrAttributes(Set<String> keySet, String fullPath)
@@ -1434,7 +1500,8 @@ public class MManager {
logger.debug(String.format(
"Drop: TimeSeries %s's tag info has been removed from tag inverted index "
+ "before deleting it, tag key is %s, tag value is %s, tlog offset is %d, contains key %b",
- leafMNode.getFullPath(), key, value, leafMNode.getOffset(), tagIndex.containsKey(key)));
+ leafMNode.getFullPath(), key, value, leafMNode.getOffset(),
+ tagIndex.containsKey(key)));
}
}
@@ -1510,7 +1577,8 @@ public class MManager {
logger.debug(String.format(
"Set: TimeSeries %s's tag info has been removed from tag inverted index "
+ "before deleting it, tag key is %s, tag value is %s, tlog offset is %d, contains key %b",
- leafMNode.getFullPath(), key, beforeValue, leafMNode.getOffset(), tagIndex.containsKey(key)));
+ leafMNode.getFullPath(), key, beforeValue, leafMNode.getOffset(),
+ tagIndex.containsKey(key)));
}
}
tagIndex.computeIfAbsent(key, k -> new HashMap<>())
@@ -1524,8 +1592,8 @@ public class MManager {
/**
* rename the tag or attribute's key of the timeseries
*
- * @param oldKey old key of tag or attribute
- * @param newKey new key of tag or attribute
+ * @param oldKey old key of tag or attribute
+ * @param newKey new key of tag or attribute
* @param fullPath timeseries
*/
public void renameTagOrAttributeKey(String oldKey, String newKey, String fullPath)
@@ -1575,7 +1643,8 @@ public class MManager {
logger.debug(String.format(
"Rename: TimeSeries %s's tag info has been removed from tag inverted index "
+ "before deleting it, tag key is %s, tag value is %s, tlog offset is %d, contains key %b",
- leafMNode.getFullPath(), oldKey, value, leafMNode.getOffset(), tagIndex.containsKey(oldKey)));
+ leafMNode.getFullPath(), oldKey, value, leafMNode.getOffset(),
+ tagIndex.containsKey(oldKey)));
}
}
tagIndex.computeIfAbsent(newKey, k -> new HashMap<>())
@@ -1702,4 +1771,33 @@ public class MManager {
mRemoteSchemaCache.put(path, schema);
}
}
+
+ private void checkMTreeModified() {
+ if (System.currentTimeMillis() - logFile.lastModified() < mtreeSnapshotThresholdTime) {
+ logger.info("MTree snapshot is not created because of active modification");
+ } else if (logWriter.getLineNumber() - lastSnapshotLogLineNumber < mtreeSnapshotInterval) {
+ logger.info(
+ "MTree snapshot need not be created. Current mlog line number: {}, last snapshot line number: {}",
+ logWriter.getLineNumber(), lastSnapshotLogLineNumber);
+ } else {
+ lock.readLock().lock();
+ logger.info("Start creating MTree snapshot. This may take a while...");
+ try {
+ mtree.serializeTo(mtreeSnapshotTmpPath, logWriter.getLineNumber());
+ lastSnapshotLogLineNumber = logWriter.getLineNumber();
+ File tmpFile = SystemFileFactory.INSTANCE.getFile(mtreeSnapshotTmpPath);
+ File snapshotFile = SystemFileFactory.INSTANCE.getFile(mtreeSnapshotPath);
+ if (snapshotFile.exists()) {
+ Files.delete(snapshotFile.toPath());
+ }
+ if (tmpFile.renameTo(snapshotFile)) {
+ logger.info("Finish creating MTree snapshot to {}.", mtreeSnapshotPath);
+ }
+ } catch (IOException e) {
+ logger.warn("Failed to create MTree snapshot to {}", mtreeSnapshotPath, e);
+ } finally {
+ lock.readLock().unlock();
+ }
+ }
+ }
}
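For readers skimming the diff: checkMTreeModified above gates snapshot creation on two conditions, mlog quiescence (the new mtree_snapshot_threshold_time config) and sufficient mlog growth since the last snapshot (mtree_snapshot_interval). A minimal standalone sketch of just that decision logic; the class and parameter names here are illustrative, not IoTDB's actual fields:

```java
// Sketch of the two guard conditions in checkMTreeModified.
// SnapshotPolicy, thresholdTimeMs and snapshotInterval are hypothetical
// names chosen for this example.
public class SnapshotPolicy {
  private final long thresholdTimeMs; // min quiet time of mlog before snapshotting
  private final int snapshotInterval; // min new mlog lines since last snapshot

  public SnapshotPolicy(long thresholdTimeMs, int snapshotInterval) {
    this.thresholdTimeMs = thresholdTimeMs;
    this.snapshotInterval = snapshotInterval;
  }

  /**
   * A snapshot is taken only when the mlog has been quiet for a while
   * AND enough new lines have accumulated since the last snapshot.
   */
  public boolean shouldSnapshot(long nowMs, long mlogLastModifiedMs,
                                int currentLine, int lastSnapshotLine) {
    if (nowMs - mlogLastModifiedMs < thresholdTimeMs) {
      return false; // active modification, skip for now
    }
    return currentLine - lastSnapshotLine >= snapshotInterval;
  }
}
```

Both thresholds default to configuration values in iotdb-engine.properties, so operators can trade snapshot freshness against the cost of serializing a large MTree.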
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/MTree.java b/server/src/main/java/org/apache/iotdb/db/metadata/MTree.java
index 089be37..b4eb3e0 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/MTree.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/MTree.java
@@ -26,6 +26,12 @@ import static org.apache.iotdb.db.query.executor.LastQueryExecutor.calculateLast
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.alibaba.fastjson.serializer.SerializerFeature;
+import java.io.BufferedReader;
+import java.io.BufferedWriter;
+import java.io.File;
+import java.io.FileReader;
+import java.io.FileWriter;
+import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayDeque;
import java.util.ArrayList;
@@ -34,6 +40,7 @@ import java.util.Comparator;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
+import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
@@ -45,6 +52,7 @@ import java.util.regex.Pattern;
import java.util.stream.Stream;
import org.apache.iotdb.db.conf.IoTDBConstant;
import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.engine.fileSystem.SystemFileFactory;
import org.apache.iotdb.db.exception.metadata.AliasAlreadyExistException;
import org.apache.iotdb.db.exception.metadata.IllegalPathException;
import org.apache.iotdb.db.exception.metadata.MetadataException;
@@ -65,6 +73,8 @@ import org.apache.iotdb.tsfile.read.TimeValuePair;
import org.apache.iotdb.tsfile.read.common.Path;
import org.apache.iotdb.tsfile.utils.Pair;
import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
/**
* The hierarchical struct of the Metadata Tree is implemented in this class.
@@ -72,27 +82,40 @@ import org.apache.iotdb.tsfile.write.schema.MeasurementSchema;
public class MTree implements Serializable {
private static final long serialVersionUID = -4200394435237291964L;
+ private static final Logger logger = LoggerFactory.getLogger(MTree.class);
+
private MNode root;
- private transient ThreadLocal<Integer> limit = new ThreadLocal<>();
- private transient ThreadLocal<Integer> offset = new ThreadLocal<>();
- private transient ThreadLocal<Integer> count = new ThreadLocal<>();
- private transient ThreadLocal<Integer> curOffset = new ThreadLocal<>();
+ /**
+ * The number of mlog lines that has been snapshot in mtree.snapshot
+ */
+ private int snapshotLineNumber;
+
+ private static transient ThreadLocal<Integer> limit = new ThreadLocal<>();
+ private static transient ThreadLocal<Integer> offset = new ThreadLocal<>();
+ private static transient ThreadLocal<Integer> count = new ThreadLocal<>();
+ private static transient ThreadLocal<Integer> curOffset = new ThreadLocal<>();
MTree() {
this.root = new MNode(null, IoTDBConstant.PATH_ROOT);
+ this.snapshotLineNumber = 0;
+ }
+
+ private MTree(MNode root, int snapshotLineNumber) {
+ this.root = root;
+ this.snapshotLineNumber = snapshotLineNumber;
}
/**
* Create a timeseries with a full path from root to leaf node Before creating a timeseries, the
* storage group should be set first, throw exception otherwise
*
- * @param path timeseries path
- * @param dataType data type
- * @param encoding encoding
+ * @param path timeseries path
+ * @param dataType data type
+ * @param encoding encoding
* @param compressor compressor
- * @param props props
- * @param alias alias of measurement
+ * @param props props
+ * @param alias alias of measurement
*/
MeasurementMNode createTimeseries(
String path,
@@ -685,7 +708,7 @@ public class MTree implements Serializable {
*
* @param needLast if false, lastTimeStamp in timeseriesSchemaList will be null
* @param timeseriesSchemaList List<timeseriesSchema> result: [name, alias, storage group,
- * dataType, encoding, compression, offset, lastTimeStamp]
+ * dataType, encoding, compression, offset, lastTimeStamp]
*/
private void findPath(MNode node, String[] nodes, int idx, String parent,
List<String[]> timeseriesSchemaList, boolean hasLimit, boolean needLast)
@@ -773,11 +796,11 @@ public class MTree implements Serializable {
/**
* Traverse the MTree to match all child node path in next level
*
- * @param node the current traversing node
- * @param nodes split the prefix path with '.'
- * @param idx the current index of array nodes
+ * @param node the current traversing node
+ * @param nodes split the prefix path with '.'
+ * @param idx the current index of array nodes
* @param parent store the node string having traversed
- * @param res store all matched device names
+ * @param res store all matched device names
* @param length expected length of path
*/
private void findChildNodePathInNextLevel(
@@ -833,10 +856,10 @@ public class MTree implements Serializable {
/**
* Traverse the MTree to match all devices with prefix path.
*
- * @param node the current traversing node
+ * @param node the current traversing node
* @param nodes split the prefix path with '.'
- * @param idx the current index of array nodes
- * @param res store all matched device names
+ * @param idx the current index of array nodes
+ * @param res store all matched device names
*/
private void findDevices(MNode node, String[] nodes, int idx, Set<String> res) {
String nodeReg = MetaUtils.getNodeRegByIdx(idx, nodes);
@@ -899,6 +922,69 @@ public class MTree implements Serializable {
}
}
+ public int getSnapshotLineNumber() {
+ return snapshotLineNumber;
+ }
+
+ public void serializeTo(String snapshotPath, int lineNumber) throws IOException {
+ try (BufferedWriter bw = new BufferedWriter(
+ new FileWriter(SystemFileFactory.INSTANCE.getFile(snapshotPath)))) {
+ bw.write(String.valueOf(lineNumber));
+ bw.newLine();
+ root.serializeTo(bw);
+ }
+ }
+
+ public static MTree deserializeFrom(File mtreeSnapshot) {
+ try (BufferedReader br = new BufferedReader(new FileReader(mtreeSnapshot))) {
+ int snapshotLineNumber = Integer.parseInt(br.readLine());
+ String s;
+ Deque<MNode> nodeStack = new ArrayDeque<>();
+ MNode node = null;
+
+ while ((s = br.readLine()) != null) {
+ String[] nodeInfo = s.split(",");
+ short nodeType = Short.parseShort(nodeInfo[0]);
+ if (nodeType == MetadataConstant.STORAGE_GROUP_MNODE_TYPE) {
+ node = StorageGroupMNode.deserializeFrom(nodeInfo);
+ } else if (nodeType == MetadataConstant.MEASUREMENT_MNODE_TYPE) {
+ node = MeasurementMNode.deserializeFrom(nodeInfo);
+ } else {
+ node = new MNode(null, nodeInfo[1]);
+ }
+
+ int childrenSize = Integer.parseInt(nodeInfo[nodeInfo.length - 1]);
+ if (childrenSize == 0) {
+ nodeStack.push(node);
+ } else {
+ Map<String, MNode> childrenMap = new LinkedHashMap<>();
+ for (int i = 0; i < childrenSize; i++) {
+ MNode child = nodeStack.removeFirst();
+ child.setParent(node);
+ childrenMap.put(child.getName(), child);
+ if (child instanceof MeasurementMNode) {
+ String alias = ((MeasurementMNode) child).getAlias();
+ if (alias != null) {
+ node.addAlias(alias, child);
+ }
+ }
+ }
+ node.setChildren(childrenMap);
+ nodeStack.push(node);
+ }
+ }
+ return new MTree(node, snapshotLineNumber);
+ } catch (IOException e) {
+ logger.warn("Failed to deserialize from {}. Use a new MTree.", mtreeSnapshot.getPath());
+ return new MTree();
+ } finally {
+ limit = new ThreadLocal<>();
+ offset = new ThreadLocal<>();
+ count = new ThreadLocal<>();
+ curOffset = new ThreadLocal<>();
+ }
+ }
+
@Override
public String toString() {
JSONObject jsonObject = new JSONObject();
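A note on the new deserializeFrom above: it relies on the snapshot being written in post-order (root.serializeTo emits every child before its parent, and each record ends with its child count), which is why a single stack suffices to rebuild the tree in one pass. A simplified sketch of that reconstruction, using a toy Node type and a hypothetical "name,childCount" record format in place of the real MNode serialization:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class PostOrderRebuild {

  // Simplified stand-in for MNode: just a name and a child list.
  static final class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
  }

  /**
   * Rebuild a tree from post-order records of the form "name,childCount".
   * Children are always serialized before their parent, so when a record
   * declares N children, the top N stack entries are exactly those children
   * (popped here in reverse serialization order).
   */
  static Node rebuild(List<String> records) {
    Deque<Node> stack = new ArrayDeque<>();
    Node node = null;
    for (String record : records) {
      String[] fields = record.split(",");
      node = new Node(fields[0]);
      int childCount = Integer.parseInt(fields[fields.length - 1]);
      for (int i = 0; i < childCount; i++) {
        node.children.add(stack.pop());
      }
      stack.push(node);
    }
    return node; // the last record is the root
  }
}
```

The real method additionally dispatches on the node-type tag (MNODE_TYPE, STORAGE_GROUP_MNODE_TYPE, MEASUREMENT_MNODE_TYPE) and restores measurement aliases, but the stack mechanics are the same.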
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/MetadataConstant.java b/server/src/main/java/org/apache/iotdb/db/metadata/MetadataConstant.java
index 5aeab5b..3852df5 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/MetadataConstant.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/MetadataConstant.java
@@ -28,6 +28,8 @@ public class MetadataConstant {
public static final String METADATA_LOG = "mlog.txt";
public static final String TAG_LOG = "tlog.txt";
public static final String MTREE_SNAPSHOT = "mtree.snapshot";
+ public static final String MTREE_SNAPSHOT_TMP = "mtree.snapshot.tmp";
+
public static final short MNODE_TYPE = 0;
public static final short STORAGE_GROUP_MNODE_TYPE = 1;
diff --git a/server/src/main/java/org/apache/iotdb/db/metadata/mnode/MNode.java b/server/src/main/java/org/apache/iotdb/db/metadata/mnode/MNode.java
index a34df03..002540d 100644
--- a/server/src/main/java/org/apache/iotdb/db/metadata/mnode/MNode.java
+++ b/server/src/main/java/org/apache/iotdb/db/metadata/mnode/MNode.java
@@ -85,7 +85,7 @@ public class MNode implements Serializable {
}
/**
- * If delete a leafMNode, lock its parent, if delete an InternalNode, lock itself
+ * delete a child
*/
public void deleteChild(String name) throws DeleteFailedException {
if (children != null && children.containsKey(name)) {
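A closing note on durability: the new MTREE_SNAPSHOT_TMP constant exists because checkMTreeModified serializes into mtree.snapshot.tmp first and only then renames it over mtree.snapshot, so a crash mid-serialization can leave at worst a stale .tmp file, never a truncated snapshot. A generic sketch of that write-then-rename pattern using java.nio (the file names mirror MetadataConstant, but this is not the IoTDB implementation):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TmpThenRename {

  /**
   * Write a snapshot via a temporary file, then rename it into place.
   * Readers only ever see either the complete old snapshot or the
   * complete new one.
   */
  public static Path writeSnapshot(Path dir, String content) throws IOException {
    Path tmp = dir.resolve("mtree.snapshot.tmp");
    Path snapshot = dir.resolve("mtree.snapshot");
    Files.write(tmp, content.getBytes(StandardCharsets.UTF_8));
    // REPLACE_EXISTING covers the delete-then-rename step in one call.
    Files.move(tmp, snapshot, StandardCopyOption.REPLACE_EXISTING);
    return snapshot;
  }
}
```

On startup, a leftover .tmp file can simply be discarded and the surviving mtree.snapshot (plus the mlog tail recorded by its line-number header) replayed.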