[GitHub] Beyyes commented on a change in pull request #31: Fix sonar

2019-01-27 Thread GitBox
Beyyes commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251306760
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineExecutorWithoutTimeGenerator.java
 ##
 @@ -71,33 +71,50 @@ public QueryDataSet executeWithGlobalTimeFilter()
       QueryDataSource queryDataSource = QueryDataSourceManager.getQueryDataSource(jobId, path);
 
       // add data type
-      dataTypes.add(MManager.getInstance().getSeriesType(path.getFullPath()));
+      try {
+        dataTypes.add(MManager.getInstance().getSeriesType(path.getFullPath()));
+      } catch (PathErrorException e) {
+        throw new FileNodeManagerException(e);
+      }
 
       PriorityMergeReader priorityReader = new PriorityMergeReader();
 
       // sequence reader for one sealed tsfile
-      SequenceDataReader tsFilesReader = new SequenceDataReader(queryDataSource.getSeqDataSource(),
-          timeFilter);
-      priorityReader.addReaderWithPriority(tsFilesReader, 1);
+      SequenceDataReader tsFilesReader = null;
+      try {
+        tsFilesReader = new SequenceDataReader(queryDataSource.getSeqDataSource(),
+            timeFilter);
+        priorityReader.addReaderWithPriority(tsFilesReader, 1);
+      } catch (IOException e) {
+        throw new FileNodeManagerException(e);
+      }
 
       // unseq reader for all chunk groups in unSeqFile
-      PriorityMergeReader unSeqMergeReader = SeriesReaderFactory.getInstance()
-          .createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), timeFilter);
-      priorityReader.addReaderWithPriority(unSeqMergeReader, 2);
+      PriorityMergeReader unSeqMergeReader = null;
+      try {
+        unSeqMergeReader = SeriesReaderFactory.getInstance()
+            .createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), timeFilter);
+        priorityReader.addReaderWithPriority(unSeqMergeReader, 2);
 
 Review comment:
   Replacing it with an enum class is not enough, because merge-sorting un-sequence files may need many distinct priority numbers.
   
   I have added a comment to this method.
   ```
   /**
    * The bigger the priority value is, the higher the priority of this reader is
    */
   ```
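   To make the point concrete, here is a minimal sketch of why an open-ended int works better than a fixed enum here: when merge-sorting many un-sequence readers, each reader may need its own priority value. The class and registry below are hypothetical stand-ins, not the actual IoTDB code.
   ```
   // Hypothetical sketch: one priority per un-sequence reader, so the set of
   // priorities grows with the number of files and cannot be a fixed enum.
   import java.util.ArrayList;
   import java.util.List;

   public class PrioritySketch {

     /** The bigger the priority value is, the higher the priority of this reader is. */
     static void addReaderWithPriority(List<int[]> registry, int readerId, int priority) {
       registry.add(new int[]{readerId, priority});
     }

     public static void main(String[] args) {
       List<int[]> registry = new ArrayList<>();
       int unSeqFileCount = 5; // one reader per un-sequence file
       for (int i = 0; i < unSeqFileCount; i++) {
         addReaderWithPriority(registry, i, i + 1); // priorities 1..5
       }
       System.out.println("registered readers: " + registry.size());
     }
   }
   ```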


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kr11 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
kr11 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251418637
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineExecutorWithTimeGenerator.java
 ##
 @@ -59,18 +59,24 @@
    * @throws IOException IOException
    * @throws FileNodeManagerException FileNodeManagerException
    */
-  public QueryDataSet execute() throws IOException, FileNodeManagerException {
+  public QueryDataSet execute() throws FileNodeManagerException {
 
     QueryTokenManager.getInstance()
         .beginQueryOfGivenQueryPaths(jobId, queryExpression.getSelectedSeries());
     QueryTokenManager.getInstance()
         .beginQueryOfGivenExpression(jobId, queryExpression.getExpression());
 
-    EngineTimeGenerator timestampGenerator = new EngineTimeGenerator(jobId,
-        queryExpression.getExpression());
+    EngineTimeGenerator timestampGenerator = null;
+    List readersOfSelectedSeries = null;
+    try {
+      timestampGenerator = new EngineTimeGenerator(jobId,
+          queryExpression.getExpression());
 
-    List readersOfSelectedSeries = getReadersOfSelectedPaths(
-        queryExpression.getSelectedSeries());
+      readersOfSelectedSeries = getReadersOfSelectedPaths(
+          queryExpression.getSelectedSeries());
+    } catch (IOException ex) {
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jixuan1989 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
jixuan1989 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251423647
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineExecutorWithoutTimeGenerator.java
 ##
 @@ -71,33 +71,50 @@ public QueryDataSet executeWithGlobalTimeFilter()
       QueryDataSource queryDataSource = QueryDataSourceManager.getQueryDataSource(jobId, path);
 
       // add data type
-      dataTypes.add(MManager.getInstance().getSeriesType(path.getFullPath()));
+      try {
+        dataTypes.add(MManager.getInstance().getSeriesType(path.getFullPath()));
+      } catch (PathErrorException e) {
+        throw new FileNodeManagerException(e);
+      }
 
       PriorityMergeReader priorityReader = new PriorityMergeReader();
 
       // sequence reader for one sealed tsfile
-      SequenceDataReader tsFilesReader = new SequenceDataReader(queryDataSource.getSeqDataSource(),
-          timeFilter);
-      priorityReader.addReaderWithPriority(tsFilesReader, 1);
+      SequenceDataReader tsFilesReader = null;
+      try {
+        tsFilesReader = new SequenceDataReader(queryDataSource.getSeqDataSource(),
+            timeFilter);
+        priorityReader.addReaderWithPriority(tsFilesReader, 1);
+      } catch (IOException e) {
+        throw new FileNodeManagerException(e);
+      }
 
       // unseq reader for all chunk groups in unSeqFile
-      PriorityMergeReader unSeqMergeReader = SeriesReaderFactory.getInstance()
-          .createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), timeFilter);
-      priorityReader.addReaderWithPriority(unSeqMergeReader, 2);
+      PriorityMergeReader unSeqMergeReader = null;
+      try {
+        unSeqMergeReader = SeriesReaderFactory.getInstance()
+            .createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), timeFilter);
+        priorityReader.addReaderWithPriority(unSeqMergeReader, 2);
 
 Review comment:
   Or, we can define an int array, `private static final int[] priority = new int[]{1, 2, 3, 4}`; then readers will know that the highest priority is 4.
   You could leave this for a future change...
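   For illustration, a hedged sketch of that suggestion (the class name is made up, not the real reader):
   ```
   // Hypothetical sketch: callers can discover the highest priority from the array.
   public class ReaderPriorities {

     private static final int[] PRIORITY = new int[]{1, 2, 3, 4};

     public static int highest() {
       return PRIORITY[PRIORITY.length - 1];
     }

     public static void main(String[] args) {
       System.out.println("highest priority = " + highest()); // prints 4
     }
   }
   ```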


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jixuan1989 commented on issue #31: Fix sonar

2019-01-28 Thread GitBox
jixuan1989 commented on issue #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#issuecomment-458156997
 
 
   This PR has passed the Jenkins pipeline: https://builds.apache.org/job/IoTDB-Pipeline/job/fix_sonar/7
   
   Does anyone have comments on this PR? 
   If not, I will merge it tonight.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251406494
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/postback/conf/PostBackSenderConfig.java
 ##
 @@ -1,19 +1,15 @@
 /**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements.  See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership.  The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with the License.  You may obtain
+ * a copy of the License at
  *
- * http://www.apache.org/licenses/LICENSE-2.0
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied.  See the License for the specific language governing permissions and limitations
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251406533
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/postback/conf/PostBackSenderDescriptor.java
 ##
 @@ -87,46 +83,42 @@ private void loadProps() {
 try {
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kr11 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
kr11 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251419201
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/service/IoTDB.java
 ##
 @@ -168,6 +169,8 @@ private void systemDataRecovery() throws RecoverException {
   private static class IoTDBHolder {
 
     private static final IoTDB INSTANCE = new IoTDB();
+
+    private IoTDBHolder() {}
 
 Review comment:
   fixed
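   For context, a minimal sketch of the initialization-on-demand holder pattern the diff follows; the class names are placeholders, not the real IoTDB service class. The private holder constructor is what the Sonar rule asks for, and the JVM still guarantees lazy, thread-safe creation of INSTANCE.
   ```
   // Hypothetical sketch of the holder-based singleton with a private holder constructor.
   public class ServiceSingletonSketch {

     private ServiceSingletonSketch() {
     }

     public static ServiceSingletonSketch getInstance() {
       return Holder.INSTANCE;
     }

     private static class Holder {

       private static final ServiceSingletonSketch INSTANCE = new ServiceSingletonSketch();

       private Holder() {
       }
     }
   }
   ```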


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jixuan1989 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
jixuan1989 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251461180
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/postback/utils/CreateDataSender1.java
 ##
 @@ -53,6 +58,7 @@
   private static final int MAX_FLOAT = 30;
   private static final int STRING_LENGTH = 5;
   private static final int BATCH_SQL = 1;
+  private static final Logger LOGGER = LoggerFactory.getLogger(CreateDataSender1.class);
 
 Review comment:
   `logger` is better than `LOGGER`, because it is a class instance.
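   For reference, a small sketch of the two naming conventions under discussion (the class name is illustrative); SLF4J itself accepts either:
   ```
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   public class NamingSketch {

     // constant-style name, matching the usual static-final naming convention
     private static final Logger LOGGER = LoggerFactory.getLogger(NamingSketch.class);

     // instance-style name, emphasising that the field holds an object, not a constant value
     private static final Logger logger = LoggerFactory.getLogger(NamingSketch.class);
   }
   ```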


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kr11 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
kr11 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251228850
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineQueryRouter.java
 ##
 @@ -73,18 +73,14 @@ public QueryDataSet query(QueryExpression queryExpression)
         return engineExecutor.execute();
       }
 
-    } catch (QueryFilterOptimizationException | PathErrorException e) {
-      throw new IOException(e);
+    } catch (QueryFilterOptimizationException e) {
+      throw new FileNodeManagerException(new IOException(e));
 
 Review comment:
   the exception conversion is a little hard to understand.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kr11 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
kr11 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251411639
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineQueryRouter.java
 ##
 @@ -73,18 +73,14 @@ public QueryDataSet query(QueryExpression queryExpression)
         return engineExecutor.execute();
       }
 
-    } catch (QueryFilterOptimizationException | PathErrorException e) {
-      throw new IOException(e);
+    } catch (QueryFilterOptimizationException e) {
+      throw new FileNodeManagerException(new IOException(e));
 
 Review comment:
   The IOException has been removed; `new FileNodeManagerException(e)` is now thrown directly.
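   A hedged sketch of the resulting conversion (the exception types follow the diff; the optimize() body is a placeholder, not the actual EngineQueryRouter code):
   ```
   // Illustrative only: the optimizer exception is wrapped once in the domain
   // exception, with no intermediate IOException, so the original cause stays visible.
   class ExceptionConversionSketch {

     static class QueryFilterOptimizationException extends Exception {
       QueryFilterOptimizationException(String msg) { super(msg); }
     }

     static class FileNodeManagerException extends Exception {
       FileNodeManagerException(Throwable cause) { super(cause); }
     }

     void query() throws FileNodeManagerException {
       try {
         optimize();
       } catch (QueryFilterOptimizationException e) {
         throw new FileNodeManagerException(e); // wrap the cause directly
       }
     }

     private void optimize() throws QueryFilterOptimizationException {
       // placeholder for the real expression optimization step
     }
   }
   ```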


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jixuan1989 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
jixuan1989 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251472263
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineExecutorWithoutTimeGenerator.java
 ##
 @@ -71,33 +71,50 @@ public QueryDataSet executeWithGlobalTimeFilter()
       QueryDataSource queryDataSource = QueryDataSourceManager.getQueryDataSource(jobId, path);
 
       // add data type
-      dataTypes.add(MManager.getInstance().getSeriesType(path.getFullPath()));
+      try {
+        dataTypes.add(MManager.getInstance().getSeriesType(path.getFullPath()));
+      } catch (PathErrorException e) {
+        throw new FileNodeManagerException(e);
+      }
 
       PriorityMergeReader priorityReader = new PriorityMergeReader();
 
       // sequence reader for one sealed tsfile
-      SequenceDataReader tsFilesReader = new SequenceDataReader(queryDataSource.getSeqDataSource(),
-          timeFilter);
-      priorityReader.addReaderWithPriority(tsFilesReader, 1);
+      SequenceDataReader tsFilesReader = null;
+      try {
+        tsFilesReader = new SequenceDataReader(queryDataSource.getSeqDataSource(),
+            timeFilter);
+        priorityReader.addReaderWithPriority(tsFilesReader, 1);
+      } catch (IOException e) {
+        throw new FileNodeManagerException(e);
+      }
 
       // unseq reader for all chunk groups in unSeqFile
-      PriorityMergeReader unSeqMergeReader = SeriesReaderFactory.getInstance()
-          .createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), timeFilter);
-      priorityReader.addReaderWithPriority(unSeqMergeReader, 2);
+      PriorityMergeReader unSeqMergeReader = null;
+      try {
+        unSeqMergeReader = SeriesReaderFactory.getInstance()
+            .createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), timeFilter);
+        priorityReader.addReaderWithPriority(unSeqMergeReader, 2);
 
 Review comment:
   Fixed. Two final fields are now defined in PriorityMergeReader:
   `public static final int LOW_PRIORITY = 1;`
   `public static final int HIGH_PRIORITY = 2;`
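   For illustration, a hedged sketch of how those constants replace the magic numbers 1 and 2 in the diff above (the skeleton below is hypothetical, not the real PriorityMergeReader):
   ```
   // Hypothetical skeleton: the sequence reader keeps the lower priority and the
   // un-sequence reader the higher one, mirroring the 1 and 2 used above.
   class MergePrioritySketch {

     public static final int LOW_PRIORITY = 1;
     public static final int HIGH_PRIORITY = 2;

     void addReaderWithPriority(Object reader, int priority) {
       // merge bookkeeping omitted
     }

     void wire(Object tsFilesReader, Object unSeqMergeReader) {
       addReaderWithPriority(tsFilesReader, LOW_PRIORITY);
       addReaderWithPriority(unSeqMergeReader, HIGH_PRIORITY);
     }
   }
   ```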


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] jixuan1989 merged pull request #31: Fix sonar

2019-01-28 Thread GitBox
jixuan1989 merged pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251406589
 
 

 ##
 File path: iotdb/src/test/java/org/apache/iotdb/db/postback/sender/FileManagerTest.java
 ##
 @@ -55,20 +56,19 @@ public void setUp() throws Exception {
   }
 
   @After
-  public void tearDown() throws Exception {
+  public void tearDown() throws IOException, InterruptedException {
     Thread.sleep(1000);
-    delete(new File(POST_BACK_DIRECTORY_TEST));
-    new File(POST_BACK_DIRECTORY_TEST).delete();
+    Files.deleteIfExists((java.nio.file.Path) new Path(POST_BACK_DIRECTORY_TEST));
   }
 
-  public void delete(File file) {
+  public void delete(File file) throws IOException {
     if (file.isFile() || file.list().length == 0) {
-      file.delete();
+      Files.deleteIfExists((java.nio.file.Path) new Path(file.getAbsolutePath()));
 
 Review comment:
   fixed
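   Assuming the intent is a plain filesystem delete, a minimal sketch using only java.nio.file types (the directory name is illustrative, not the test's constant):
   ```
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Paths;

   public class DeleteSketch {

     static void deleteIfExists(String pathName) throws IOException {
       // Files.deleteIfExists is a no-op (returns false) when the path is absent
       Files.deleteIfExists(Paths.get(pathName));
     }

     public static void main(String[] args) throws IOException {
       deleteIfExists("postback_test_dir");
     }
   }
   ```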


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251406638
 
 

 ##
 File path: iotdb/src/test/java/org/apache/iotdb/db/postback/sender/FileManagerTest.java
 ##
 @@ -368,7 +364,7 @@ public void testGetSendingFileList() throws IOException {
 
   private boolean isEmpty(Map> sendingFileList) {
     for (Entry> entry : sendingFileList.entrySet()) {
-      if (entry.getValue().size() != 0) {
+      if (entry.getValue().isEmpty()) {
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251406922
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/postback/conf/PostBackSenderDescriptor.java
 ##
 @@ -87,46 +87,42 @@ private void loadProps() {
     try {
       properties.load(inputStream);
 
-      conf.serverIp = properties.getProperty("server_ip", conf.serverIp);
-      conf.serverPort = Integer
-          .parseInt(properties.getProperty("server_port", conf.serverPort + ""));
+      conf.setServerIp(properties.getProperty("server_ip", conf.getServerIp()));
+      conf.setServerPort(Integer
+          .parseInt(properties.getProperty("server_port", conf.getServerPort() + "")));
 
-      conf.clientPort = Integer
-          .parseInt(properties.getProperty("client_port", conf.clientPort + ""));
-      conf.uploadCycleInSeconds = Integer
-          .parseInt(
-              properties.getProperty("upload_cycle_in_seconds", conf.uploadCycleInSeconds + ""));
-      conf.schemaPath = properties.getProperty("iotdb_schema_directory", conf.schemaPath);
-      conf.isClearEnable = Boolean
-          .parseBoolean(properties.getProperty("is_clear_enable", conf.isClearEnable + ""));
-      conf.uuidPath = conf.dataDirectory + "postback" + File.separator + "uuid.txt";
-      conf.lastFileInfo =
-          conf.dataDirectory + "postback" + File.separator + "lastLocalFileList.txt";
-
-      String[] snapshots = new String[conf.iotdbBufferwriteDirectory.length];
-      for (int i = 0; i < conf.iotdbBufferwriteDirectory.length; i++) {
-        conf.iotdbBufferwriteDirectory[i] = new File(conf.iotdbBufferwriteDirectory[i])
-            .getAbsolutePath();
-        if (!conf.iotdbBufferwriteDirectory[i].endsWith(File.separator)) {
-          conf.iotdbBufferwriteDirectory[i] = conf.iotdbBufferwriteDirectory[i] + File.separator;
+      conf.setClientPort(Integer
+          .parseInt(properties.getProperty("client_port", conf.getClientPort() + "")));
+      conf.setUploadCycleInSeconds(Integer.parseInt(properties
+          .getProperty("upload_cycle_in_seconds", conf.getUploadCycleInSeconds() + "")));
+      conf.setSchemaPath(properties.getProperty("iotdb_schema_directory", conf.getSchemaPath()));
+      conf.setClearEnable(Boolean
+          .parseBoolean(properties.getProperty("is_clear_enable", conf.getClearEnable() + "")));
+      conf.setUuidPath(conf.getDataDirectory() + POSTBACK + File.separator + "uuid.txt");
+      conf.setLastFileInfo(
+          conf.getDataDirectory() + POSTBACK + File.separator + "lastLocalFileList.txt");
+      String[] iotdbBufferwriteDirectory = conf.getIotdbBufferwriteDirectory();
+      String[] snapshots = new String[conf.getIotdbBufferwriteDirectory().length];
+      for (int i = 0; i < conf.getIotdbBufferwriteDirectory().length; i++) {
+        iotdbBufferwriteDirectory[i] = new File(iotdbBufferwriteDirectory[i]).getAbsolutePath();
+        if (!iotdbBufferwriteDirectory[i].endsWith(File.separator)) {
+          iotdbBufferwriteDirectory[i] = iotdbBufferwriteDirectory[i] + File.separator;
         }
-        snapshots[i] =
-            conf.iotdbBufferwriteDirectory[i] + "postback" + File.separator + "dataSnapshot"
-                + File.separator;
+        snapshots[i] = iotdbBufferwriteDirectory[i] + POSTBACK + File.separator + "dataSnapshot"
+            + File.separator;
       }
-      conf.snapshotPaths = snapshots;
+      conf.setIotdbBufferwriteDirectory(iotdbBufferwriteDirectory);
+      conf.setSnapshotPaths(snapshots);
     } catch (IOException e) {
       LOGGER.warn("Cannot load config file because {}, use default configuration", e.getMessage());
     } catch (Exception e) {
       LOGGER.warn("Error format in config file because {}, use default configuration",
           e.getMessage());
     } finally {
-      if (inputStream != null) {
-        try {
-          inputStream.close();
-        } catch (IOException e) {
-          LOGGER.error("Fail to close config file input stream because {}", e.getMessage());
-        }
+      try {
+        inputStream.close();
+      } catch (IOException e) {
+        LOGGER.error("Fail to close config file input stream because {}", e.getMessage());
 
 Review comment:
   fixed
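   As a side note, a hedged sketch of the same loading logic with try-with-resources, which makes the explicit finally/close block unnecessary; the file name and property key are illustrative, not the real PostBack configuration:
   ```
   import java.io.FileInputStream;
   import java.io.IOException;
   import java.util.Properties;

   public class LoadPropsSketch {

     static Properties loadProps(String fileName) {
       Properties properties = new Properties();
       // the stream is closed automatically, even when load() throws
       try (FileInputStream inputStream = new FileInputStream(fileName)) {
         properties.load(inputStream);
       } catch (IOException e) {
         System.err.println("Cannot load config file, using default configuration: " + e.getMessage());
       }
       return properties;
     }

     public static void main(String[] args) {
       Properties p = loadProps("postback-sender.properties"); // hypothetical file name
       System.out.println("server_port = " + p.getProperty("server_port", "unset"));
     }
   }
   ```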


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251406869
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/postback/conf/PostBackSenderDescriptor.java
 ##
 @@ -87,46 +87,42 @@ private void loadProps() {
     try {
       properties.load(inputStream);
 
-      conf.serverIp = properties.getProperty("server_ip", conf.serverIp);
-      conf.serverPort = Integer
-          .parseInt(properties.getProperty("server_port", conf.serverPort + ""));
+      conf.setServerIp(properties.getProperty("server_ip", conf.getServerIp()));
+      conf.setServerPort(Integer
+          .parseInt(properties.getProperty("server_port", conf.getServerPort() + "")));
 
-      conf.clientPort = Integer
-          .parseInt(properties.getProperty("client_port", conf.clientPort + ""));
-      conf.uploadCycleInSeconds = Integer
-          .parseInt(
-              properties.getProperty("upload_cycle_in_seconds", conf.uploadCycleInSeconds + ""));
-      conf.schemaPath = properties.getProperty("iotdb_schema_directory", conf.schemaPath);
-      conf.isClearEnable = Boolean
-          .parseBoolean(properties.getProperty("is_clear_enable", conf.isClearEnable + ""));
-      conf.uuidPath = conf.dataDirectory + "postback" + File.separator + "uuid.txt";
-      conf.lastFileInfo =
-          conf.dataDirectory + "postback" + File.separator + "lastLocalFileList.txt";
-
-      String[] snapshots = new String[conf.iotdbBufferwriteDirectory.length];
-      for (int i = 0; i < conf.iotdbBufferwriteDirectory.length; i++) {
-        conf.iotdbBufferwriteDirectory[i] = new File(conf.iotdbBufferwriteDirectory[i])
-            .getAbsolutePath();
-        if (!conf.iotdbBufferwriteDirectory[i].endsWith(File.separator)) {
-          conf.iotdbBufferwriteDirectory[i] = conf.iotdbBufferwriteDirectory[i] + File.separator;
+      conf.setClientPort(Integer
+          .parseInt(properties.getProperty("client_port", conf.getClientPort() + "")));
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] kr11 commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
kr11 commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251404197
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/postback/receiver/ServerServiceImpl.java
 ##
 @@ -320,318 +331,298 @@ public void afterReceiving() {
    */
   @Override
   public void getFileNodeInfo() throws TException {
-// String filePath = postbackPath + uuid.get() + File.separator + "data";
-// File root = new File(filePath);
-// File[] files = root.listFiles();
-// int num = 0;
-// for (File storageGroupPB : files) {
-// List filesPath = new ArrayList<>();
-// File[] filesSG = storageGroupPB.listFiles();
-// for (File fileTF : filesSG) { // fileTF means TsFiles
-// Map startTimeMap = new HashMap<>();
-// Map endTimeMap = new HashMap<>();
-// TsRandomAccessLocalFileReader input = null;
-// try {
-// input = new TsRandomAccessLocalFileReader(fileTF.getAbsolutePath());
-// FileReader reader = new FileReader(input);
-// Map deviceIdMap = reader.getFileMetaData().getDeviceMap();
-// Iterator it = deviceIdMap.keySet().iterator();
-// while (it.hasNext()) {
-// String key = it.next().toString(); // key represent device
-// TsDevice deltaObj = deviceIdMap.get(key);
-// startTimeMap.put(key, deltaObj.startTime);
-// endTimeMap.put(key, deltaObj.endTime);
-// }
-// } catch (Exception e) {
-// LOGGER.error("IoTDB post back receiver: unable to read tsfile {} because {}",
-// fileTF.getAbsolutePath(), e.getMessage());
-// } finally {
-// try {
-// input.close();
-// } catch (IOException e) {
-// LOGGER.error("IoTDB receiver : Cannot close file stream {} because {}",
-// fileTF.getAbsolutePath(), e.getMessage());
-// }
-// }
-// fileNodeStartTime.get().put(fileTF.getAbsolutePath(), startTimeMap);
-// fileNodeEndTime.get().put(fileTF.getAbsolutePath(), endTimeMap);
-// filesPath.add(fileTF.getAbsolutePath());
-// num++;
-// LOGGER.info("IoTDB receiver : Getting FileNode Info has complete : " + num + "/" +
-// fileNum.get());
-// }
-// fileNodeMap.get().put(storageGroupPB.getName(), filesPath);
-// }
+    String filePath = postbackPath + uuid.get() + File.separator + "data";
+    File root = new File(filePath);
+    File[] files = root.listFiles();
+    int num = 0;
+    for (File storageGroupPB : files) {
+      List filesPath = new ArrayList<>();
+      File[] filesSG = storageGroupPB.listFiles();
+      for (File fileTF : filesSG) { // fileTF means TsFiles
+        Map startTimeMap = new HashMap<>();
+        Map endTimeMap = new HashMap<>();
+        TsFileSequenceReader reader = null;
+        try {
+          reader = new TsFileSequenceReader(fileTF.getAbsolutePath());
+          Map deviceIdMap = reader.readFileMetadata().getDeviceMap();
+          Iterator it = deviceIdMap.keySet().iterator();
+          while (it.hasNext()) {
+            String key = it.next(); // key represent device
+            TsDeviceMetadataIndex device = deviceIdMap.get(key);
+            startTimeMap.put(key, device.getStartTime());
+            endTimeMap.put(key, device.getEndTime());
+          }
+        } catch (Exception e) {
+          LOGGER.error("IoTDB post back receiver: unable to read tsfile {} because {}",
+              fileTF.getAbsolutePath(), e.getMessage());
+        } finally {
+          try {
+            if (reader != null) {
+              reader.close();
+            }
+          } catch (IOException e) {
+            LOGGER.error("IoTDB receiver : Cannot close file stream {} because {}",
+                fileTF.getAbsolutePath(), e.getMessage());
+          }
+        }
+        fileNodeStartTime.get().put(fileTF.getAbsolutePath(), startTimeMap);
+        fileNodeEndTime.get().put(fileTF.getAbsolutePath(), endTimeMap);
+        filesPath.add(fileTF.getAbsolutePath());
+        num++;
+        LOGGER.info(String
+            .format("IoTDB receiver : Getting FileNode Info has complete : %d/%d", num,
+                fileNum.get()));
+      }
+      fileNodeMap.get().put(storageGroupPB.getName(), filesPath);
+    }
   }
 
   /**
* Insert all data in the tsfile into IoTDB.
*/
   @Override
   public void mergeOldData(String filePath) throws TException {
-// Set timeseries = new HashSet<>();
-// TsRandomAccessLocalFileReader input = null;
-// Connection connection = null;
-// Statement statement = null;
-// try {
-// Class.forName(JDBC_DRIVER_NAME);
-// connection = DriverManager.getConnection("jdbc:iotdb://localhost:" +
-// tsfileDBConfig.rpcPort + "/", "root",
-// "root");
-// statement = connection.createStatement();
-// int count = 0;
-//
-// input = new TsRandomAccessLocalFileReader(filePath);
-// FileReader reader = new FileReader(input);
-// Map deviceIdMap = reader.getFileMetaData().getDeviceMap();
-   

[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251407138
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/query/control/FileReaderManager.java
 ##
 @@ -196,5 +197,7 @@ public ServiceType getID() {
 
   private static class FileReaderManagerHelper {
     private static final FileReaderManager INSTANCE = new FileReaderManager();
+
+    private FileReaderManagerHelper() { }
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] fanhualta commented on a change in pull request #31: Fix sonar

2019-01-28 Thread GitBox
fanhualta commented on a change in pull request #31: Fix sonar
URL: https://github.com/apache/incubator-iotdb/pull/31#discussion_r251407053
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/postback/utils/CreateDataSender1.java
 ##
 @@ -53,6 +58,7 @@
   private static final int MAX_FLOAT = 30;
   private static final int STRING_LENGTH = 5;
   private static final int BATCH_SQL = 1;
+  private static final Logger LOGGER = LoggerFactory.getLogger(CreateDataSender1.class);
 
 Review comment:
   It's ok to use LOGGER.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] EJTTianYu opened a new pull request #35: Fix sonar tianyu

2019-01-28 Thread GitBox
EJTTianYu opened a new pull request #35: Fix sonar tianyu
URL: https://github.com/apache/incubator-iotdb/pull/35
 
 
   fix sonar by gouwangminhao


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] wujysh opened a new pull request #127: Minor modification of README.md

2019-04-01 Thread GitBox
wujysh opened a new pull request #127: Minor modification of README.md
URL: https://github.com/apache/incubator-iotdb/pull/127
 
 
   - Remove extra spaces and add missing ones
   - Recommend using `mvnw`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] little-emotion commented on a change in pull request #123: provide unified query resource control interface

2019-04-02 Thread GitBox
little-emotion commented on a change in pull request #123: provide unified 
query resource control interface
URL: https://github.com/apache/incubator-iotdb/pull/123#discussion_r271241580
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/service/TSServiceImpl.java
 ##
 @@ -104,6 +104,7 @@
  private ThreadLocal> queryRet = new ThreadLocal<>();
   private ThreadLocal zoneIds = new ThreadLocal<>();
   private IoTDBConfig config = IoTDBDescriptor.getInstance().getConfig();
+  private QueryContext context;
 
 Review comment:
   Why isn't `QueryContext` a ThreadLocal like the other member variables in this class?
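   For illustration, a hedged sketch of the ThreadLocal style the question refers to (QueryContextStub and the surrounding class are hypothetical stand-ins, not the actual TSServiceImpl):
   ```
   public class ThreadLocalContextSketch {

     static class QueryContextStub {
       final long jobId;
       QueryContextStub(long jobId) { this.jobId = jobId; }
     }

     // one context per worker thread, like the other ThreadLocal members
     private final ThreadLocal<QueryContextStub> context = new ThreadLocal<>();

     void beginQuery(long jobId) {
       context.set(new QueryContextStub(jobId));
     }

     QueryContextStub currentContext() {
       return context.get();
     }

     void endQuery() {
       context.remove(); // avoid leaking state across pooled threads
     }
   }
   ```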


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 merged pull request #117: move all generated tsfiles for test into the target folder

2019-03-29 Thread GitBox
jixuan1989 merged pull request #117: move all generated tsfiles for test into 
the target folder
URL: https://github.com/apache/incubator-iotdb/pull/117
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 merged pull request #120: [IOTDB-70][IOTDB-71]fix jira issue 70 71

2019-03-29 Thread GitBox
jixuan1989 merged pull request #120: [IOTDB-70][IOTDB-71]fix jira issue 70 71
URL: https://github.com/apache/incubator-iotdb/pull/120
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 commented on issue #121: fix long size

2019-03-29 Thread GitBox
jixuan1989 commented on issue #121: fix long size
URL: https://github.com/apache/incubator-iotdb/pull/121#issuecomment-478032568
 
 
   +1.
   But this printf is meaningless for automated unit tests.
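   For illustration, a hedged JUnit 4 sketch of asserting the value instead of printing it (this is not the actual test in the PR):
   ```
   import static org.junit.Assert.assertEquals;

   import org.junit.Test;

   public class LongSizeSketchTest {

     @Test
     public void longSizeIsEightBytes() {
       // an assertion fails the build on regression, whereas a printf is silently ignored
       assertEquals(8, Long.BYTES);
     }
   }
   ```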


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jt2594838 commented on a change in pull request #123: provide unified query resource control interface

2019-04-02 Thread GitBox
jt2594838 commented on a change in pull request #123: provide unified query 
resource control interface
URL: https://github.com/apache/incubator-iotdb/pull/123#discussion_r271286958
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/service/TSServiceImpl.java
 ##
 @@ -104,6 +104,7 @@
  private ThreadLocal> queryRet = new ThreadLocal<>();
   private ThreadLocal zoneIds = new ThreadLocal<>();
   private IoTDBConfig config = IoTDBDescriptor.getInstance().getConfig();
+  private QueryContext context;
 
 Review comment:
   Fixed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] MyXOF commented on issue #128: fix a doc bug of Readme.md

2019-04-02 Thread GitBox
MyXOF commented on issue #128: fix a doc bug of Readme.md
URL: https://github.com/apache/incubator-iotdb/pull/128#issuecomment-478996507
 
 
   Thanks for your contribution!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] MyXOF merged pull request #128: fix a doc bug of Readme.md

2019-04-02 Thread GitBox
MyXOF merged pull request #128: fix a doc bug of Readme.md
URL: https://github.com/apache/incubator-iotdb/pull/128
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] MyXOF commented on issue #122: Warm log for Overflow Exception

2019-04-02 Thread GitBox
MyXOF commented on issue #122: Warm log for Overflow Exception
URL: https://github.com/apache/incubator-iotdb/pull/122#issuecomment-478998303
 
 
   Could anyone merge this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] liukun4515 opened a new pull request #122: Warm log for Overflow Exception

2019-03-31 Thread GitBox
liukun4515 opened a new pull request #122: Warm log for Overflow Exception
URL: https://github.com/apache/incubator-iotdb/pull/122
 
 
   Jira issue:
   https://issues.apache.org/jira/browse/IOTDB-44
   https://issues.apache.org/jira/browse/IOTDB-46


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jt2594838 opened a new pull request #123: provide unified query resource control interface

2019-03-31 Thread GitBox
jt2594838 opened a new pull request #123: provide unified query resource 
control interface
URL: https://github.com/apache/incubator-iotdb/pull/123
 
 
   Previously, query resource control involved calling methods in several different classes, which made maintenance difficult.
   In this PR, all methods needed to secure resources are put together into QueryResourceManager, and an execution order is provided in its JavaDoc.
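   For illustration, a hedged sketch of the facade idea described above; the class and method names are made up and do not reflect the actual QueryResourceManager API:
   ```
   import java.util.concurrent.atomic.AtomicLong;

   public class QueryResourceFacadeSketch {

     private static final QueryResourceFacadeSketch INSTANCE = new QueryResourceFacadeSketch();
     private final AtomicLong jobIdGenerator = new AtomicLong();

     private QueryResourceFacadeSketch() {
     }

     public static QueryResourceFacadeSketch getInstance() {
       return INSTANCE;
     }

     // step 1 of the documented order: register the query job
     public long beginQuery() {
       return jobIdGenerator.incrementAndGet();
     }

     // step 2: acquire readers / file handles for that job
     public void acquireResources(long jobId) {
       // bookkeeping omitted
     }

     // step 3: release everything in one place when the query ends
     public void endQuery(long jobId) {
       // release file handles, tokens, etc.
     }

     public static void main(String[] args) {
       QueryResourceFacadeSketch mgr = QueryResourceFacadeSketch.getInstance();
       long jobId = mgr.beginQuery();
       mgr.acquireResources(jobId);
       mgr.endQuery(jobId);
     }
   }
   ```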


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] liukun4515 merged pull request #122: Warm log for Overflow Exception

2019-04-02 Thread GitBox
liukun4515 merged pull request #122: Warm log for Overflow Exception
URL: https://github.com/apache/incubator-iotdb/pull/122
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] FLOW4215 opened a new pull request #129: [IOTDB-37]A WAL check tool script is desired

2019-04-03 Thread GitBox
FLOW4215 opened a new pull request #129: [IOTDB-37]A WAL check tool script is 
desired
URL: https://github.com/apache/incubator-iotdb/pull/129
 
 
   Upload the WAL checker start scripts into /iotdb/bin:
   start-WalChecker.sh and start-WalChecker.bat


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267610511
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/sync/receiver/ServerServiceImpl.java
 ##
 @@ -0,0 +1,740 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.receiver;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.conf.IoTDBConfig;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.conf.directories.Directories;
+import org.apache.iotdb.db.engine.filenode.FileNodeManager;
+import org.apache.iotdb.db.engine.filenode.OverflowChangeType;
+import org.apache.iotdb.db.engine.filenode.TsFileResource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.MetadataArgsErrorException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.metadata.MetadataConstant;
+import org.apache.iotdb.db.metadata.MetadataOperationType;
+import org.apache.iotdb.db.qp.executor.OverflowQPExecutor;
+import org.apache.iotdb.db.qp.physical.crud.InsertPlan;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetaData;
+import org.apache.iotdb.tsfile.file.metadata.ChunkMetaData;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadataIndex;
+import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.read.ReadOnlyTsFile;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.QueryExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ServerServiceImpl implements SyncService.Iface {
+
+  private static final Logger logger = LoggerFactory.getLogger(ServerServiceImpl.class);
+
+  private static final FileNodeManager fileNodeManager = FileNodeManager.getInstance();
+  /**
+   * Metadata manager
+   **/
+  private static final MManager metadataManger = MManager.getInstance();
+
+  private static final String SYNC_SERVER = Constans.SYNC_SERVER;
+
+  private ThreadLocal uuid = new ThreadLocal<>();
+  /**
+   * String means storage group,List means the set of new files(path) in local IoTDB and String
+   * means path of new Files
+   **/
+  private ThreadLocal>> fileNodeMap = new ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means startTime
+   **/
+  private ThreadLocal>> fileNodeStartTime = new ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means endTime
+   **/
+  private ThreadLocal>> fileNodeEndTime = new ThreadLocal<>();
+
+  /**
+   * Total num of files that needs to be loaded
+   */
+  private ThreadLocal fileNum = new ThreadLocal<>();
+
+  /**
+   * IoTDB config

[GitHub] [incubator-iotdb] jt2594838 commented on issue #103: fix channel close bug in merge process

2019-03-20 Thread GitBox
jt2594838 commented on issue #103: fix channel close bug in merge process
URL: https://github.com/apache/incubator-iotdb/pull/103#issuecomment-475097054
 
 
   LGTM


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267612594
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/sync/receiver/ServerServiceImpl.java
 ##
 @@ -0,0 +1,740 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.receiver;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.conf.IoTDBConfig;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.conf.directories.Directories;
+import org.apache.iotdb.db.engine.filenode.FileNodeManager;
+import org.apache.iotdb.db.engine.filenode.OverflowChangeType;
+import org.apache.iotdb.db.engine.filenode.TsFileResource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.MetadataArgsErrorException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.metadata.MetadataConstant;
+import org.apache.iotdb.db.metadata.MetadataOperationType;
+import org.apache.iotdb.db.qp.executor.OverflowQPExecutor;
+import org.apache.iotdb.db.qp.physical.crud.InsertPlan;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetaData;
+import org.apache.iotdb.tsfile.file.metadata.ChunkMetaData;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadataIndex;
+import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.read.ReadOnlyTsFile;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.QueryExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ServerServiceImpl implements SyncService.Iface {
+
+  private static final Logger logger = LoggerFactory.getLogger(ServerServiceImpl.class);
+
+  private static final FileNodeManager fileNodeManager = FileNodeManager.getInstance();
+  /**
+   * Metadata manager
+   **/
+  private static final MManager metadataManger = MManager.getInstance();
+
+  private static final String SYNC_SERVER = Constans.SYNC_SERVER;
+
+  private ThreadLocal uuid = new ThreadLocal<>();
+  /**
+   * String means storage group,List means the set of new files(path) in local IoTDB and String
+   * means path of new Files
+   **/
+  private ThreadLocal>> fileNodeMap = new ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means startTime
+   **/
+  private ThreadLocal>> fileNodeStartTime = new ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means endTime
+   **/
+  private ThreadLocal>> fileNodeEndTime = new ThreadLocal<>();
+
+  /**
+   * Total num of files that needs to be loaded
+   */
+  private ThreadLocal fileNum = new ThreadLocal<>();
+
+  /**
+   * IoTDB config

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #107: [IOTDB-50]fix bug IOTDB-50

2019-03-21 Thread GitBox
Beyyes commented on a change in pull request #107: [IOTDB-50]fix bug IOTDB-50
URL: https://github.com/apache/incubator-iotdb/pull/107#discussion_r267648143
 
 

 ##
 File path: tsfile/src/test/java/org/apache/iotdb/tsfile/read/ReadOnlyTsFileTest.java
 ##
 @@ -111,6 +106,38 @@ public void queryTest() throws IOException {
       count++;
     }
     Assert.assertEquals(101, count);
+  }
 
+  @Test
+  public void test2() throws InterruptedException, WriteProcessException, IOException {
+    int minRowCount = 1000, maxRowCount=10;
 
 Review comment:
   format


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #107: [IOTDB-50]fix bug IOTDB-50

2019-03-21 Thread GitBox
Beyyes commented on a change in pull request #107: [IOTDB-50]fix bug IOTDB-50
URL: https://github.com/apache/incubator-iotdb/pull/107#discussion_r267648247
 
 

 ##
 File path: tsfile/src/test/java/org/apache/iotdb/tsfile/utils/TsFileGeneratorForTest.java
 ##
 @@ -108,7 +113,7 @@ static private void generateSampleInputDataFile() throws IOException {
       if (i % 9 == 0) {
         d1 += ",s5," + "false";
       }
-      if (i % 10 == 0) {
+      if (i % 10 == 0 && i < minRowCount) {
 
 Review comment:
   Why add `minRowCount` and `maxRowCount`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 commented on issue #100: [IOTDB-39]Refactor auto repair for TsFile reader and TsFile writer

2019-03-20 Thread GitBox
jixuan1989 commented on issue #100: [IOTDB-39]Refactor auto repair for TsFile  
reader and TsFile writer
URL: https://github.com/apache/incubator-iotdb/pull/100#issuecomment-475086493
 
 
   > Hi @jixuan1989 your code looks good and my only point is very minor. But overall I do not like the structure that much from an architectural point of view. I dislike that the user has to decide whether he wants to use the "robust" reader / writer or not. It should be the other way round: the "robust" reader / writer should be the default, with the "raw" reader / writer underneath them. But we can solve this in a different issue perhaps. What do you think?
   
   +1.
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267608549
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/sync/receiver/ServerServiceImpl.java
 ##
 @@ -0,0 +1,740 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.receiver;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.conf.IoTDBConfig;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.conf.directories.Directories;
+import org.apache.iotdb.db.engine.filenode.FileNodeManager;
+import org.apache.iotdb.db.engine.filenode.OverflowChangeType;
+import org.apache.iotdb.db.engine.filenode.TsFileResource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.MetadataArgsErrorException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.metadata.MetadataConstant;
+import org.apache.iotdb.db.metadata.MetadataOperationType;
+import org.apache.iotdb.db.qp.executor.OverflowQPExecutor;
+import org.apache.iotdb.db.qp.physical.crud.InsertPlan;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetaData;
+import org.apache.iotdb.tsfile.file.metadata.ChunkMetaData;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadataIndex;
+import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.read.ReadOnlyTsFile;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.QueryExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ServerServiceImpl implements SyncService.Iface {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(ServerServiceImpl.class);
+
+  private static final FileNodeManager fileNodeManager = 
FileNodeManager.getInstance();
+  /**
+   * Metadata manager
+   **/
+  private static final MManager metadataManger = MManager.getInstance();
+
+  private static final String SYNC_SERVER = Constans.SYNC_SERVER;
+
+  private ThreadLocal uuid = new ThreadLocal<>();
+  /**
+   * String means storage group,List means the set of new files(path) in local 
IoTDB and String
+   * means path of new Files
+   **/
+  private ThreadLocal>> fileNodeMap = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
startTime
+   **/
+  private ThreadLocal>> fileNodeStartTime = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
endTime
+   **/
+  private ThreadLocal>> fileNodeEndTime = new 
ThreadLocal<>();
+
+  /**
+   * Total num of files that needs to be loaded
+   */
+  private ThreadLocal fileNum = new ThreadLocal<>();
+
+  /**
+   * IoTDB config
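
A minimal sketch of the per-connection state described by the Javadoc in the 
excerpt above, with generic types inferred from those comments (the exact type 
parameters are assumptions and may differ from the PR):

```java
import java.util.List;
import java.util.Map;

// Sketch only: each field is ThreadLocal so that every sync session (one thread
// per connected sender) keeps its own bookkeeping. Types are inferred from the
// Javadoc above and are assumptions, not the PR's exact declarations.
class SyncSessionStateSketch {
  // UUID identifying the sender of the current session
  private final ThreadLocal<String> uuid = new ThreadLocal<>();
  // storage group -> paths of the new files received in this session
  private final ThreadLocal<Map<String, List<String>>> fileNodeMap = new ThreadLocal<>();
  // timeseries -> (path of new file -> start time)
  private final ThreadLocal<Map<String, Map<String, Long>>> fileNodeStartTime = new ThreadLocal<>();
  // timeseries -> (path of new file -> end time)
  private final ThreadLocal<Map<String, Map<String, Long>>> fileNodeEndTime = new ThreadLocal<>();
  // total number of files that still need to be loaded
  private final ThreadLocal<Integer> fileNum = new ThreadLocal<>();
}
```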

[GitHub] [incubator-iotdb] jt2594838 commented on a change in pull request #105: [IOTDB-56] faster memtable.getSortedTimeValuePairList

2019-03-20 Thread GitBox
jt2594838 commented on a change in pull request #105: [IOTDB-56] faster 
memtable.getSortedTimeValuePairList
URL: https://github.com/apache/incubator-iotdb/pull/105#discussion_r267613440
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/engine/memtable/WritableMemChunk.java
 ##
 @@ -100,13 +101,16 @@ public void putBoolean(long t, boolean v) {
   // TODO: Consider using arrays to sort and remove duplicates
   public List getSortedTimeValuePairList() {
 int length = list.size();
-TreeMap treeMap = new TreeMap<>();
+
+Map treeMap = new HashMap<>(length, 1.0f);
 
 Review comment:
   You should rename this var too.
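
   For reference, a hedged sketch of the "deduplicate with a pre-sized HashMap, 
then sort once" pattern this diff moves to, with the map named for what it now 
is (the pair type below is a stand-in, not IoTDB's real TimeValuePair):

   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   // Later writes to the same timestamp overwrite earlier ones, and sorting happens
   // once at the end instead of on every insertion as a TreeMap would do.
   class DedupThenSortSketch {

     static class Pair { // stand-in for TimeValuePair
       final long time;
       final long value;
       Pair(long time, long value) { this.time = time; this.value = value; }
     }

     static List<Pair> sortedTimeValuePairs(List<Pair> raw) {
       // sized up front with load factor 1.0f so it should not rehash while filling
       Map<Long, Pair> deduplicatedMap = new HashMap<>(raw.size(), 1.0f);
       for (Pair p : raw) {
         deduplicatedMap.put(p.time, p); // keep the last value for each timestamp
       }
       List<Pair> sorted = new ArrayList<>(deduplicatedMap.values());
       sorted.sort((a, b) -> Long.compare(a.time, b.time));
       return sorted;
     }
   }
   ```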


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267612695
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/sender/FileSenderImpl.java
 ##
 @@ -0,0 +1,507 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.sender;
+
+import java.io.BufferedReader;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.math.BigInteger;
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileLock;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.exception.SyncConnectionException;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.sync.conf.SyncSenderConfig;
+import org.apache.iotdb.db.sync.conf.SyncSenderDescriptor;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.TSocket;
+import org.apache.thrift.transport.TTransport;
+import org.apache.thrift.transport.TTransportException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * FileSenderImpl is used to transfer tsfiles that needs to sync to receiver.
+ */
+public class FileSenderImpl implements FileSender {
+
+  private static final Logger LOGGER = 
LoggerFactory.getLogger(FileSenderImpl.class);
+  private TTransport transport;
+  private SyncService.Client serviceClient;
+  private List schema = new ArrayList<>();
+
+  /**
+   * Files that need to be synchronized
+   */
+  private Map> validAllFiles;
+
+  /**
+   * All tsfiles in data directory
+   **/
+  private Map> currentLocalFiles;
+
+  /**
+   * Mark the start time of last sync
+   **/
+  private Date lastSyncTime = new Date();
+
+  /**
+   * If true, sync is in execution.
+   **/
+  private volatile boolean syncStatus = false;
+
+  /**
+   * Key means storage group, Set means corresponding tsfiles
+   **/
+  private Map> validFileSnapshot = new HashMap<>();
+
+  private FileManager fileManager = FileManager.getInstance();
+  private SyncSenderConfig config = 
SyncSenderDescriptor.getInstance().getConfig();
+
+  /**
+   * Monitor sync status.
+   */
+  private final Runnable monitorSyncStatus = () -> {
+Date oldTime = new Date();
+while (!Thread.interrupted()) {
+  Date currentTime = new Date();
+  if (currentTime.getTime() / 1000 == oldTime.getTime() / 1000) {
+continue;
+  }
+  if ((currentTime.getTime() - lastSyncTime.getTime())
+  % (config.getUploadCycleInSeconds() * 1000) == 0) {
+oldTime = currentTime;
+if (syncStatus) {
+  LOGGER.info("Sync process is in execution!");
+}
+  }
+}
+  };
+
+  private FileSenderImpl() {
+  }
+
+  public static final FileSenderImpl getInstance() {
+return InstanceHolder.INSTANCE;
+  }
+
+  /**
+   * Create a sender and sync files to the receiver.
+   *
+   * @param args not used
+   */
+  public static void main(String[] args)
+  throws InterruptedException, IOException, SyncConnectionException {
+Thread.currentThread().setName(ThreadName.SYNC_CLIENT.getName());
+FileSenderImpl fileSenderImpl = new FileSenderImpl();
+fileSenderImpl.verifySingleton();
+fileSenderImpl.startMonitor();
+fileSenderImpl.timedTask();
+  }
+
+  /**
+   * Start Monitor Thread, monitor sync status
+   */
+  public void startMonitor() {
+Thread syncMonitor = new 

[GitHub] [incubator-iotdb] little-emotion opened a new pull request #107: [IOTDB-50]fix bug IOTDB-50

2019-03-21 Thread GitBox
little-emotion opened a new pull request #107: [IOTDB-50]fix bug IOTDB-50
URL: https://github.com/apache/incubator-iotdb/pull/107
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 commented on a change in pull request #100: [IOTDB-39]Refactor auto repair for TsFile reader and TsFile writer

2019-03-20 Thread GitBox
jixuan1989 commented on a change in pull request #100: [IOTDB-39]Refactor auto 
repair for TsFile  reader and TsFile writer
URL: https://github.com/apache/incubator-iotdb/pull/100#discussion_r267603651
 
 

 ##
 File path: 
tsfile/src/test/java/org/apache/iotdb/tsfile/utils/TsFileGeneratorForTest.java
 ##
 @@ -159,10 +159,33 @@ static public void write() throws IOException, 
InterruptedException, WriteProces
 innerWriter = new TsFileWriter(file, schema, 
TSFileDescriptor.getInstance().getConfig());
 
 
 Review comment:
   Indeed, we can merge these two classes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267612594
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/receiver/ServerServiceImpl.java
 ##
 @@ -0,0 +1,740 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.receiver;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.conf.IoTDBConfig;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.conf.directories.Directories;
+import org.apache.iotdb.db.engine.filenode.FileNodeManager;
+import org.apache.iotdb.db.engine.filenode.OverflowChangeType;
+import org.apache.iotdb.db.engine.filenode.TsFileResource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.MetadataArgsErrorException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.metadata.MetadataConstant;
+import org.apache.iotdb.db.metadata.MetadataOperationType;
+import org.apache.iotdb.db.qp.executor.OverflowQPExecutor;
+import org.apache.iotdb.db.qp.physical.crud.InsertPlan;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetaData;
+import org.apache.iotdb.tsfile.file.metadata.ChunkMetaData;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadataIndex;
+import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.read.ReadOnlyTsFile;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.QueryExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ServerServiceImpl implements SyncService.Iface {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(ServerServiceImpl.class);
+
+  private static final FileNodeManager fileNodeManager = 
FileNodeManager.getInstance();
+  /**
+   * Metadata manager
+   **/
+  private static final MManager metadataManger = MManager.getInstance();
+
+  private static final String SYNC_SERVER = Constans.SYNC_SERVER;
+
+  private ThreadLocal uuid = new ThreadLocal<>();
+  /**
+   * String means storage group,List means the set of new files(path) in local 
IoTDB and String
+   * means path of new Files
+   **/
+  private ThreadLocal>> fileNodeMap = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
startTime
+   **/
+  private ThreadLocal>> fileNodeStartTime = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
endTime
+   **/
+  private ThreadLocal>> fileNodeEndTime = new 
ThreadLocal<>();
+
+  /**
+   * Total num of files that needs to be loaded
+   */
+  private ThreadLocal fileNum = new ThreadLocal<>();
+
+  /**
+   * IoTDB config

[GitHub] [incubator-iotdb] fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267613015
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/sender/FileSenderImpl.java
 ##
 @@ -0,0 +1,507 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.sender;
+
+import java.io.BufferedReader;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.math.BigInteger;
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileLock;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.exception.SyncConnectionException;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.sync.conf.SyncSenderConfig;
+import org.apache.iotdb.db.sync.conf.SyncSenderDescriptor;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.TSocket;
+import org.apache.thrift.transport.TTransport;
+import org.apache.thrift.transport.TTransportException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * FileSenderImpl is used to transfer tsfiles that needs to sync to receiver.
+ */
+public class FileSenderImpl implements FileSender {
+
+  private static final Logger LOGGER = 
LoggerFactory.getLogger(FileSenderImpl.class);
+  private TTransport transport;
+  private SyncService.Client serviceClient;
+  private List schema = new ArrayList<>();
+
+  /**
+   * Files that need to be synchronized
+   */
+  private Map> validAllFiles;
+
+  /**
+   * All tsfiles in data directory
+   **/
+  private Map> currentLocalFiles;
+
+  /**
+   * Mark the start time of last sync
+   **/
+  private Date lastSyncTime = new Date();
+
+  /**
+   * If true, sync is in execution.
+   **/
+  private volatile boolean syncStatus = false;
+
+  /**
+   * Key means storage group, Set means corresponding tsfiles
+   **/
+  private Map> validFileSnapshot = new HashMap<>();
+
+  private FileManager fileManager = FileManager.getInstance();
+  private SyncSenderConfig config = 
SyncSenderDescriptor.getInstance().getConfig();
+
+  /**
+   * Monitor sync status.
+   */
+  private final Runnable monitorSyncStatus = () -> {
+Date oldTime = new Date();
+while (!Thread.interrupted()) {
+  Date currentTime = new Date();
+  if (currentTime.getTime() / 1000 == oldTime.getTime() / 1000) {
+continue;
+  }
+  if ((currentTime.getTime() - lastSyncTime.getTime())
+  % (config.getUploadCycleInSeconds() * 1000) == 0) {
+oldTime = currentTime;
+if (syncStatus) {
+  LOGGER.info("Sync process is in execution!");
+}
+  }
+}
+  };
+
+  private FileSenderImpl() {
+  }
+
+  public static final FileSenderImpl getInstance() {
+return InstanceHolder.INSTANCE;
+  }
+
+  /**
+   * Create a sender and sync files to the receiver.
+   *
+   * @param args not used
+   */
+  public static void main(String[] args)
+  throws InterruptedException, IOException, SyncConnectionException {
+Thread.currentThread().setName(ThreadName.SYNC_CLIENT.getName());
+FileSenderImpl fileSenderImpl = new FileSenderImpl();
+fileSenderImpl.verifySingleton();
+fileSenderImpl.startMonitor();
+fileSenderImpl.timedTask();
+  }
+
+  /**
+   * Start Monitor Thread, monitor sync status
+   */
+  public void startMonitor() {
+Thread syncMonitor = new 

[GitHub] [incubator-iotdb] jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267607779
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/receiver/ServerServiceImpl.java
 ##
 @@ -0,0 +1,740 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.receiver;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.conf.IoTDBConfig;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.conf.directories.Directories;
+import org.apache.iotdb.db.engine.filenode.FileNodeManager;
+import org.apache.iotdb.db.engine.filenode.OverflowChangeType;
+import org.apache.iotdb.db.engine.filenode.TsFileResource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.MetadataArgsErrorException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.metadata.MetadataConstant;
+import org.apache.iotdb.db.metadata.MetadataOperationType;
+import org.apache.iotdb.db.qp.executor.OverflowQPExecutor;
+import org.apache.iotdb.db.qp.physical.crud.InsertPlan;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetaData;
+import org.apache.iotdb.tsfile.file.metadata.ChunkMetaData;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadataIndex;
+import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.read.ReadOnlyTsFile;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.QueryExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ServerServiceImpl implements SyncService.Iface {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(ServerServiceImpl.class);
+
+  private static final FileNodeManager fileNodeManager = 
FileNodeManager.getInstance();
+  /**
+   * Metadata manager
+   **/
+  private static final MManager metadataManger = MManager.getInstance();
+
+  private static final String SYNC_SERVER = Constans.SYNC_SERVER;
+
+  private ThreadLocal uuid = new ThreadLocal<>();
+  /**
+   * String means storage group,List means the set of new files(path) in local 
IoTDB and String
+   * means path of new Files
+   **/
+  private ThreadLocal>> fileNodeMap = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
startTime
+   **/
+  private ThreadLocal>> fileNodeStartTime = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
endTime
+   **/
+  private ThreadLocal>> fileNodeEndTime = new 
ThreadLocal<>();
+
+  /**
+   * Total num of files that needs to be loaded
+   */
+  private ThreadLocal fileNum = new ThreadLocal<>();
+
+  /**
+   * IoTDB config

[GitHub] [incubator-iotdb] jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267609452
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/sender/FileSenderImpl.java
 ##
 @@ -0,0 +1,507 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.sender;
+
+import java.io.BufferedReader;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.math.BigInteger;
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileLock;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.exception.SyncConnectionException;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.sync.conf.SyncSenderConfig;
+import org.apache.iotdb.db.sync.conf.SyncSenderDescriptor;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.TSocket;
+import org.apache.thrift.transport.TTransport;
+import org.apache.thrift.transport.TTransportException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * FileSenderImpl is used to transfer tsfiles that needs to sync to receiver.
+ */
+public class FileSenderImpl implements FileSender {
+
+  private static final Logger LOGGER = 
LoggerFactory.getLogger(FileSenderImpl.class);
+  private TTransport transport;
+  private SyncService.Client serviceClient;
+  private List schema = new ArrayList<>();
+
+  /**
+   * Files that need to be synchronized
+   */
+  private Map> validAllFiles;
+
+  /**
+   * All tsfiles in data directory
+   **/
+  private Map> currentLocalFiles;
+
+  /**
+   * Mark the start time of last sync
+   **/
+  private Date lastSyncTime = new Date();
+
+  /**
+   * If true, sync is in execution.
+   **/
+  private volatile boolean syncStatus = false;
+
+  /**
+   * Key means storage group, Set means corresponding tsfiles
+   **/
+  private Map> validFileSnapshot = new HashMap<>();
+
+  private FileManager fileManager = FileManager.getInstance();
+  private SyncSenderConfig config = 
SyncSenderDescriptor.getInstance().getConfig();
+
+  /**
+   * Monitor sync status.
+   */
+  private final Runnable monitorSyncStatus = () -> {
+Date oldTime = new Date();
+while (!Thread.interrupted()) {
+  Date currentTime = new Date();
+  if (currentTime.getTime() / 1000 == oldTime.getTime() / 1000) {
+continue;
+  }
+  if ((currentTime.getTime() - lastSyncTime.getTime())
+  % (config.getUploadCycleInSeconds() * 1000) == 0) {
+oldTime = currentTime;
+if (syncStatus) {
+  LOGGER.info("Sync process is in execution!");
+}
+  }
+}
+  };
+
+  private FileSenderImpl() {
+  }
+
+  public static final FileSenderImpl getInstance() {
+return InstanceHolder.INSTANCE;
+  }
+
+  /**
+   * Create a sender and sync files to the receiver.
+   *
+   * @param args not used
+   */
+  public static void main(String[] args)
+  throws InterruptedException, IOException, SyncConnectionException {
+Thread.currentThread().setName(ThreadName.SYNC_CLIENT.getName());
+FileSenderImpl fileSenderImpl = new FileSenderImpl();
+fileSenderImpl.verifySingleton();
+fileSenderImpl.startMonitor();
+fileSenderImpl.timedTask();
+  }
+
+  /**
+   * Start Monitor Thread, monitor sync status
+   */
+  public void startMonitor() {
+Thread syncMonitor = new 

[GitHub] [incubator-iotdb] jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267609279
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/sender/FileSenderImpl.java
 ##
 @@ -0,0 +1,507 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.sender;
+
+import java.io.BufferedReader;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.math.BigInteger;
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileLock;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.exception.SyncConnectionException;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.sync.conf.SyncSenderConfig;
+import org.apache.iotdb.db.sync.conf.SyncSenderDescriptor;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.TSocket;
+import org.apache.thrift.transport.TTransport;
+import org.apache.thrift.transport.TTransportException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * FileSenderImpl is used to transfer tsfiles that needs to sync to receiver.
+ */
+public class FileSenderImpl implements FileSender {
+
+  private static final Logger LOGGER = 
LoggerFactory.getLogger(FileSenderImpl.class);
+  private TTransport transport;
+  private SyncService.Client serviceClient;
+  private List schema = new ArrayList<>();
+
+  /**
+   * Files that need to be synchronized
+   */
+  private Map> validAllFiles;
+
+  /**
+   * All tsfiles in data directory
+   **/
+  private Map> currentLocalFiles;
+
+  /**
+   * Mark the start time of last sync
+   **/
+  private Date lastSyncTime = new Date();
+
+  /**
+   * If true, sync is in execution.
+   **/
+  private volatile boolean syncStatus = false;
+
+  /**
+   * Key means storage group, Set means corresponding tsfiles
+   **/
+  private Map> validFileSnapshot = new HashMap<>();
+
+  private FileManager fileManager = FileManager.getInstance();
+  private SyncSenderConfig config = 
SyncSenderDescriptor.getInstance().getConfig();
+
+  /**
+   * Monitor sync status.
+   */
+  private final Runnable monitorSyncStatus = () -> {
+Date oldTime = new Date();
+while (!Thread.interrupted()) {
+  Date currentTime = new Date();
+  if (currentTime.getTime() / 1000 == oldTime.getTime() / 1000) {
+continue;
+  }
+  if ((currentTime.getTime() - lastSyncTime.getTime())
+  % (config.getUploadCycleInSeconds() * 1000) == 0) {
+oldTime = currentTime;
+if (syncStatus) {
+  LOGGER.info("Sync process is in execution!");
+}
+  }
+}
+  };
+
+  private FileSenderImpl() {
+  }
+
+  public static final FileSenderImpl getInstance() {
+return InstanceHolder.INSTANCE;
+  }
+
+  /**
+   * Create a sender and sync files to the receiver.
+   *
+   * @param args not used
+   */
+  public static void main(String[] args)
+  throws InterruptedException, IOException, SyncConnectionException {
+Thread.currentThread().setName(ThreadName.SYNC_CLIENT.getName());
+FileSenderImpl fileSenderImpl = new FileSenderImpl();
+fileSenderImpl.verifySingleton();
+fileSenderImpl.startMonitor();
+fileSenderImpl.timedTask();
+  }
+
+  /**
+   * Start Monitor Thread, monitor sync status
+   */
+  public void startMonitor() {
+Thread syncMonitor = new 

[GitHub] [incubator-iotdb] fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267615605
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/receiver/ServerServiceImpl.java
 ##
 @@ -0,0 +1,740 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.receiver;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.conf.IoTDBConfig;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.conf.directories.Directories;
+import org.apache.iotdb.db.engine.filenode.FileNodeManager;
+import org.apache.iotdb.db.engine.filenode.OverflowChangeType;
+import org.apache.iotdb.db.engine.filenode.TsFileResource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.MetadataArgsErrorException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.metadata.MetadataConstant;
+import org.apache.iotdb.db.metadata.MetadataOperationType;
+import org.apache.iotdb.db.qp.executor.OverflowQPExecutor;
+import org.apache.iotdb.db.qp.physical.crud.InsertPlan;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetaData;
+import org.apache.iotdb.tsfile.file.metadata.ChunkMetaData;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadataIndex;
+import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.read.ReadOnlyTsFile;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.QueryExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ServerServiceImpl implements SyncService.Iface {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(ServerServiceImpl.class);
+
+  private static final FileNodeManager fileNodeManager = 
FileNodeManager.getInstance();
+  /**
+   * Metadata manager
+   **/
+  private static final MManager metadataManger = MManager.getInstance();
+
+  private static final String SYNC_SERVER = Constans.SYNC_SERVER;
+
+  private ThreadLocal uuid = new ThreadLocal<>();
+  /**
+   * String means storage group,List means the set of new files(path) in local 
IoTDB and String
+   * means path of new Files
+   **/
+  private ThreadLocal>> fileNodeMap = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
startTime
+   **/
+  private ThreadLocal>> fileNodeStartTime = new 
ThreadLocal<>();
+  /**
+   * Map String1 means timeseries String2 means path of new Files, long means 
endTime
+   **/
+  private ThreadLocal>> fileNodeEndTime = new 
ThreadLocal<>();
+
+  /**
+   * Total num of files that needs to be loaded
+   */
+  private ThreadLocal fileNum = new ThreadLocal<>();
+
+  /**
+   * IoTDB config

[GitHub] [incubator-iotdb] jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
jt2594838 commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267607646
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/receiver/ServerServiceImpl.java
 ##
 @@ -0,0 +1,740 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.receiver;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Objects;
+import java.util.Set;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.conf.IoTDBConfig;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.conf.directories.Directories;
+import org.apache.iotdb.db.engine.filenode.FileNodeManager;
+import org.apache.iotdb.db.engine.filenode.OverflowChangeType;
+import org.apache.iotdb.db.engine.filenode.TsFileResource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.MetadataArgsErrorException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.metadata.MetadataConstant;
+import org.apache.iotdb.db.metadata.MetadataOperationType;
+import org.apache.iotdb.db.qp.executor.OverflowQPExecutor;
+import org.apache.iotdb.db.qp.physical.crud.InsertPlan;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.iotdb.tsfile.file.metadata.ChunkGroupMetaData;
+import org.apache.iotdb.tsfile.file.metadata.ChunkMetaData;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadata;
+import org.apache.iotdb.tsfile.file.metadata.TsDeviceMetadataIndex;
+import org.apache.iotdb.tsfile.file.metadata.enums.CompressionType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSEncoding;
+import org.apache.iotdb.tsfile.read.ReadOnlyTsFile;
+import org.apache.iotdb.tsfile.read.TsFileSequenceReader;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.QueryExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ServerServiceImpl implements SyncService.Iface {
+
+  private static final Logger logger = 
LoggerFactory.getLogger(ServerServiceImpl.class);
+
+  private static final FileNodeManager fileNodeManager = 
FileNodeManager.getInstance();
+  /**
+   * Metadata manager
+   **/
+  private static final MManager metadataManger = MManager.getInstance();
+
+  private static final String SYNC_SERVER = Constans.SYNC_SERVER;
+
+  private ThreadLocal uuid = new ThreadLocal<>();
+  /**
+   * String means storage group,List means the set of new files(path) in local 
IoTDB and String
+   * means path of new Files
+   **/
+  private ThreadLocal<Map<String, List<String>>> fileNodeMap = new ThreadLocal<>();
+  /**
+   * Map: the first String is a timeseries, the second String is the path of a
+   * new file, and the long value is the start time.
+   **/
+  private ThreadLocal<Map<String, Map<String, Long>>> fileNodeStartTime = new ThreadLocal<>();
+  /**
+   * Map: the first String is a timeseries, the second String is the path of a
+   * new file, and the long value is the end time.
+   **/
+  private ThreadLocal<Map<String, Map<String, Long>>> fileNodeEndTime = new ThreadLocal<>();
+
+  /**
+   * Total number of files that need to be loaded
+   */
+  private ThreadLocal<Integer> fileNum = new ThreadLocal<>();
+
+  /**
+   * IoTDB config

[GitHub] [incubator-iotdb] fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement post-back module, rename it to synchronization module

2019-03-20 Thread GitBox
fanhualta commented on a change in pull request #101: [IOTDB-51]Reimplement 
post-back module, rename it to synchronization module
URL: https://github.com/apache/incubator-iotdb/pull/101#discussion_r267613873
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/sync/sender/FileSenderImpl.java
 ##
 @@ -0,0 +1,507 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iotdb.db.sync.sender;
+
+import java.io.BufferedReader;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.math.BigInteger;
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileLock;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.UUID;
+import org.apache.iotdb.db.concurrent.ThreadName;
+import org.apache.iotdb.db.exception.SyncConnectionException;
+import org.apache.iotdb.db.sync.conf.Constans;
+import org.apache.iotdb.db.sync.conf.SyncSenderConfig;
+import org.apache.iotdb.db.sync.conf.SyncSenderDescriptor;
+import org.apache.iotdb.db.utils.SyncUtils;
+import org.apache.iotdb.service.sync.thrift.SyncDataStatus;
+import org.apache.iotdb.service.sync.thrift.SyncService;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TProtocol;
+import org.apache.thrift.transport.TSocket;
+import org.apache.thrift.transport.TTransport;
+import org.apache.thrift.transport.TTransportException;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * FileSenderImpl is used to transfer TsFiles that need to be synced to the receiver.
+ */
+public class FileSenderImpl implements FileSender {
+
+  private static final Logger LOGGER = 
LoggerFactory.getLogger(FileSenderImpl.class);
+  private TTransport transport;
+  private SyncService.Client serviceClient;
+  private List schema = new ArrayList<>();
+
+  /**
+   * Files that need to be synchronized
+   */
+  private Map<String, Set<String>> validAllFiles;
+
+  /**
+   * All tsfiles in data directory
+   **/
+  private Map<String, Set<String>> currentLocalFiles;
+
+  /**
+   * Mark the start time of last sync
+   **/
+  private Date lastSyncTime = new Date();
+
+  /**
+   * If true, sync is in execution.
+   **/
+  private volatile boolean syncStatus = false;
+
+  /**
+   * Key means storage group, Set means corresponding tsfiles
+   **/
+  private Map<String, Set<String>> validFileSnapshot = new HashMap<>();
+
+  private FileManager fileManager = FileManager.getInstance();
+  private SyncSenderConfig config = 
SyncSenderDescriptor.getInstance().getConfig();
+
+  /**
+   * Monitor sync status.
+   */
+  private final Runnable monitorSyncStatus = () -> {
+Date oldTime = new Date();
+while (!Thread.interrupted()) {
+  Date currentTime = new Date();
+  if (currentTime.getTime() / 1000 == oldTime.getTime() / 1000) {
+continue;
+  }
+  if ((currentTime.getTime() - lastSyncTime.getTime())
+  % (config.getUploadCycleInSeconds() * 1000) == 0) {
+oldTime = currentTime;
+if (syncStatus) {
+  LOGGER.info("Sync process is in execution!");
+}
+  }
+}
+  };
+
+  private FileSenderImpl() {
+  }
+
+  public static final FileSenderImpl getInstance() {
+return InstanceHolder.INSTANCE;
+  }
+
+  /**
+   * Create a sender and sync files to the receiver.
+   *
+   * @param args not used
+   */
+  public static void main(String[] args)
+  throws InterruptedException, IOException, SyncConnectionException {
+Thread.currentThread().setName(ThreadName.SYNC_CLIENT.getName());
+FileSenderImpl fileSenderImpl = new FileSenderImpl();
+fileSenderImpl.verifySingleton();
+fileSenderImpl.startMonitor();
+fileSenderImpl.timedTask();
+  }
+
+  /**
+   * Start Monitor Thread, monitor sync status
+   */
+  public void startMonitor() {
+Thread syncMonitor = new 

[GitHub] [incubator-iotdb] jixuan1989 opened a new pull request #111: try to release memory asap in ReadOnlyMemChunk

2019-03-25 Thread GitBox
jixuan1989 opened a new pull request #111: try to release memory asap in 
ReadOnlyMemChunk
URL: https://github.com/apache/incubator-iotdb/pull/111
 
 
   In `ReadOnlyMemChunk`, the memSeries field is no longer needed after `init()` 
has been called, so I set it to null to let the GC collect it as soon as 
possible. A minimal sketch of the pattern is shown below.
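   A minimal sketch of the pattern, assuming hypothetical field and type names 
(not the actual ReadOnlyMemChunk code): copy whatever later reads need inside 
`init()`, then clear the reference to the large source object so it becomes 
eligible for garbage collection.
   ```java
// Sketch only: releases a large source reference once init() has copied out
// what is needed; this is not the real ReadOnlyMemChunk implementation.
import java.util.List;

public class ReadOnlyChunkSketch {

  private List<Long> memSeries;   // large mutable source (hypothetical)
  private long[] timestamps;      // compact copy used by later reads

  public ReadOnlyChunkSketch(List<Long> memSeries) {
    this.memSeries = memSeries;
    init();
  }

  private void init() {
    // copy only what later reads actually need
    timestamps = memSeries.stream().mapToLong(Long::longValue).toArray();
    // memSeries is useless from here on; drop the reference so the GC
    // can reclaim it as soon as possible
    memSeries = null;
  }
}
   ```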


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] LeiRui commented on a change in pull request #110: change FileChannel to TsFileInput for the future use of tsfile-spark-conn…

2019-03-25 Thread GitBox
LeiRui commented on a change in pull request #110: change FileChannel to 
TsFileInput for the future use of tsfile-spark-conn…
URL: https://github.com/apache/incubator-iotdb/pull/110#discussion_r268928814
 
 

 ##
 File path: 
tsfile/src/test/java/org/apache/iotdb/tsfile/file/header/PageHeaderTest.java
 ##
 @@ -119,8 +122,8 @@ private PageHeader deSerialized(int offset) {
 FileInputStream fis = null;
 PageHeader header = null;
 try {
-  fis = new FileInputStream(new File(PATH));
-  header = PageHeader.deserializeFrom(DATA_TYPE, fis.getChannel(), offset, 
true);
+  TsFileInput input = new DefaultTsFileInput(Paths.get(PATH));
 
 Review comment:
   JIRA IoTDB-63: Use TsFileInput instead of FileChannel as the input parameter 
of some functions


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on issue #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-25 Thread GitBox
Beyyes commented on issue #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#issuecomment-476439927
 
 
   Aggregate and group-by performance experiment results should be shown.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 commented on a change in pull request #110: change FileChannel to TsFileInput for the future use of tsfile-spark-conn…

2019-03-25 Thread GitBox
jixuan1989 commented on a change in pull request #110: change FileChannel to 
TsFileInput for the future use of tsfile-spark-conn…
URL: https://github.com/apache/incubator-iotdb/pull/110#discussion_r268908647
 
 

 ##
 File path: 
tsfile/src/test/java/org/apache/iotdb/tsfile/file/header/PageHeaderTest.java
 ##
 @@ -119,8 +122,8 @@ private PageHeader deSerialized(int offset) {
 FileInputStream fis = null;
 PageHeader header = null;
 try {
-  fis = new FileInputStream(new File(PATH));
-  header = PageHeader.deserializeFrom(DATA_TYPE, fis.getChannel(), offset, 
true);
+  TsFileInput input = new DefaultTsFileInput(Paths.get(PATH));
 
 Review comment:
   Hm... But the DefaultTsFileInput class actually uses a FileChannel... 
   So, will you override it in tsfile-spark?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] LeiRui commented on a change in pull request #110: change FileChannel to TsFileInput for the future use of tsfile-spark-conn…

2019-03-25 Thread GitBox
LeiRui commented on a change in pull request #110: change FileChannel to 
TsFileInput for the future use of tsfile-spark-conn…
URL: https://github.com/apache/incubator-iotdb/pull/110#discussion_r268917883
 
 

 ##
 File path: 
tsfile/src/test/java/org/apache/iotdb/tsfile/file/header/PageHeaderTest.java
 ##
 @@ -119,8 +122,8 @@ private PageHeader deSerialized(int offset) {
 FileInputStream fis = null;
 PageHeader header = null;
 try {
-  fis = new FileInputStream(new File(PATH));
-  header = PageHeader.deserializeFrom(DATA_TYPE, fis.getChannel(), offset, 
true);
+  TsFileInput input = new DefaultTsFileInput(Paths.get(PATH));
 
 Review comment:
   Yes. In TsFile-Spark-Connector I use HDFSInput to implement TsFileInput. In 
HDFSInput, FSDataInputStream and FileStatus are used instead of FileChannel to 
implement the TsFileInput interface. That's why I need to change FileChannel to 
TsFileInput in these functions. A sketch of the idea is below.
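   A hedged sketch, assuming a simplified TsFileInput-like interface (the real 
tsfile interface has more methods): FileStatus answers size(), and 
FSDataInputStream's positional read serves random-access reads, so no 
FileChannel is needed.
   ```java
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Simplified stand-in for TsFileInput (illustration only). */
interface SimpleTsFileInput {
  long size() throws IOException;
  int read(ByteBuffer dst, long position) throws IOException;
}

/** HDFS-backed input: FileStatus supplies the length, FSDataInputStream
 *  supplies positional reads; no FileChannel is involved. */
class HdfsInputSketch implements SimpleTsFileInput {

  private final FSDataInputStream in;
  private final FileStatus status;

  HdfsInputSketch(String hdfsPath) throws IOException {
    FileSystem fs = FileSystem.get(new Configuration());
    Path path = new Path(hdfsPath);
    this.in = fs.open(path);
    this.status = fs.getFileStatus(path);
  }

  @Override
  public long size() {
    return status.getLen();
  }

  @Override
  public int read(ByteBuffer dst, long position) throws IOException {
    // positional read: does not move the stream's current offset
    byte[] buf = new byte[dst.remaining()];
    int n = in.read(position, buf, 0, buf.length);
    if (n > 0) {
      dst.put(buf, 0, n);
    }
    return n;
  }
}
   ```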


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 commented on a change in pull request #110: change FileChannel to TsFileInput for the future use of tsfile-spark-conn…

2019-03-25 Thread GitBox
jixuan1989 commented on a change in pull request #110: change FileChannel to 
TsFileInput for the future use of tsfile-spark-conn…
URL: https://github.com/apache/incubator-iotdb/pull/110#discussion_r268925842
 
 

 ##
 File path: 
tsfile/src/test/java/org/apache/iotdb/tsfile/file/header/PageHeaderTest.java
 ##
 @@ -119,8 +122,8 @@ private PageHeader deSerialized(int offset) {
 FileInputStream fis = null;
 PageHeader header = null;
 try {
-  fis = new FileInputStream(new File(PATH));
-  header = PageHeader.deserializeFrom(DATA_TYPE, fis.getChannel(), offset, 
true);
+  TsFileInput input = new DefaultTsFileInput(Paths.get(PATH));
 
 Review comment:
   +1. But I suggest that you record this requirement on JIRA.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 opened a new pull request #112: [IOTDB-62]set SQL parse error as debug level log; fix some sonar issues

2019-03-25 Thread GitBox
jixuan1989 opened a new pull request #112: [IOTDB-62]set SQL parse error as 
debug level log; fix some sonar issues
URL: https://github.com/apache/incubator-iotdb/pull/112
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] mdf369 commented on a change in pull request #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
mdf369 commented on a change in pull request #114: add check SG for MManager, 
MGraph and MTree
URL: https://github.com/apache/incubator-iotdb/pull/114#discussion_r269486551
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/metadata/MTree.java
 ##
 @@ -212,6 +212,34 @@ public void setStorageGroup(String path) throws 
PathErrorException {
 setDataFileName(path, cur);
   }
 
+  /**
+   * check whether the input path is storage group or not
+   * @param path input path
+   * @return if it is storage group, return true. Else return false
+   */
+  public boolean checkStorageGroup(String path) {
+String[] nodeNames = path.split(DOUB_SEPARATOR);
+MNode cur = root;
+if (nodeNames.length <= 1 || !nodeNames[0].equals(root.getName())) {
 
 Review comment:
   I see, but the mentioned statement is only executed when "nodeNames.length > 1", 
so nodeNames[0] will not be null either. The order is the same as in many other 
implementations in MTree, so it should be fine; the evaluation order is restated 
below.
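   In other words, a restatement of the quoted condition with a note on the 
evaluation order (no new code):
   ```java
// With ||, the right-hand operand only runs when the left-hand one is false,
// i.e. when nodeNames.length > 1, so nodeNames[0] exists and is non-null here
// (String.split never produces null elements).
if (nodeNames.length <= 1 || !nodeNames[0].equals(root.getName())) {
  return false;
}
   ```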


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] MyXOF merged pull request #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
MyXOF merged pull request #114: add check SG for MManager, MGraph and MTree
URL: https://github.com/apache/incubator-iotdb/pull/114
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] MyXOF commented on a change in pull request #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
MyXOF commented on a change in pull request #114: add check SG for MManager, 
MGraph and MTree
URL: https://github.com/apache/incubator-iotdb/pull/114#discussion_r269478041
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/metadata/MTree.java
 ##
 @@ -212,6 +212,34 @@ public void setStorageGroup(String path) throws 
PathErrorException {
 setDataFileName(path, cur);
   }
 
+  /**
+   * check whether the input path is storage group or not
+   * @param path input path
+   * @return if it is storage group, return true. Else return false
+   */
+  public boolean checkStorageGroup(String path) {
+String[] nodeNames = path.split(DOUB_SEPARATOR);
+MNode cur = root;
+if (nodeNames.length <= 1 || !nodeNames[0].equals(root.getName())) {
+  return false;
+}
+int i = 1;
+while (i < nodeNames.length - 1) {
+  MNode temp = cur.getChild(nodeNames[i]);
+  if (temp == null && temp.isStorageLevel()) {
 
 Review comment:
   If temp is null, how can isStorageLevel() be invoked on it? That condition 
would throw a NullPointerException.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] MyXOF commented on a change in pull request #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
MyXOF commented on a change in pull request #114: add check SG for MManager, 
MGraph and MTree
URL: https://github.com/apache/incubator-iotdb/pull/114#discussion_r269477758
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/metadata/MTree.java
 ##
 @@ -212,6 +212,34 @@ public void setStorageGroup(String path) throws 
PathErrorException {
 setDataFileName(path, cur);
   }
 
+  /**
+   * check whether the input path is storage group or not
+   * @param path input path
+   * @return if it is storage group, return true. Else return false
+   */
+  public boolean checkStorageGroup(String path) {
+String[] nodeNames = path.split(DOUB_SEPARATOR);
+MNode cur = root;
+if (nodeNames.length <= 1 || !nodeNames[0].equals(root.getName())) {
 
 Review comment:
   Is `!root.getName().equals(nodeNames[0])` better, 
   since root.getName() is not null?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] mdf369 commented on a change in pull request #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
mdf369 commented on a change in pull request #114: add check SG for MManager, 
MGraph and MTree
URL: https://github.com/apache/incubator-iotdb/pull/114#discussion_r269480675
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/metadata/MTree.java
 ##
 @@ -212,6 +212,34 @@ public void setStorageGroup(String path) throws 
PathErrorException {
 setDataFileName(path, cur);
   }
 
+  /**
+   * check whether the input path is storage group or not
+   * @param path input path
+   * @return if it is storage group, return true. Else return false
+   */
+  public boolean checkStorageGroup(String path) {
+String[] nodeNames = path.split(DOUB_SEPARATOR);
+MNode cur = root;
+if (nodeNames.length <= 1 || !nodeNames[0].equals(root.getName())) {
+  return false;
+}
+int i = 1;
+while (i < nodeNames.length - 1) {
+  MNode temp = cur.getChild(nodeNames[i]);
+  if (temp == null && temp.isStorageLevel()) {
 
 Review comment:
   should be "or", fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] mdf369 commented on a change in pull request #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
mdf369 commented on a change in pull request #114: add check SG for MManager, 
MGraph and MTree
URL: https://github.com/apache/incubator-iotdb/pull/114#discussion_r269480303
 
 

 ##
 File path: iotdb/src/main/java/org/apache/iotdb/db/metadata/MTree.java
 ##
 @@ -212,6 +212,34 @@ public void setStorageGroup(String path) throws 
PathErrorException {
 setDataFileName(path, cur);
   }
 
+  /**
+   * check whether the input path is storage group or not
+   * @param path input path
+   * @return if it is storage group, return true. Else return false
+   */
+  public boolean checkStorageGroup(String path) {
+String[] nodeNames = path.split(DOUB_SEPARATOR);
+MNode cur = root;
+if (nodeNames.length <= 1 || !nodeNames[0].equals(root.getName())) {
 
 Review comment:
   I don't see the difference?
   This is the same as the implementation of setStorageGroup().


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] mdf369 edited a comment on issue #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
mdf369 edited a comment on issue #114: add check SG for MManager, MGraph and 
MTree
URL: https://github.com/apache/incubator-iotdb/pull/114#issuecomment-477076324
 
 
   test coverage:
   ![Screenshot 2019-03-27 18:02:14](https://user-images.githubusercontent.com/16442316/55067851-6f6c8480-50bb-11e9-9b0e-f210e3227259.png)
   ![Screenshot 2019-03-27 18:09:48](https://user-images.githubusercontent.com/16442316/55067915-8d39e980-50bb-11e9-9486-d2fd6706d349.png)
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] mdf369 commented on issue #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
mdf369 commented on issue #114: add check SG for MManager, MGraph and MTree
URL: https://github.com/apache/incubator-iotdb/pull/114#issuecomment-477076324
 
 
   test coverage:
   ![Screenshot 2019-03-27 18:02:14](https://user-images.githubusercontent.com/16442316/55067851-6f6c8480-50bb-11e9-9b0e-f210e3227259.png)
   ![Screenshot 2019-03-27 18:03:47](https://user-images.githubusercontent.com/16442316/55067852-70051b00-50bb-11e9-9744-0b11a2010fec.png)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] mdf369 opened a new pull request #114: add check SG for MManager, MGraph and MTree

2019-03-27 Thread GitBox
mdf369 opened a new pull request #114: add check SG for MManager, MGraph and 
MTree
URL: https://github.com/apache/incubator-iotdb/pull/114
 
 
   Add a method to check whether an input path has already been set to storage 
level or not. This method will be used in the Cluster module. It helps a node 
decide whether it is able to execute a command or not. A hedged usage sketch 
follows below.
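   A hedged usage sketch: the wrapper method and its caller are hypothetical, 
and the MManager-level signature is assumed to mirror MTree's 
checkStorageGroup(String); only that check itself comes from this PR.
   ```java
// Hypothetical caller in a cluster node: only execute a command locally if
// the target storage group has already been registered on this node.
public boolean canExecuteLocally(String storageGroupPath) {
  return MManager.getInstance().checkStorageGroup(storageGroupPath);
}
   ```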


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 opened a new pull request #113: [IOTDB-67] call valueDecoder reset() when reading data from a new Page

2019-03-26 Thread GitBox
jixuan1989 opened a new pull request #113: [IOTDB-67] call valueDecoder reset() 
when reading data from a new Page
URL: https://github.com/apache/incubator-iotdb/pull/113
 
 
   tsfile/example/src/main/java/org/apache/iotdb/tsfile/TsFileSequenceRead has 
a bug:
   
   If the data type is Float and the TsFile uses RLE encoding, an exception will 
occur when we read data with this class and a chunk contains more than 2 pages. 
   
   This is because, for each page, we need to either call `valueDecoder.reset()` 
or create a new decoder; however, the example java file does neither. A sketch 
of the per-page pattern is below.
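   A hedged sketch of the per-page pattern, assuming the decoder API as used by 
the example class (`reset()`, `hasNext(ByteBuffer)`, `readFloat(ByteBuffer)`); 
the surrounding loop and the page list are simplified.
   ```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.iotdb.tsfile.encoding.decoder.Decoder;

public class PerPageDecodeSketch {

  /** Decode every page of one FLOAT chunk; the key point is the reset() per page. */
  void decodeFloatChunk(List<ByteBuffer> pagesOfChunk, Decoder valueDecoder) throws IOException {
    for (ByteBuffer pageData : pagesOfChunk) {
      // Without this reset (or a freshly created decoder), state left over from
      // the previous page corrupts decoding, e.g. FLOAT with RLE and >1 page.
      valueDecoder.reset();
      while (valueDecoder.hasNext(pageData)) {
        float value = valueDecoder.readFloat(pageData);
        // consume the value (application-specific)
      }
    }
  }
}
   ```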


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] jixuan1989 merged pull request #112: [IOTDB-62]set SQL parse error as debug level log; fix some sonar issues

2019-03-26 Thread GitBox
jixuan1989 merged pull request #112: [IOTDB-62]set SQL parse error as debug 
level log; fix some sonar issues
URL: https://github.com/apache/incubator-iotdb/pull/112
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268921776
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/aggregation/AggregationConstant.java
 ##
 @@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.aggregation;
+
+import org.apache.iotdb.tsfile.common.constant.StatisticConstant;
+
+public class AggregationConstant {
+
+  public static final String MIN_TIME = StatisticConstant.MIN_TIME;
 
 Review comment:
   Why is this class needed? Isn't the StatisticConstant class enough on its own?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268932498
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineQueryRouter.java
 ##
 @@ -88,6 +100,154 @@ public QueryDataSet query(QueryExpression queryExpression)
 }
   }
 
+  /**
+   * execute aggregation query.
+   */
+  public QueryDataSet aggregate(List selectedSeries, List aggres,
+  IExpression expression) throws QueryFilterOptimizationException, 
FileNodeManagerException,
+  IOException, PathErrorException, ProcessorException {
+
+long nextJobId = getNextJobId();
+QueryTokenManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+
OpenedFilePathsManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+
+QueryContext context = new QueryContext();
+
+if (expression != null) {
+  IExpression optimizedExpression = ExpressionOptimizer.getInstance()
+  .optimize(expression, selectedSeries);
+  AggregateEngineExecutor engineExecutor = new 
AggregateEngineExecutor(nextJobId,
+  selectedSeries, aggres, optimizedExpression);
+  if (optimizedExpression.getType() == ExpressionType.GLOBAL_TIME) {
+return engineExecutor.executeWithOutTimeGenerator(context);
+  } else {
+return engineExecutor.executeWithTimeGenerator(context);
+  }
+} else {
+  AggregateEngineExecutor engineExecutor = new 
AggregateEngineExecutor(nextJobId,
+  selectedSeries, aggres, null);
+  return engineExecutor.executeWithOutTimeGenerator(context);
+}
+  }
+
+  /**
+   * execute groupBy query.
+   *
+   * @param selectedSeries select path list
+   * @param aggres aggregation name list
+   * @param expression filter expression
+   * @param unit time granularity for interval partitioning, unit is ms.
+   * @param origin the datum time point for interval division is divided into 
a time interval for
+   *each TimeUnit time from this point forward and backward.
+   * @param intervals time intervals, closed interval.
+   */
+  public QueryDataSet groupBy(List selectedSeries, List aggres,
+  IExpression expression, long unit, long origin, List> 
intervals)
+  throws ProcessorException, QueryFilterOptimizationException, 
FileNodeManagerException,
+  PathErrorException, IOException {
+
+long nextJobId = getNextJobId();
+QueryTokenManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+
OpenedFilePathsManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+QueryContext context = new QueryContext();
+
+//check the legitimacy of intervals
+for (Pair pair : intervals) {
+  if (!(pair.left > 0 && pair.right > 0)) {
+throw new ProcessorException(
+String.format("Time interval<%d, %d> must be greater than 0.", 
pair.left, pair.right));
+  }
+  if (pair.right < pair.left) {
+throw new ProcessorException(String.format(
+"Interval starting time must be greater than the interval ending 
time, "
++ "found error interval<%d, %d>", pair.left, pair.right));
+  }
+}
+//merge intervals
+List> mergedIntervalList = mergeInterval(intervals);
+
+//construct groupBy intervals filter
+BinaryExpression intervalFilter = null;
+for (Pair pair : mergedIntervalList) {
+  BinaryExpression pairFilter = BinaryExpression
+  .and(new GlobalTimeExpression(TimeFilter.gtEq(pair.left)),
+  new GlobalTimeExpression(TimeFilter.ltEq(pair.right)));
+  if (intervalFilter != null) {
+intervalFilter = BinaryExpression.or(intervalFilter, pairFilter);
+  } else {
+intervalFilter = pairFilter;
+  }
+}
+
+//merge interval filter and filtering conditions after where statements
+if (expression == null) {
+  expression = intervalFilter;
+} else {
+  expression = BinaryExpression.and(expression, intervalFilter);
+}
+
+IExpression optimizedExpression = ExpressionOptimizer.getInstance()
+.optimize(expression, selectedSeries);
+if (optimizedExpression.getType() == ExpressionType.GLOBAL_TIME) {
+  GroupByWithOnlyTimeFilterDataSet groupByEngine = new 
GroupByWithOnlyTimeFilterDataSet(
+  nextJobId, selectedSeries, unit, origin, mergedIntervalList);
+  groupByEngine.initGroupBy(context, aggres, optimizedExpression);
+  return groupByEngine;
+} else {
+  GroupByWithValueFilterDataSet groupByEngine = new 
GroupByWithValueFilterDataSet(nextJobId,
+  selectedSeries, unit, origin, mergedIntervalList);
+  groupByEngine.initGroupBy(context, aggres, optimizedExpression);
 
 Review comment:
   Can the invocation of this method be moved to the constructor?


This is an automated message from the Apache Git 

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268951847
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithOnlyTimeFilterDataSet.java
 ##
 @@ -0,0 +1,281 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.engine.querycontext.QueryDataSource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryDataSourceManager;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.IAggregateReader;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.PriorityMergeReader;
+import org.apache.iotdb.db.query.reader.sequence.SequenceDataReader;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.expression.impl.GlobalTimeExpression;
+import org.apache.iotdb.tsfile.read.filter.basic.Filter;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithOnlyTimeFilterDataSet extends GroupByEngine {
+
+  protected List unSequenceReaderList;
+  protected List sequenceReaderList;
+  private List batchDataList;
+  private List hasCachedSequenceDataList;
+  private Filter timeFilter;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithOnlyTimeFilterDataSet(long jobId, List paths, long 
unit, long origin,
+  List> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.unSequenceReaderList = new ArrayList<>();
+this.sequenceReaderList = new ArrayList<>();
+this.timeFilter = null;
+this.hasCachedSequenceDataList = new ArrayList<>();
+this.batchDataList = new ArrayList<>();
+for (int i = 0; i < paths.size(); i++) {
+  hasCachedSequenceDataList.add(false);
+  batchDataList.add(null);
+}
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List aggres, 
IExpression expression)
+  throws FileNodeManagerException, PathErrorException, ProcessorException, 
IOException {
+initAggreFuction(aggres);
+//init reader
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+if (expression != null) {
+  timeFilter = ((GlobalTimeExpression) expression).getFilter();
+}
+for (int i = 0; i < selectedSeries.size(); i++) {
+  QueryDataSource queryDataSource = QueryDataSourceManager
+  .getQueryDataSource(jobId, selectedSeries.get(i), context);
+
+  // sequence reader for sealed tsfile, unsealed tsfile, memory
+  SequenceDataReader sequenceReader = new 
SequenceDataReader(queryDataSource.getSeqDataSource(),
+  timeFilter, context, false);
+
+  // unseq reader for all chunk groups in unSeqFile, memory
+  PriorityMergeReader unSeqMergeReader = SeriesReaderFactory.getInstance()
+  
.createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), 
timeFilter);
+
+  sequenceReaderList.add(sequenceReader);
+  unSequenceReaderList.add(unSeqMergeReader);
+}
+
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+ 

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268951389
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithOnlyTimeFilterDataSet.java
 ##
 @@ -0,0 +1,281 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.engine.querycontext.QueryDataSource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryDataSourceManager;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.IAggregateReader;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.PriorityMergeReader;
+import org.apache.iotdb.db.query.reader.sequence.SequenceDataReader;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.expression.impl.GlobalTimeExpression;
+import org.apache.iotdb.tsfile.read.filter.basic.Filter;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithOnlyTimeFilterDataSet extends GroupByEngine {
+
+  protected List unSequenceReaderList;
+  protected List sequenceReaderList;
+  private List batchDataList;
+  private List hasCachedSequenceDataList;
+  private Filter timeFilter;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithOnlyTimeFilterDataSet(long jobId, List paths, long 
unit, long origin,
+  List> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.unSequenceReaderList = new ArrayList<>();
+this.sequenceReaderList = new ArrayList<>();
+this.timeFilter = null;
+this.hasCachedSequenceDataList = new ArrayList<>();
+this.batchDataList = new ArrayList<>();
+for (int i = 0; i < paths.size(); i++) {
+  hasCachedSequenceDataList.add(false);
+  batchDataList.add(null);
+}
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List aggres, 
IExpression expression)
+  throws FileNodeManagerException, PathErrorException, ProcessorException, 
IOException {
+initAggreFuction(aggres);
+//init reader
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+if (expression != null) {
+  timeFilter = ((GlobalTimeExpression) expression).getFilter();
+}
+for (int i = 0; i < selectedSeries.size(); i++) {
+  QueryDataSource queryDataSource = QueryDataSourceManager
+  .getQueryDataSource(jobId, selectedSeries.get(i), context);
+
+  // sequence reader for sealed tsfile, unsealed tsfile, memory
+  SequenceDataReader sequenceReader = new 
SequenceDataReader(queryDataSource.getSeqDataSource(),
+  timeFilter, context, false);
+
+  // unseq reader for all chunk groups in unSeqFile, memory
+  PriorityMergeReader unSeqMergeReader = SeriesReaderFactory.getInstance()
+  
.createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), 
timeFilter);
+
+  sequenceReaderList.add(sequenceReader);
+  unSequenceReaderList.add(unSeqMergeReader);
+}
+
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+ 

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268956961
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithValueFilterDataSet.java
 ##
 @@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.query.timegenerator.EngineTimeGenerator;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.timegenerator.TimeGenerator;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithValueFilterDataSet extends GroupByEngine {
+
+
+  private List allDataReaderList;
+  private TimeGenerator timestampGenerator;
+  private long timestamp;
+  private boolean hasCachedTimestamp;
+
+  //group by batch calculation size.
 
 Review comment:
   The `/** ... */` comment format is better.
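   Applied to the field in this diff, that would read:
   ```java
/**
 * Group by batch calculation size.
 */
private int timeStampFetchSize;
   ```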


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268956872
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithOnlyTimeFilterDataSet.java
 ##
 @@ -0,0 +1,281 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.engine.querycontext.QueryDataSource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryDataSourceManager;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.IAggregateReader;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.PriorityMergeReader;
+import org.apache.iotdb.db.query.reader.sequence.SequenceDataReader;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.expression.impl.GlobalTimeExpression;
+import org.apache.iotdb.tsfile.read.filter.basic.Filter;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithOnlyTimeFilterDataSet extends GroupByEngine {
+
+  protected List unSequenceReaderList;
+  protected List sequenceReaderList;
+  private List batchDataList;
+  private List hasCachedSequenceDataList;
+  private Filter timeFilter;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithOnlyTimeFilterDataSet(long jobId, List paths, long 
unit, long origin,
+  List> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.unSequenceReaderList = new ArrayList<>();
+this.sequenceReaderList = new ArrayList<>();
+this.timeFilter = null;
+this.hasCachedSequenceDataList = new ArrayList<>();
+this.batchDataList = new ArrayList<>();
+for (int i = 0; i < paths.size(); i++) {
+  hasCachedSequenceDataList.add(false);
+  batchDataList.add(null);
+}
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List aggres, 
IExpression expression)
+  throws FileNodeManagerException, PathErrorException, ProcessorException, 
IOException {
+initAggreFuction(aggres);
+//init reader
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+if (expression != null) {
+  timeFilter = ((GlobalTimeExpression) expression).getFilter();
+}
+for (int i = 0; i < selectedSeries.size(); i++) {
+  QueryDataSource queryDataSource = QueryDataSourceManager
+  .getQueryDataSource(jobId, selectedSeries.get(i), context);
+
+  // sequence reader for sealed tsfile, unsealed tsfile, memory
+  SequenceDataReader sequenceReader = new 
SequenceDataReader(queryDataSource.getSeqDataSource(),
+  timeFilter, context, false);
+
+  // unseq reader for all chunk groups in unSeqFile, memory
+  PriorityMergeReader unSeqMergeReader = SeriesReaderFactory.getInstance()
+  
.createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), 
timeFilter);
+
+  sequenceReaderList.add(sequenceReader);
+  unSequenceReaderList.add(unSeqMergeReader);
+}
+
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+ 

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268957467
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithValueFilterDataSet.java
 ##
 @@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.query.timegenerator.EngineTimeGenerator;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.timegenerator.TimeGenerator;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithValueFilterDataSet extends GroupByEngine {
+
+
+  private List allDataReaderList;
+  private TimeGenerator timestampGenerator;
+  private long timestamp;
+  private boolean hasCachedTimestamp;
+
+  //group by batch calculation size.
+  private int timeStampFetchSize;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithValueFilterDataSet(long jobId, List paths, long 
unit, long origin,
+  List> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.allDataReaderList = new ArrayList<>();
+this.timeStampFetchSize = 10 * 
IoTDBDescriptor.getInstance().getConfig().getFetchSize();
 
 Review comment:
   [important]
   Simply defining `timeStampFetchSize = 10 * 
IoTDBDescriptor.getInstance().getConfig().getFetchSize();` is not good. At the 
very least, we should derive the size from the value filter size and the number 
of selected paths.
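   Purely as an illustration of that suggestion (the scaling rule below is 
hypothetical, not the project's actual policy, and it ignores the filter size):
   ```java
// Hypothetical sizing: shrink the per-batch timestamp count as more series
// (and therefore more readers) take part, with the configured size as a floor.
int baseFetchSize = IoTDBDescriptor.getInstance().getConfig().getFetchSize();
int seriesCount = Math.max(1, selectedSeries.size());
this.timeStampFetchSize = Math.max(baseFetchSize, 10 * baseFetchSize / seriesCount);
   ```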


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268950743
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByEngine.java
 ##
 @@ -0,0 +1,172 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.query.aggregation.AggreFuncFactory;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.tsfile.exception.write.UnSupportedDataTypeException;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public abstract class GroupByEngine extends QueryDataSet {
+
+  protected long jobId;
+  protected List selectedSeries;
+  private long unit;
+  private long origin;
+  protected List> mergedIntervals;
+
+  protected long startTime;
+  protected long endTime;
+  protected int usedIndex;
+  protected List functions;
+  protected boolean hasCachedTimeInterval;
+
+  /**
+   * groupBy query.
+   */
+  public GroupByEngine(long jobId, List paths, long unit, long origin,
+  List> mergedIntervals) {
+super(paths);
+this.jobId = jobId;
+this.selectedSeries = paths;
+this.unit = unit;
+this.origin = origin;
+this.mergedIntervals = mergedIntervals;
+
+this.functions = new ArrayList<>();
+
+//init group by time partition
 
 Review comment:
   I think the format of all comments should be unified. A blank after `//`, the 
same as in the other comments?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268925973
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByEngine.java
 ##
 @@ -0,0 +1,172 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
 
 Review comment:
   Please remove the unused imports.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268920037
 
 

 ##
 File path: 
tsfile/src/main/java/org/apache/iotdb/tsfile/read/common/BatchData.java
 ##
 @@ -527,4 +527,62 @@ public void setAnObject(int idx, Comparable v) {
   public int length() {
 return this.timeLength;
   }
+
+  public int getCurIdx() {
+return curIdx;
+  }
+
+  public long getTimeByIndex(int idx){
+rangeCheckForTime(idx);
+return this.timeRet.get(idx / timeCapacity)[idx % timeCapacity];
+  }
+
+  public long getLongByIndex(int idx){
+rangeCheck(idx);
+return this.longRet.get(idx / timeCapacity)[idx % timeCapacity];
+  }
+
+  public double getDoubleByIndex(int idx) {
 
 Review comment:
   All of these methods can be made private.
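
   A minimal, self-contained sketch of that suggestion, assuming the index-based getters are only needed inside the class itself; the class and fields below are illustrative, not the real BatchData:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class PrivateAccessorSketch {

     private final List<long[]> timeRet = new ArrayList<>();
     private final int timeCapacity = 16;

     // kept private: callers outside the class cannot depend on raw index access
     private long getTimeByIndex(int idx) {
       return timeRet.get(idx / timeCapacity)[idx % timeCapacity];
     }

     public long firstTime() {
       return getTimeByIndex(0);
     }

     public static void main(String[] args) {
       PrivateAccessorSketch sketch = new PrivateAccessorSketch();
       sketch.timeRet.add(new long[]{42L});
       System.out.println(sketch.firstTime());
     }
   }
   ```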


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268958594
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithValueFilterDataSet.java
 ##
 @@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.query.timegenerator.EngineTimeGenerator;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.timegenerator.TimeGenerator;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithValueFilterDataSet extends GroupByEngine {
+
+
+  private List<EngineReaderByTimeStamp> allDataReaderList;
+  private TimeGenerator timestampGenerator;
+  private long timestamp;
 
 Review comment:
   Could you add some comments for these two variables?
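
   For example, a hedged sketch of what the requested comments could look like; the wording is illustrative and assumes the fields cache the last timestamp produced by the time generator:

   ```java
   public class CommentedFieldsSketch {

     // the last timestamp taken from the time generator but not yet consumed
     private long timestamp;

     // whether `timestamp` currently holds a cached, unconsumed value
     private boolean hasCachedTimestamp;

     public static void main(String[] args) {
       System.out.println(new CommentedFieldsSketch().hasCachedTimestamp);
     }
   }
   ```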


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268926105
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByEngine.java
 ##
 @@ -0,0 +1,172 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.query.aggregation.AggreFuncFactory;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.tsfile.exception.write.UnSupportedDataTypeException;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public abstract class GroupByEngine extends QueryDataSet {
 
 Review comment:
   It would be better for this class name to end with `DataSet`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268926495
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithValueFilterDataSet.java
 ##
 @@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.query.timegenerator.EngineTimeGenerator;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.timegenerator.TimeGenerator;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithValueFilterDataSet extends GroupByEngine {
+
+
+  private List<EngineReaderByTimeStamp> allDataReaderList;
+  private TimeGenerator timestampGenerator;
+  private long timestamp;
+  private boolean hasCachedTimestamp;
+
+  //group by batch calculation size.
+  private int timeStampFetchSize;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithValueFilterDataSet(long jobId, List<Path> paths, long unit, long origin,
+      List<Pair<Long, Long>> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.allDataReaderList = new ArrayList<>();
+this.timeStampFetchSize = 10 * 
IoTDBDescriptor.getInstance().getConfig().getFetchSize();
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List<String> aggres, IExpression expression)
+      throws FileNodeManagerException, PathErrorException, ProcessorException, IOException {
+initAggreFuction(aggres);
+
+QueryTokenManager.getInstance().beginQueryOfGivenExpression(jobId, 
expression);
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+this.timestampGenerator = new EngineTimeGenerator(jobId, expression, 
context);
+this.allDataReaderList = SeriesReaderFactory
+.getByTimestampReadersOfSelectedPaths(jobId, selectedSeries, context);
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+}
+hasCachedTimeInterval = false;
+for (AggregateFunction function : functions) {
+  function.init();
+}
+
+long[] timestampArray = new long[timeStampFetchSize];
+int timeArrayLength = 0;
+if (hasCachedTimestamp) {
+  if (timestamp < endTime) {
+hasCachedTimestamp = false;
+timestampArray[timeArrayLength++] = timestamp;
+  } else {
+//所有域均为空 (all fields are empty)
 
 Review comment:
   This comment is in Chinese; please translate it into English.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268926413
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByEngine.java
 ##
 @@ -0,0 +1,172 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.metadata.MManager;
+import org.apache.iotdb.db.query.aggregation.AggreFuncFactory;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.tsfile.exception.write.UnSupportedDataTypeException;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.dataset.QueryDataSet;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public abstract class GroupByEngine extends QueryDataSet {
 
 Review comment:
   There are some problems hinted at by IDEA and Sonar that you can resolve yourself :), such as `private` visibility, exception handling, and so on.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268931066
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineQueryRouter.java
 ##
 @@ -88,6 +100,154 @@ public QueryDataSet query(QueryExpression queryExpression)
 }
   }
 
+  /**
+   * execute aggregation query.
+   */
+  public QueryDataSet aggregate(List<Path> selectedSeries, List<String> aggres,
+      IExpression expression) throws QueryFilterOptimizationException, FileNodeManagerException,
+      IOException, PathErrorException, ProcessorException {
+
+long nextJobId = getNextJobId();
+QueryTokenManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+
OpenedFilePathsManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+
+QueryContext context = new QueryContext();
+
+if (expression != null) {
+  IExpression optimizedExpression = ExpressionOptimizer.getInstance()
+  .optimize(expression, selectedSeries);
+  AggregateEngineExecutor engineExecutor = new 
AggregateEngineExecutor(nextJobId,
+  selectedSeries, aggres, optimizedExpression);
+  if (optimizedExpression.getType() == ExpressionType.GLOBAL_TIME) {
+return engineExecutor.executeWithOutTimeGenerator(context);
+  } else {
+return engineExecutor.executeWithTimeGenerator(context);
+  }
+} else {
+  AggregateEngineExecutor engineExecutor = new 
AggregateEngineExecutor(nextJobId,
+  selectedSeries, aggres, null);
+  return engineExecutor.executeWithOutTimeGenerator(context);
+}
+  }
+
+  /**
+   * execute groupBy query.
+   *
+   * @param selectedSeries select path list
+   * @param aggres aggregation name list
+   * @param expression filter expression
+   * @param unit time granularity for interval partitioning, unit is ms.
+   * @param origin the datum time point for interval division is divided into 
a time interval for
+   *each TimeUnit time from this point forward and backward.
+   * @param intervals time intervals, closed interval.
+   */
+  public QueryDataSet groupBy(List<Path> selectedSeries, List<String> aggres,
+      IExpression expression, long unit, long origin, List<Pair<Long, Long>> intervals)
+      throws ProcessorException, QueryFilterOptimizationException, FileNodeManagerException,
+      PathErrorException, IOException {
+
+long nextJobId = getNextJobId();
+QueryTokenManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+
OpenedFilePathsManager.getInstance().setJobIdForCurrentRequestThread(nextJobId);
+QueryContext context = new QueryContext();
+
+//check the legitimacy of intervals
+for (Pair<Long, Long> pair : intervals) {
+  if (!(pair.left > 0 && pair.right > 0)) {
+throw new ProcessorException(
+String.format("Time interval<%d, %d> must be greater than 0.", 
pair.left, pair.right));
+  }
+  if (pair.right < pair.left) {
+throw new ProcessorException(String.format(
+"Interval starting time must be greater than the interval ending 
time, "
++ "found error interval<%d, %d>", pair.left, pair.right));
+  }
+}
+//merge intervals
+List<Pair<Long, Long>> mergedIntervalList = mergeInterval(intervals);
+
+//construct groupBy intervals filter
+BinaryExpression intervalFilter = null;
+for (Pair<Long, Long> pair : mergedIntervalList) {
+  BinaryExpression pairFilter = BinaryExpression
+  .and(new GlobalTimeExpression(TimeFilter.gtEq(pair.left)),
+  new GlobalTimeExpression(TimeFilter.ltEq(pair.right)));
+  if (intervalFilter != null) {
+intervalFilter = BinaryExpression.or(intervalFilter, pairFilter);
+  } else {
+intervalFilter = pairFilter;
+  }
+}
+
+//merge interval filter and filtering conditions after where statements
+if (expression == null) {
+  expression = intervalFilter;
+} else {
+  expression = BinaryExpression.and(expression, intervalFilter);
+}
+
+IExpression optimizedExpression = ExpressionOptimizer.getInstance()
+.optimize(expression, selectedSeries);
+if (optimizedExpression.getType() == ExpressionType.GLOBAL_TIME) {
+  GroupByWithOnlyTimeFilterDataSet groupByEngine = new 
GroupByWithOnlyTimeFilterDataSet(
+  nextJobId, selectedSeries, unit, origin, mergedIntervalList);
+  groupByEngine.initGroupBy(context, aggres, optimizedExpression);
+  return groupByEngine;
+} else {
+  GroupByWithValueFilterDataSet groupByEngine = new 
GroupByWithValueFilterDataSet(nextJobId,
+  selectedSeries, unit, origin, mergedIntervalList);
+  groupByEngine.initGroupBy(context, aggres, optimizedExpression);
+  return groupByEngine;
+}
+  }
+
+  /**
+   * execute fill query.
+   *
+   * @param fillPaths select path list
+   * @param queryTime timestamp
+   * @param fillType type IFill map
+   */
+  
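
The quoted diff calls `mergeInterval(intervals)` without showing its body. Below is a minimal sketch of how overlapping closed intervals could be merged; it is an assumption about the helper's behaviour, not the PR's actual implementation, and it uses plain `long[]` pairs to stay self-contained:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MergeIntervalSketch {

  public static List<long[]> mergeInterval(List<long[]> intervals) {
    List<long[]> merged = new ArrayList<>();
    List<long[]> sorted = new ArrayList<>(intervals);
    // sort by interval start time
    sorted.sort(Comparator.comparingLong(interval -> interval[0]));
    for (long[] interval : sorted) {
      if (merged.isEmpty() || merged.get(merged.size() - 1)[1] < interval[0]) {
        // disjoint from the previous interval: start a new one
        merged.add(new long[]{interval[0], interval[1]});
      } else {
        // overlapping: extend the previous interval's end time
        long[] last = merged.get(merged.size() - 1);
        last[1] = Math.max(last[1], interval[1]);
      }
    }
    return merged;
  }

  public static void main(String[] args) {
    List<long[]> intervals = new ArrayList<>();
    intervals.add(new long[]{1, 5});
    intervals.add(new long[]{3, 8});
    intervals.add(new long[]{10, 12});
    for (long[] interval : mergeInterval(intervals)) {
      System.out.println(interval[0] + " - " + interval[1]);
    }
  }
}
```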

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268957298
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithValueFilterDataSet.java
 ##
 @@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.query.timegenerator.EngineTimeGenerator;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.timegenerator.TimeGenerator;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithValueFilterDataSet extends GroupByEngine {
+
+
+  private List<EngineReaderByTimeStamp> allDataReaderList;
+  private TimeGenerator timestampGenerator;
+  private long timestamp;
+  private boolean hasCachedTimestamp;
+
+  //group by batch calculation size.
+  private int timeStampFetchSize;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithValueFilterDataSet(long jobId, List<Path> paths, long unit, long origin,
+      List<Pair<Long, Long>> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.allDataReaderList = new ArrayList<>();
+this.timeStampFetchSize = 10 * 
IoTDBDescriptor.getInstance().getConfig().getFetchSize();
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List<String> aggres, IExpression expression)
+      throws FileNodeManagerException, PathErrorException, ProcessorException, IOException {
+initAggreFuction(aggres);
+
+QueryTokenManager.getInstance().beginQueryOfGivenExpression(jobId, 
expression);
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+this.timestampGenerator = new EngineTimeGenerator(jobId, expression, 
context);
+this.allDataReaderList = SeriesReaderFactory
+.getByTimestampReadersOfSelectedPaths(jobId, selectedSeries, context);
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+}
+hasCachedTimeInterval = false;
+for (AggregateFunction function : functions) {
+  function.init();
+}
+
+long[] timestampArray = new long[timeStampFetchSize];
+int timeArrayLength = 0;
+if (hasCachedTimestamp) {
+  if (timestamp < endTime) {
+hasCachedTimestamp = false;
+timestampArray[timeArrayLength++] = timestamp;
+  } else {
+//所有域均为空 (all fields are empty)
+return constructRowRecord();
+  }
+}
+
+while (timestampGenerator.hasNext()) {
+  //construct timestamp list
+  for (int cnt = 1; cnt < timeStampFetchSize; cnt++) {
+if (!timestampGenerator.hasNext()) {
+  break;
+}
+timestamp = timestampGenerator.next();
+if (timestamp < endTime) {
+  timestampArray[timeArrayLength++] = timestamp;
+} else {
+  hasCachedTimestamp = true;
+  break;
+}
+  }
+
+  //cal result using timestamp list
+  for (int i = 0; i < selectedSeries.size(); i++) {
+functions.get(i).calcAggregationUsingTimestamps(
+timestampArray, timeArrayLength, allDataReaderList.get(i));
+  }
+
+  timeArrayLength = 0;
+  //judge if it's end
+  if (timestamp >= endTime) {

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268949826
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithOnlyTimeFilterDataSet.java
 ##
 @@ -0,0 +1,281 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.engine.querycontext.QueryDataSource;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryDataSourceManager;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.IAggregateReader;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.PriorityMergeReader;
+import org.apache.iotdb.db.query.reader.sequence.SequenceDataReader;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+import org.apache.iotdb.tsfile.read.common.Field;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.expression.impl.GlobalTimeExpression;
+import org.apache.iotdb.tsfile.read.filter.basic.Filter;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithOnlyTimeFilterDataSet extends GroupByEngine {
+
+  protected List<IPointReader> unSequenceReaderList;
+  protected List<IAggregateReader> sequenceReaderList;
+  private List<BatchData> batchDataList;
+  private List<Boolean> hasCachedSequenceDataList;
+  private Filter timeFilter;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithOnlyTimeFilterDataSet(long jobId, List<Path> paths, long unit, long origin,
+      List<Pair<Long, Long>> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.unSequenceReaderList = new ArrayList<>();
+this.sequenceReaderList = new ArrayList<>();
+this.timeFilter = null;
+this.hasCachedSequenceDataList = new ArrayList<>();
+this.batchDataList = new ArrayList<>();
+for (int i = 0; i < paths.size(); i++) {
+  hasCachedSequenceDataList.add(false);
+  batchDataList.add(null);
+}
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List<String> aggres, IExpression expression)
+      throws FileNodeManagerException, PathErrorException, ProcessorException, IOException {
+initAggreFuction(aggres);
+//init reader
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+if (expression != null) {
+  timeFilter = ((GlobalTimeExpression) expression).getFilter();
+}
+for (int i = 0; i < selectedSeries.size(); i++) {
+  QueryDataSource queryDataSource = QueryDataSourceManager
+  .getQueryDataSource(jobId, selectedSeries.get(i), context);
+
+  // sequence reader for sealed tsfile, unsealed tsfile, memory
+  SequenceDataReader sequenceReader = new 
SequenceDataReader(queryDataSource.getSeqDataSource(),
+  timeFilter, context, false);
+
+  // unseq reader for all chunk groups in unSeqFile, memory
+  PriorityMergeReader unSeqMergeReader = SeriesReaderFactory.getInstance()
+  
.createUnSeqMergeReader(queryDataSource.getOverflowSeriesDataSource(), 
timeFilter);
+
+  sequenceReaderList.add(sequenceReader);
+  unSequenceReaderList.add(unSeqMergeReader);
+}
+
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+ 

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268959842
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithValueFilterDataSet.java
 ##
 @@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.query.timegenerator.EngineTimeGenerator;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.timegenerator.TimeGenerator;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithValueFilterDataSet extends GroupByEngine {
+
+
+  private List<EngineReaderByTimeStamp> allDataReaderList;
+  private TimeGenerator timestampGenerator;
+  private long timestamp;
+  private boolean hasCachedTimestamp;
+
+  //group by batch calculation size.
+  private int timeStampFetchSize;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithValueFilterDataSet(long jobId, List<Path> paths, long unit, long origin,
+      List<Pair<Long, Long>> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.allDataReaderList = new ArrayList<>();
+this.timeStampFetchSize = 10 * 
IoTDBDescriptor.getInstance().getConfig().getFetchSize();
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List<String> aggres, IExpression expression)
+      throws FileNodeManagerException, PathErrorException, ProcessorException, IOException {
+initAggreFuction(aggres);
+
+QueryTokenManager.getInstance().beginQueryOfGivenExpression(jobId, 
expression);
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+this.timestampGenerator = new EngineTimeGenerator(jobId, expression, 
context);
+this.allDataReaderList = SeriesReaderFactory
+.getByTimestampReadersOfSelectedPaths(jobId, selectedSeries, context);
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+}
+hasCachedTimeInterval = false;
+for (AggregateFunction function : functions) {
+  function.init();
+}
+
+long[] timestampArray = new long[timeStampFetchSize];
+int timeArrayLength = 0;
+if (hasCachedTimestamp) {
+  if (timestamp < endTime) {
+hasCachedTimestamp = false;
+timestampArray[timeArrayLength++] = timestamp;
+  } else {
+//所有域均为空 (all fields are empty)
+return constructRowRecord();
+  }
+}
+
+while (timestampGenerator.hasNext()) {
+  //construct timestamp list
+  for (int cnt = 1; cnt < timeStampFetchSize; cnt++) {
+if (!timestampGenerator.hasNext()) {
+  break;
+}
+timestamp = timestampGenerator.next();
+if (timestamp < endTime) {
+  timestampArray[timeArrayLength++] = timestamp;
+} else {
+  hasCachedTimestamp = true;
+  break;
+}
+  }
+
+  //cal result using timestamp list
+  for (int i = 0; i < selectedSeries.size(); i++) {
+functions.get(i).calcAggregationUsingTimestamps(
+timestampArray, timeArrayLength, allDataReaderList.get(i));
+  }
+
+  timeArrayLength = 0;
+  //judge if it's end
+  if (timestamp >= endTime) {

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268920755
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/aggregation/AggregateFunction.java
 ##
 @@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.aggregation;
+
+import java.io.IOException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+
+public abstract class AggregateFunction {
+
+  protected String name;
+  protected AggreResultData resultData;
+  protected TSDataType resultDataType;
+
+  /**
+   * construct.
+   *
+   * @param name aggregate function name.
+   * @param dataType result data type.
+   */
+  public AggregateFunction(String name, TSDataType dataType) {
+this.name = name;
+this.resultDataType = dataType;
+resultData = new AggreResultData(dataType);
+  }
+
+  public abstract void init();
+
+  public abstract AggreResultData getResult();
+
+  /**
+   * 
+   * Calculate the aggregation using PageHeader.
+   * 
+   *
+   * @param pageHeader PageHeader
+   */
+  public abstract void calculateValueFromPageHeader(PageHeader pageHeader)
+  throws ProcessorException;
+
+  /**
+   * 
+   * Could not calculate using calculateValueFromPageHeader 
directly. Calculate the
+   * aggregation according to all decompressed data in this page.
+   * 
+   *
+   * @param dataInThisPage the data in the DataPage
+   * @param unsequenceReader unsequence data reader
+   * @throws IOException TsFile data read exception
+   * @throws ProcessorException wrong aggregation method parameter
+   */
+  public abstract void calculateValueFromPageData(BatchData dataInThisPage,
+  IPointReader unsequenceReader) throws IOException, ProcessorException;
+
+  /**
+   * 
+   * Could not calculate using calculateValueFromPageHeader 
directly. Calculate the
+   * aggregation according to all decompressed data in this page.
+   * 
+   *
+   * @param dataInThisPage the data in the DataPage
+   * @param unsequenceReader unsequence data reader
+   * @param bound the time upper bounder of data in unsequence data reader
+   * @throws IOException TsFile data read exception
+   * @throws ProcessorException wrong aggregation method parameter
+   */
+  public abstract void calculateValueFromPageData(BatchData dataInThisPage,
+  IPointReader unsequenceReader, long bound) throws IOException, 
ProcessorException;
+
+  /**
+   * 
+   * Calculate the aggregation with data in unsequenceReader.
+   * 
+   *
+   * @param unsequenceReader unsequence data reader
+   */
+  public abstract void calculateValueFromUnsequenceReader(IPointReader 
unsequenceReader)
+  throws IOException, ProcessorException;
+
+  /**
+   * 
+   * Calculate the aggregation with data whose timestamp is less than bound in 
unsequenceReader.
+   * 
+   *
+   * @param unsequenceReader unsequence data reader
+   * @param bound the time upper bounder of data in unsequence data reader
+   * @throws IOException TsFile data read exception
+   */
+  public abstract void calculateValueFromUnsequenceReader(IPointReader 
unsequenceReader, long bound)
+  throws IOException, ProcessorException;
+
+  /**
+   * 
+   * This method is calculate the aggregation using the common timestamps of 
cross series filter.
+   * 
+   *
+   * @throws IOException TsFile data read error
+   * @throws ProcessorException wrong aggregation method parameter
 
 Review comment:
   No `ProcessorException` is thrown here, so this `@throws` tag is unnecessary.
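
   A small sketch of the documentation fix being asked for: the Javadoc only declares exceptions the method can actually throw. The class and method below are illustrative, not the PR's API:

   ```java
   import java.io.IOException;

   public class JavadocThrowsSketch {

     /**
      * Calculate an aggregation over the given common timestamps.
      *
      * @param timestamps common timestamps produced by a cross-series filter
      * @throws IOException TsFile data read error
      */
     static int calcWithCommonTimestamps(long[] timestamps) throws IOException {
       if (timestamps == null) {
         throw new IOException("no timestamps available");
       }
       return timestamps.length;
     }

     public static void main(String[] args) throws IOException {
       System.out.println(calcWithCommonTimestamps(new long[]{1L, 2L, 3L}));
     }
   }
   ```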


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268925168
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/aggregation/impl/MeanAggrFunc.java
 ##
 @@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.aggregation.impl;
+
+import java.io.IOException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggreResultData;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.utils.TimeValuePair;
+import org.apache.iotdb.db.utils.TsPrimitiveType;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+
+public class MeanAggrFunc extends AggregateFunction {
+
+  protected double sum = 0.0;
+  private int cnt = 0;
+  private TSDataType seriesDataType;
+
+  public MeanAggrFunc(String name, TSDataType seriesDataType) {
+super(name, TSDataType.DOUBLE);
+this.seriesDataType = seriesDataType;
+  }
+
+  @Override
+  public void init() {
+resultData.reSet();
+sum = 0.0;
+cnt = 0;
+  }
+
+  @Override
+  public AggreResultData getResult() {
+if (cnt > 0) {
+  resultData.setTimestamp(0);
+  resultData.setDoubleRet(sum / cnt);
+}
+return resultData;
+  }
+
+  @Override
+  public void calculateValueFromPageHeader(PageHeader pageHeader) throws 
ProcessorException {
 
 Review comment:
   This exception is never thrown, so we can remove it. The same applies to the other classes.
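
   A minimal sketch of that cleanup: the checked exception disappears from the signature once the body no longer throws it. Names are illustrative, not the PR's classes:

   ```java
   public class UnthrownExceptionSketch {

     // before: void calculateValueFromPageHeader(PageHeader header) throws ProcessorException
     // after: the throws clause is dropped because nothing in the body can throw it
     static void calculateValueFromPageHeader(long minTimestamp, long maxTimestamp) {
       System.out.println("page covers [" + minTimestamp + ", " + maxTimestamp + "]");
     }

     public static void main(String[] args) {
       calculateValueFromPageHeader(0L, 100L);
     }
   }
   ```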


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268929846
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/EngineQueryRouter.java
 ##
 @@ -88,6 +100,154 @@ public QueryDataSet query(QueryExpression queryExpression)
 }
   }
 
+  /**
+   * execute aggregation query.
+   */
+  public QueryDataSet aggregate(List<Path> selectedSeries, List<String> aggres,
+      IExpression expression) throws QueryFilterOptimizationException, FileNodeManagerException,
 
 Review comment:
   The way exceptions are handled in `aggregate`, `groupBy`, and `fill` is different from the `query` method. You can see the Sonar hint.
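
   A hypothetical, self-contained sketch of one way to align the exception handling: checked exceptions raised while building the query are wrapped into a single engine-level exception, so the public signature only declares that one type. The names below are illustrative, not the PR's API:

   ```java
   import java.io.IOException;

   public class WrapCheckedExceptionSketch {

     static class EngineException extends Exception {
       EngineException(Throwable cause) {
         super(cause);
       }
     }

     static String buildReader(boolean fail) throws IOException {
       if (fail) {
         throw new IOException("cannot open data source");
       }
       return "reader";
     }

     static String aggregate(boolean fail) throws EngineException {
       try {
         return buildReader(fail);
       } catch (IOException e) {
         // wrap instead of adding IOException to the public signature
         throw new EngineException(e);
       }
     }

     public static void main(String[] args) throws EngineException {
       System.out.println(aggregate(false));
     }
   }
   ```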


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-26 Thread GitBox
Beyyes commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r268922107
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/aggregation/AggregateFunction.java
 ##
 @@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.aggregation;
+
+import java.io.IOException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+
+public abstract class AggregateFunction {
+
+  protected String name;
+  protected AggreResultData resultData;
+  protected TSDataType resultDataType;
 
 Review comment:
   These fields can be made private.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] little-emotion commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-27 Thread GitBox
little-emotion commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r269552009
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/dataset/EngineDataSetWithoutTimeGenerator.java
 ##
 @@ -16,13 +16,15 @@
  * specific language governing permissions and limitations
  * under the License.
  */
+
 package org.apache.iotdb.db.query.dataset;
 
 import java.io.IOException;
 import java.util.HashSet;
 import java.util.List;
 import java.util.PriorityQueue;
 import java.util.Set;
+import org.apache.iotdb.db.query.reader.IPointReader;
 import org.apache.iotdb.db.query.reader.IReader;
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] little-emotion commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-27 Thread GitBox
little-emotion commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r269612335
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByWithValueFilterDataSet.java
 ##
 @@ -0,0 +1,145 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.conf.IoTDBDescriptor;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
+import org.apache.iotdb.db.exception.PathErrorException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.aggregation.AggregateFunction;
+import org.apache.iotdb.db.query.context.QueryContext;
+import org.apache.iotdb.db.query.control.QueryTokenManager;
+import org.apache.iotdb.db.query.factory.SeriesReaderFactory;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.db.query.timegenerator.EngineTimeGenerator;
+import org.apache.iotdb.tsfile.read.common.Path;
+import org.apache.iotdb.tsfile.read.common.RowRecord;
+import org.apache.iotdb.tsfile.read.expression.IExpression;
+import org.apache.iotdb.tsfile.read.query.timegenerator.TimeGenerator;
+import org.apache.iotdb.tsfile.utils.Pair;
+
+public class GroupByWithValueFilterDataSet extends GroupByEngine {
+
+
+  private List<EngineReaderByTimeStamp> allDataReaderList;
+  private TimeGenerator timestampGenerator;
+  private long timestamp;
+  private boolean hasCachedTimestamp;
+
+  //group by batch calculation size.
+  private int timeStampFetchSize;
+
+  /**
+   * constructor.
+   */
+  public GroupByWithValueFilterDataSet(long jobId, List<Path> paths, long unit, long origin,
+      List<Pair<Long, Long>> mergedIntervals) {
+super(jobId, paths, unit, origin, mergedIntervals);
+this.allDataReaderList = new ArrayList<>();
+this.timeStampFetchSize = 10 * 
IoTDBDescriptor.getInstance().getConfig().getFetchSize();
+  }
+
+  /**
+   * init reader and aggregate function.
+   */
+  public void initGroupBy(QueryContext context, List<String> aggres, IExpression expression)
+      throws FileNodeManagerException, PathErrorException, ProcessorException, IOException {
+initAggreFuction(aggres);
+
+QueryTokenManager.getInstance().beginQueryOfGivenExpression(jobId, 
expression);
+QueryTokenManager.getInstance().beginQueryOfGivenQueryPaths(jobId, 
selectedSeries);
+this.timestampGenerator = new EngineTimeGenerator(jobId, expression, 
context);
+this.allDataReaderList = SeriesReaderFactory
+.getByTimestampReadersOfSelectedPaths(jobId, selectedSeries, context);
+  }
+
+  @Override
+  public RowRecord next() throws IOException {
+if (!hasCachedTimeInterval) {
+  throw new IOException(
+  "need to call hasNext() before calling next() in 
GroupByWithOnlyTimeFilterDataSet.");
+}
+hasCachedTimeInterval = false;
+for (AggregateFunction function : functions) {
+  function.init();
+}
+
+long[] timestampArray = new long[timeStampFetchSize];
+int timeArrayLength = 0;
+if (hasCachedTimestamp) {
+  if (timestamp < endTime) {
+hasCachedTimestamp = false;
+timestampArray[timeArrayLength++] = timestamp;
+  } else {
+//所有域均为空 (all fields are empty)
 
 Review comment:
   removed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] Beyyes merged pull request #103: fix channel close bug in merge process

2019-03-27 Thread GitBox
Beyyes merged pull request #103: fix channel close bug in merge process
URL: https://github.com/apache/incubator-iotdb/pull/103
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] little-emotion commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-27 Thread GitBox
little-emotion commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r269611175
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/executor/GroupByEngine.java
 ##
 @@ -0,0 +1,172 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.executor;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.iotdb.db.exception.FileNodeManagerException;
 
 Review comment:
   ok


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] little-emotion commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-27 Thread GitBox
little-emotion commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r269614347
 
 

 ##
 File path: 
iotdb/src/test/java/org/apache/iotdb/db/query/reader/sequence/SealedTsFilesReaderTest.java
 ##
 @@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.reader.sequence;
+
+import static org.junit.Assert.*;
+
+import org.junit.Test;
+
+public class SealedTsFilesReaderTest {
 
 Review comment:
   deleted


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-iotdb] little-emotion commented on a change in pull request #97: [IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill

2019-03-27 Thread GitBox
little-emotion commented on a change in pull request #97: 
[IOTDB-47][IOTDB-54][IOTDB-59][IOTDB-60]Aggregate+GroupBy+Fill
URL: https://github.com/apache/incubator-iotdb/pull/97#discussion_r269614088
 
 

 ##
 File path: 
iotdb/src/main/java/org/apache/iotdb/db/query/aggregation/AggregateFunction.java
 ##
 @@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iotdb.db.query.aggregation;
+
+import java.io.IOException;
+import org.apache.iotdb.db.exception.ProcessorException;
+import org.apache.iotdb.db.query.reader.IPointReader;
+import org.apache.iotdb.db.query.reader.merge.EngineReaderByTimeStamp;
+import org.apache.iotdb.tsfile.file.header.PageHeader;
+import org.apache.iotdb.tsfile.file.metadata.enums.TSDataType;
+import org.apache.iotdb.tsfile.read.common.BatchData;
+
+public abstract class AggregateFunction {
+
+  protected String name;
+  protected AggreResultData resultData;
+  protected TSDataType resultDataType;
+
+  /**
+   * construct.
+   *
+   * @param name aggregate function name.
+   * @param dataType result data type.
+   */
+  public AggregateFunction(String name, TSDataType dataType) {
+this.name = name;
+this.resultDataType = dataType;
+resultData = new AggreResultData(dataType);
+  }
+
+  public abstract void init();
+
+  public abstract AggreResultData getResult();
+
+  /**
+   * 
+   * Calculate the aggregation using PageHeader.
+   * 
+   *
+   * @param pageHeader PageHeader
+   */
+  public abstract void calculateValueFromPageHeader(PageHeader pageHeader)
+  throws ProcessorException;
+
+  /**
+   * 
+   * Could not calculate using calculateValueFromPageHeader directly. Calculate the
+   * aggregation according to all decompressed data in this page.
+   * 
+   *
+   * @param dataInThisPage the data in the DataPage
+   * @param unsequenceReader unsequence data reader
+   * @throws IOException TsFile data read exception
+   * @throws ProcessorException wrong aggregation method parameter
+   */
+  public abstract void calculateValueFromPageData(BatchData dataInThisPage,
+  IPointReader unsequenceReader) throws IOException, ProcessorException;
+
+  /**
+   * 
+   * Could not calculate using calculateValueFromPageHeader directly. Calculate the
+   * aggregation according to all decompressed data in this page.
+   * 
+   *
+   * @param dataInThisPage the data in the DataPage
+   * @param unsequenceReader unsequence data reader
+   * @param bound the time upper bounder of data in unsequence data reader
+   * @throws IOException TsFile data read exception
+   * @throws ProcessorException wrong aggregation method parameter
+   */
+  public abstract void calculateValueFromPageData(BatchData dataInThisPage,
+  IPointReader unsequenceReader, long bound) throws IOException, ProcessorException;
+
+  /**
+   * 
+   * Calculate the aggregation with data in unsequenceReader.
+   * 
+   *
+   * @param unsequenceReader unsequence data reader
+   */
+  public abstract void calculateValueFromUnsequenceReader(IPointReader unsequenceReader)
+  throws IOException, ProcessorException;
+
+  /**
+   * 
+   * Calculate the aggregation with data whose timestamp is less than bound in unsequenceReader.
+   * 
+   *
+   * @param unsequenceReader unsequence data reader
+   * @param bound the time upper bounder of data in unsequence data reader
+   * @throws IOException TsFile data read exception
+   */
+  public abstract void calculateValueFromUnsequenceReader(IPointReader unsequenceReader, long bound)
+  throws IOException, ProcessorException;
+
+  /**
+   * 
+   * This method calculates the aggregation using the common timestamps of the cross-series filter.
+   * 
+   *
+   * @throws IOException TsFile data read error
+   * @throws ProcessorException wrong aggregation method parameter
 
 Review comment:
   fixed
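   For readers skimming this thread, a standalone, simplified sketch of the contract documented above may help: an aggregate function first tries the cheap path (page-level statistics from the PageHeader) and only falls back to scanning decompressed points when overlapping unsequence data forces it. The classes and method names below are illustrative stand-ins, not the actual IoTDB API.
   ```java
   import java.util.Iterator;
   import java.util.List;

   // Simplified, self-contained illustration of a count-style aggregation;
   // NOT the real AggregateFunction subclass, just the same two-path idea.
   public class CountAggregationSketch {

     /** Hypothetical stand-in for a page header carrying precomputed statistics. */
     static class FakePageHeader {
       final long numOfValues;

       FakePageHeader(long numOfValues) {
         this.numOfValues = numOfValues;
       }
     }

     private long count = 0;

     /** Cheap path: the page statistics already contain everything a count needs. */
     void calculateValueFromPageHeader(FakePageHeader pageHeader) {
       count += pageHeader.numOfValues;
     }

     /** Expensive path: overlapping unsequence data, so every decompressed point is visited. */
     void calculateValueFromPageData(Iterator<Long> timestampsInPage) {
       while (timestampsInPage.hasNext()) {
         timestampsInPage.next();
         count++;
       }
     }

     long getResult() {
       return count;
     }

     public static void main(String[] args) {
       CountAggregationSketch count = new CountAggregationSketch();
       count.calculateValueFromPageHeader(new FakePageHeader(100));      // sealed page, no overlap
       count.calculateValueFromPageData(List.of(1L, 2L, 3L).iterator()); // overlapping page
       System.out.println(count.getResult());                            // prints 103
     }
   }
   ```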


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

