[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3789: [CARBONDATA-3864] Store Size Optimization

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3789:
URL: https://github.com/apache/carbondata/pull/3789#issuecomment-671330840


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1944/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3858: [CARBONDATA-3919] Improve concurrent query performance

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3858:
URL: https://github.com/apache/carbondata/pull/3858#issuecomment-671330199


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1938/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671360833


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3681/
   







[jira] [Resolved] (CARBONDATA-3948) Bloom index fails to create when index server goes to fallback mode

2020-08-10 Thread Kunal Kapoor (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Kapoor resolved CARBONDATA-3948.
--
Fix Version/s: 2.1.0
   Resolution: Fixed

> Bloom index fails to create when index server goes to fallback mode
> ---
>
> Key: CARBONDATA-3948
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3948
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration
>Reporter: Vikram Ahuja
>Priority: Minor
> Fix For: 2.1.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When the index server goes into fallback mode and a create bloom index
> command is triggered, the bloom index creation fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [carbondata] asfgit closed pull request #3775: [CARBONDATA-3948] Fix for Bloom Index create failure

2020-08-10 Thread GitBox


asfgit closed pull request #3775:
URL: https://github.com/apache/carbondata/pull/3775


   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3874: [CARBONDATA-3931]Fix Secondary index with index column as DateType giving wrong results

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3874:
URL: https://github.com/apache/carbondata/pull/3874#issuecomment-671282761


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1936/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3778: [WIP][CARBONDATA-3916] Support array complex type with SI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3778:
URL: https://github.com/apache/carbondata/pull/3778#issuecomment-671341729


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1943/
   







[GitHub] [carbondata] kunal642 commented on pull request #3775: [CARBONDATA-3948] Fix for Bloom Index create failure

2020-08-10 Thread GitBox


kunal642 commented on pull request #3775:
URL: https://github.com/apache/carbondata/pull/3775#issuecomment-671387564


   LGTM







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3858: [CARBONDATA-3919] Improve concurrent query performance

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3858:
URL: https://github.com/apache/carbondata/pull/3858#issuecomment-671287567


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3677/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671288255











[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3879: [WIP] Handling the addition of geo column to hive at the time of table creation.

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3879:
URL: https://github.com/apache/carbondata/pull/3879#issuecomment-671355369


   Build Failed  with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3680/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3879: [WIP] Handling the addition of geo column to hive at the time of table creation.

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3879:
URL: https://github.com/apache/carbondata/pull/3879#issuecomment-671356109


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1941/
   







[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467937372



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/ORCCarbonWriter.java
##
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.*;
+
+import org.apache.carbondata.sdk.file.utils.SDKUtil;
+
+import org.apache.hadoop.hive.ql.io.orc.OrcStruct;
+import org.apache.hadoop.hive.ql.io.orc.Reader;
+import org.apache.hadoop.hive.ql.io.orc.RecordReader;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
+import org.apache.hadoop.io.Text;
+
+/**
+ * Implementation to write ORC rows in CSV format to carbondata file.
+ */
+public class ORCCarbonWriter extends CSVCarbonWriter {
+  private CSVCarbonWriter csvCarbonWriter = null;
+  private Reader orcReader = null;
+  private File[] dataFiles;
+
+  ORCCarbonWriter(CSVCarbonWriter csvCarbonWriter) {
+this.csvCarbonWriter = csvCarbonWriter;
+  }
+
+  @Override
+  public void setDataFiles(File[] dataFiles) {
+this.dataFiles = dataFiles;
+  }
+
+  /**
+   * Load ORC file in iterative way.
+   */
+  @Override
+  public void write() throws IOException {
+if (this.dataFiles == null || this.dataFiles.length == 0) {
+  throw new RuntimeException("'withOrcPath()' must be called to support loading ORC files");
+}
+if (this.csvCarbonWriter == null) {
+  throw new RuntimeException("csv carbon writer can not be null");
+}
+Arrays.sort(this.dataFiles);
+for (File dataFile : this.dataFiles) {
+  this.loadSingleFile(dataFile);
+}
+  }
+
+  private void loadSingleFile(File file) throws IOException {
+orcReader = SDKUtil.buildOrcReader(file.getPath());
+ObjectInspector objectInspector = orcReader.getObjectInspector();
+RecordReader recordReader = orcReader.rows();
+if (objectInspector instanceof StructObjectInspector) {
+  StructObjectInspector structObjectInspector =
+  (StructObjectInspector) orcReader.getObjectInspector();
+  while (recordReader.hasNext()) {
+Object record = recordReader.next(null); // to remove duplicacy.
List valueList = structObjectInspector.getStructFieldsDataAsList(record);
+for (int i = 0; i < valueList.size(); i++) {
+  valueList.set(i, parseOrcObject(valueList.get(i), 0));
+}
+this.csvCarbonWriter.write(valueList.toArray());
+  }
+} else {
+  while (recordReader.hasNext()) {
+Object record = recordReader.next(null); // to remove duplicacy.
+this.csvCarbonWriter.write(new Object[]{parseOrcObject(record, 0)});
+  }
+}
+  }
+
+  private String parseOrcObject(Object recordObject, int level) {
+if (recordObject instanceof OrcStruct) {
+  Objects.requireNonNull(orcReader);
+  StructObjectInspector structObjectInspector = (StructObjectInspector) orcReader
+  .getObjectInspector();
+  List value = structObjectInspector.getStructFieldsDataAsList(recordObject);
+  for (int i = 0; i < value.size(); i++) {
+value.set(i, parseOrcObject(value.get(i), level + 1));
+  }
+  String str = listToString(value, level);
+  if (str.length() > 0) {
+return str.substring(0, str.length() - 1);
+  }
+  return null;
+} else if (recordObject instanceof ArrayList) {
+  ArrayList listValue = (ArrayList) recordObject;
+  for (int i = 0; i < listValue.size(); i++) {
+listValue.set(i, parseOrcObject(listValue.get(i), level + 1));
+  }
+  String str = listToString(listValue, level);
+  if (str.length() > 0) {
+return str.substring(0, str.length() - 1);
+  }
+  return null;
+} else if (recordObject instanceof LinkedHashMap) {
+  LinkedHashMap keyValueRow = (LinkedHashMap) recordObject;
+  for (Map.Entry entry : keyValueRow.entrySet()) {
+Object val = parseOrcObject(keyValueRow.get(entry.getKey()), level + 2);
+keyValueRow.put(entry.getKey(), val);
+  }
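The `parseOrcObject` recursion above collapses nested ORC structs, lists, and maps into delimiter-separated strings before handing each row to the CSV writer, increasing the nesting level as it descends. A minimal standalone sketch of that flattening idea (the class name, `flatten` helper, and the `#`/`$` delimiters are illustrative assumptions, not CarbonData's actual delimiters or implementation):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

public class FlattenSketch {
  // Hypothetical delimiters: '#' at the top level, '$' one level down.
  static String flatten(Object value, int level) {
    if (value instanceof List) {
      List<?> list = (List<?>) value;
      StringBuilder sb = new StringBuilder();
      for (int i = 0; i < list.size(); i++) {
        if (i > 0) {
          sb.append(level == 0 ? '#' : '$');
        }
        sb.append(flatten(list.get(i), level + 1)); // recurse one level deeper
      }
      return sb.toString();
    } else if (value instanceof Map) {
      Map<?, ?> map = (Map<?, ?>) value;
      StringBuilder sb = new StringBuilder();
      boolean first = true;
      for (Map.Entry<?, ?> e : map.entrySet()) {
        if (!first) {
          sb.append(level == 0 ? '#' : '$');
        }
        // key and value are joined one nesting level deeper, like the level + 2 above
        sb.append(e.getKey()).append('$').append(flatten(e.getValue(), level + 2));
        first = false;
      }
      return sb.toString();
    }
    return String.valueOf(value); // primitive leaf
  }

  public static void main(String[] args) {
    System.out.println(flatten(Arrays.asList(1, 2, 3), 0));                      // 1#2#3
    System.out.println(flatten(Arrays.asList(Arrays.asList("a", "b"), "c"), 0)); // a$b#c
  }
}
```

The real writer additionally trims a trailing delimiter via `listToString`; the sketch only shows why the nesting level must travel with the recursion.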

[GitHub] [carbondata] kunal642 commented on a change in pull request #3878: [CARBONDATA-3947]Fixed Hive read/write operation for Insert into Select operation.

2020-08-10 Thread GitBox


kunal642 commented on a change in pull request #3878:
URL: https://github.com/apache/carbondata/pull/3878#discussion_r467938413



##
File path: 
integration/hive/src/main/java/org/apache/carbondata/hive/util/HiveCarbonUtil.java
##
@@ -155,7 +155,7 @@ public static CarbonLoadModel getCarbonLoadModel(String tableName, String databa
 return loadModel;
   }
 
-  private static TableInfo getTableInfo(String tableName, String databaseName, String location,
+  public static TableInfo getTableInfo(String tableName, String databaseName, String location,

Review comment:
   revert this change after moving getCarbonTable to HiveCarbonUtil









[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3885: [CARBONDATA-3946] Support IndexServer with Presto Engine

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3885:
URL: https://github.com/apache/carbondata/pull/3885#issuecomment-671278053


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1935/
   







[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467933189



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -594,6 +614,428 @@ public CarbonWriterBuilder withJsonInput(Schema carbonSchema) {
 return this;
   }
 
+  private void validateCsvFiles() throws IOException {
+File[] dataFiles = this.extractDataFiles();
+for (File dataFile : dataFiles) {

Review comment:
   Added null check









[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467936152



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -660,13 +1102,41 @@ public CarbonWriter build() throws IOException, InvalidLoadOptionException {
   // removed from the load. LoadWithoutConverter flag is going to point to the Loader Builder
   // which will skip Conversion Step.
   loadModel.setLoadWithoutConverterStep(true);
-  return new AvroCarbonWriter(loadModel, hadoopConf, this.avroSchema);
+  AvroCarbonWriter avroCarbonWriter = new AvroCarbonWriter(loadModel,
+  hadoopConf, this.avroSchema);
+  if (this.filePath != null && this.filePath.length() != 0) {
+avroCarbonWriter.setDataFiles(this.dataFiles);
+  }
+  return avroCarbonWriter;
 } else if (this.writerType == WRITER_TYPE.JSON) {
   loadModel.setJsonFileLoad(true);
-  return new JsonCarbonWriter(loadModel, hadoopConf);
+  JsonCarbonWriter jsonCarbonWriter = new JsonCarbonWriter(loadModel, hadoopConf);
+  if (this.filePath != null && this.filePath.length() != 0) {
+jsonCarbonWriter.setDataFiles(this.dataFiles);
+  }
+  return jsonCarbonWriter;
+} else if (this.writerType == WRITER_TYPE.PARQUET) {
+  loadModel.setLoadWithoutConverterStep(true);
+  AvroCarbonWriter avroCarbonWriter = new AvroCarbonWriter(loadModel,
+  hadoopConf, this.avroSchema);
+  ParquetCarbonWriter parquetCarbonWriter = new ParquetCarbonWriter(avroCarbonWriter);
+  parquetCarbonWriter.setDataFiles(this.dataFiles);
+  return parquetCarbonWriter;
+} else if (this.writerType == WRITER_TYPE.ORC) {
+  CSVCarbonWriter csvCarbonWriter = new CSVCarbonWriter(loadModel, hadoopConf);
+  ORCCarbonWriter orcCarbonWriter = new ORCCarbonWriter(csvCarbonWriter);
+  orcCarbonWriter.setDataFiles(this.dataFiles);
+  return orcCarbonWriter;
 } else {
   // CSV
-  return new CSVCarbonWriter(loadModel, hadoopConf);
+  CSVCarbonWriter csvCarbonWriter = new CSVCarbonWriter(loadModel, hadoopConf);
+  if (this.filePath != null && this.filePath.length() != 0) {

Review comment:
   Done.









[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467936238



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -660,13 +1102,41 @@ public CarbonWriter build() throws IOException, InvalidLoadOptionException {
   // removed from the load. LoadWithoutConverter flag is going to point to the Loader Builder
   // which will skip Conversion Step.
   loadModel.setLoadWithoutConverterStep(true);
-  return new AvroCarbonWriter(loadModel, hadoopConf, this.avroSchema);
+  AvroCarbonWriter avroCarbonWriter = new AvroCarbonWriter(loadModel,
+  hadoopConf, this.avroSchema);
+  if (this.filePath != null && this.filePath.length() != 0) {
+avroCarbonWriter.setDataFiles(this.dataFiles);
+  }
+  return avroCarbonWriter;
 } else if (this.writerType == WRITER_TYPE.JSON) {
   loadModel.setJsonFileLoad(true);
-  return new JsonCarbonWriter(loadModel, hadoopConf);
+  JsonCarbonWriter jsonCarbonWriter = new JsonCarbonWriter(loadModel, hadoopConf);
+  if (this.filePath != null && this.filePath.length() != 0) {
+jsonCarbonWriter.setDataFiles(this.dataFiles);
+  }
+  return jsonCarbonWriter;
+} else if (this.writerType == WRITER_TYPE.PARQUET) {
+  loadModel.setLoadWithoutConverterStep(true);
+  AvroCarbonWriter avroCarbonWriter = new AvroCarbonWriter(loadModel,
+  hadoopConf, this.avroSchema);
+  ParquetCarbonWriter parquetCarbonWriter = new ParquetCarbonWriter(avroCarbonWriter);
+  parquetCarbonWriter.setDataFiles(this.dataFiles);
+  return parquetCarbonWriter;
+} else if (this.writerType == WRITER_TYPE.ORC) {
+  CSVCarbonWriter csvCarbonWriter = new CSVCarbonWriter(loadModel, hadoopConf);
+  ORCCarbonWriter orcCarbonWriter = new ORCCarbonWriter(csvCarbonWriter);
+  orcCarbonWriter.setDataFiles(this.dataFiles);
+  return orcCarbonWriter;
 } else {
   // CSV
-  return new CSVCarbonWriter(loadModel, hadoopConf);
+  CSVCarbonWriter csvCarbonWriter = new CSVCarbonWriter(loadModel, hadoopConf);
+  if (this.filePath != null && this.filePath.length() != 0) {
+csvCarbonWriter.setDataFiles(this.dataFiles);
+if (!this.options.containsKey("fileHeader")) {

Review comment:
   Done

##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/JsonCarbonWriter.java
##
@@ -91,4 +102,39 @@ public void close() throws IOException {
   throw new IOException(e);
 }
   }
+
+  private void loadSingleFile(File file) throws IOException {
+try {
+  Reader reader = SDKUtil.buildJsonReader(file);
+  JSONParser jsonParser = new JSONParser();
+  Object jsonRecord = jsonParser.parse(reader);
+  if (jsonRecord instanceof JSONArray) {
+JSONArray jsonArray = (JSONArray) jsonRecord;
+for (Object record : jsonArray) {
+  this.write(record.toString());
+}
+  } else {
+this.write(jsonRecord.toString());
+  }
+} catch (Exception e) {
+  e.printStackTrace();

Review comment:
   Done









[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467936053



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -660,13 +1102,41 @@ public CarbonWriter build() throws IOException, InvalidLoadOptionException {
   // removed from the load. LoadWithoutConverter flag is going to point to the Loader Builder
   // which will skip Conversion Step.
   loadModel.setLoadWithoutConverterStep(true);
-  return new AvroCarbonWriter(loadModel, hadoopConf, this.avroSchema);
+  AvroCarbonWriter avroCarbonWriter = new AvroCarbonWriter(loadModel,
+  hadoopConf, this.avroSchema);
+  if (this.filePath != null && this.filePath.length() != 0) {
+avroCarbonWriter.setDataFiles(this.dataFiles);
+  }
+  return avroCarbonWriter;
 } else if (this.writerType == WRITER_TYPE.JSON) {
   loadModel.setJsonFileLoad(true);
-  return new JsonCarbonWriter(loadModel, hadoopConf);
+  JsonCarbonWriter jsonCarbonWriter = new JsonCarbonWriter(loadModel, hadoopConf);
+  if (this.filePath != null && this.filePath.length() != 0) {

Review comment:
   Done









[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467935938



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -594,6 +614,428 @@ public CarbonWriterBuilder withJsonInput(Schema carbonSchema) {
 return this;
   }
 
+  private void validateCsvFiles() throws IOException {
+File[] dataFiles = this.extractDataFiles();
+for (File dataFile : dataFiles) {
+  try {
+CsvParser csvParser = SDKUtil.buildCsvParser();
+csvParser.beginParsing(dataFile);
+  } catch (IllegalArgumentException ex) {
+if (ex.getCause() instanceof FileNotFoundException) {
+  throw new FileNotFoundException("File " + dataFile +
+  " not found to build carbon writer.");
+}
+throw ex;
+  }
+}
+this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading CSV files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath) throws IOException {
+if (filePath.length() == 0) {
+  throw new IllegalArgumentException("filePath can not be empty");
+}
+this.filePath = filePath;
+this.isDirectory = new File(filePath).isDirectory();
+this.withCsvInput();
+this.validateCsvFiles();
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts CSV files directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the CSV file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath, List<String> fileList)
+  throws IOException {
+this.fileList = fileList;
+this.withCsvPath(filePath);
+return this;
+  }
+
+  private void validateJsonFiles() throws IOException {
+File[] dataFiles = this.extractDataFiles();
+for (File dataFile : dataFiles) {
+  try {
+new JSONParser().parse(SDKUtil.buildJsonReader(dataFile));
+  } catch (FileNotFoundException ex) {
+throw new FileNotFoundException("File " + dataFile + " not found to build carbon writer.");
+  } catch (ParseException ex) {
+throw new RuntimeException("File " + dataFile + " is not in json format.");
+  }
+}
+this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading JSON files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withJsonPath(String filePath) throws IOException {
+if (filePath.length() == 0) {
+  throw new IllegalArgumentException("filePath can not be empty");
+}
+this.filePath = filePath;
+this.isDirectory = new File(filePath).isDirectory();
+this.withJsonInput();
+this.validateJsonFiles();
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts JSON file directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the json file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   * @throws IOException
+   */
+  public CarbonWriterBuilder withJsonPath(String filePath, List<String> fileList)
+  throws IOException {
+this.fileList = fileList;
+this.withJsonPath(filePath);
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading Parquet files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withParquetPath(String filePath) throws IOException {
+if (filePath.length() == 0) {
+  throw new IllegalArgumentException("filePath can not be empty");
+}
+this.filePath = filePath;
+this.isDirectory = new File(filePath).isDirectory();
+this.writerType = WRITER_TYPE.PARQUET;
+this.validateParquetFiles();
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts parquet files directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the parquet file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   * @throws IOException
+   */
+  public CarbonWriterBuilder withParquetPath(String filePath, List<String> fileList)
+  throws IOException {
+this.fileList = fileList;
+this.withParquetPath(filePath);
+return this;
+  }
+
+  private void validateParquetFiles() throws IOException {
+File[] dataFiles = this.extractDataFiles();
+org.apache.avro.Schema parquetSchema = null;
+for (File dataFile : dataFiles) {
+  try {
+ParquetReader parquetReader =
+
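The `withCsvPath`/`withJsonPath`/`withParquetPath` builder methods above all record the path, note whether it is a directory, and then validate the files the path resolves to. A self-contained sketch of the resolution step they depend on (the class name, `extractDataFiles` signature, and directory-listing behavior are assumptions for illustration, not the SDK's exact code):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.util.List;

public class ExtractFilesSketch {
  // Resolve filePath (a single file or a directory) plus an optional explicit
  // file list into the File[] that the writer will iterate over.
  static File[] extractDataFiles(String filePath, List<String> fileList)
      throws FileNotFoundException {
    File root = new File(filePath);
    if (!root.exists()) {
      throw new FileNotFoundException("File " + filePath + " not found to build carbon writer.");
    }
    if (!root.isDirectory()) {
      return new File[]{root};            // single input file
    }
    if (fileList == null || fileList.isEmpty()) {
      return root.listFiles();            // load every file in the directory
    }
    File[] files = new File[fileList.size()];
    for (int i = 0; i < fileList.size(); i++) {
      files[i] = new File(root, fileList.get(i)); // restrict to the named files
    }
    return files;
  }
}
```

Failing fast here, before any carbondata files are written, is what lets the builder surface a clear `FileNotFoundException` instead of a mid-load error.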

[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467935085



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -594,6 +614,428 @@ public CarbonWriterBuilder withJsonInput(Schema carbonSchema) {
 return this;
   }
 
+  private void validateCsvFiles() throws IOException {
+File[] dataFiles = this.extractDataFiles();
+for (File dataFile : dataFiles) {
+  try {
+CsvParser csvParser = SDKUtil.buildCsvParser();
+csvParser.beginParsing(dataFile);
+  } catch (IllegalArgumentException ex) {
+if (ex.getCause() instanceof FileNotFoundException) {
+  throw new FileNotFoundException("File " + dataFile +
+  " not found to build carbon writer.");
+}
+throw ex;
+  }
+}
+this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading CSV files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath) throws IOException {
+if (filePath.length() == 0) {
+  throw new IllegalArgumentException("filePath can not be empty");
+}
+this.filePath = filePath;
+this.isDirectory = new File(filePath).isDirectory();
+this.withCsvInput();
+this.validateCsvFiles();
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts CSV files directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the CSV file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath, List<String> fileList)
+  throws IOException {
+this.fileList = fileList;
+this.withCsvPath(filePath);
+return this;
+  }
+
+  private void validateJsonFiles() throws IOException {
+File[] dataFiles = this.extractDataFiles();
+for (File dataFile : dataFiles) {
+  try {
+new JSONParser().parse(SDKUtil.buildJsonReader(dataFile));
+  } catch (FileNotFoundException ex) {
+throw new FileNotFoundException("File " + dataFile + " not found to build carbon writer.");
+  } catch (ParseException ex) {
+throw new RuntimeException("File " + dataFile + " is not in json format.");
+  }
+}
+this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading JSON files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withJsonPath(String filePath) throws IOException {
+if (filePath.length() == 0) {
+  throw new IllegalArgumentException("filePath can not be empty");
+}
+this.filePath = filePath;
+this.isDirectory = new File(filePath).isDirectory();
+this.withJsonInput();
+this.validateJsonFiles();
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts JSON file directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the json file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   * @throws IOException
+   */
+  public CarbonWriterBuilder withJsonPath(String filePath, List<String> fileList)
+  throws IOException {
+this.fileList = fileList;
+this.withJsonPath(filePath);
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading Parquet files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withParquetPath(String filePath) throws IOException {
+if (filePath.length() == 0) {
+  throw new IllegalArgumentException("filePath can not be empty");
+}
+this.filePath = filePath;
+this.isDirectory = new File(filePath).isDirectory();
+this.writerType = WRITER_TYPE.PARQUET;
+this.validateParquetFiles();
+return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts parquet files directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the parquet file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   * @throws IOException
+   */
+  public CarbonWriterBuilder withParquetPath(String filePath, List<String> fileList)
+  throws IOException {
+this.fileList = fileList;
+this.withParquetPath(filePath);
+return this;
+  }
+
+  private void validateParquetFiles() throws IOException {
+    File[] dataFiles = this.extractDataFiles();
+    org.apache.avro.Schema parquetSchema = null;
+    for (File dataFile : dataFiles) {
+      try {
+        ParquetReader parquetReader =
+

[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3773: [CARBONDATA-3830]Presto array columns read support

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3773:
URL: https://github.com/apache/carbondata/pull/3773#issuecomment-671270077


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3673/
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467933961



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -594,6 +614,428 @@ public CarbonWriterBuilder withJsonInput(Schema 
carbonSchema) {
 return this;
   }
 
+  private void validateCsvFiles() throws IOException {
+    File[] dataFiles = this.extractDataFiles();
+    for (File dataFile : dataFiles) {
+      try {
+        CsvParser csvParser = SDKUtil.buildCsvParser();
+        csvParser.beginParsing(dataFile);
+      } catch (IllegalArgumentException ex) {
+        if (ex.getCause() instanceof FileNotFoundException) {
+          throw new FileNotFoundException("File " + dataFile +
+              " not found to build carbon writer.");
+        }
+        throw ex;
+      }
+    }
+    this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading CSV files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath) throws IOException {
+    if (filePath.length() == 0) {

Review comment:
   Changed to StringUtils.isEmpty(filePath)
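The suggested change matters because `filePath.length() == 0` throws a `NullPointerException` when `filePath` is null, while `StringUtils.isEmpty` treats null and `""` the same way. A minimal sketch of the null-safe check in plain Java (so it runs without the commons-lang dependency; `isEmpty` here mirrors the behavior of `org.apache.commons.lang3.StringUtils.isEmpty`):

```java
public class EmptyCheck {
  // Null-safe emptiness check, equivalent to StringUtils.isEmpty:
  // true for both null and the empty string.
  static boolean isEmpty(String s) {
    return s == null || s.length() == 0;
  }

  public static void main(String[] args) {
    // s.length() == 0 alone would throw NPE on the first call
    System.out.println(isEmpty(null));        // true
    System.out.println(isEmpty(""));          // true
    System.out.println(isEmpty("/tmp/data")); // false
  }
}
```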









[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3773: [CARBONDATA-3830]Presto array columns read support

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3773:
URL: https://github.com/apache/carbondata/pull/3773#issuecomment-671273851


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1934/
   







[GitHub] [carbondata] Karan980 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


Karan980 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671296004


   retest this please







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3789: [CARBONDATA-3864] Store Size Optimization

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3789:
URL: https://github.com/apache/carbondata/pull/3789#issuecomment-671331431


   Build Failed  with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3683/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#issuecomment-671332155


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1939/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3778: [WIP][CARBONDATA-3916] Support array complex type with SI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3778:
URL: https://github.com/apache/carbondata/pull/3778#issuecomment-671340388


   Build Failed  with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3682/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671363697


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1942/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3874: [CARBONDATA-3931]Fix Secondary index with index column as DateType giving wrong results

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3874:
URL: https://github.com/apache/carbondata/pull/3874#issuecomment-671279079


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3675/
   







[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467930559



##
File path: sdk/sdk/pom.xml
##
@@ -48,6 +48,28 @@
       <artifactId>httpclient</artifactId>
       <version>${httpclient.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.parquet</groupId>
+      <artifactId>parquet-avro</artifactId>
+      <version>1.10.0</version>

Review comment:
   Changed to 1.11.0









[GitHub] [carbondata] kunal642 commented on a change in pull request #3878: [CARBONDATA-3947]Fixed Hive read/write operation for Insert into Select operation.

2020-08-10 Thread GitBox


kunal642 commented on a change in pull request #3878:
URL: https://github.com/apache/carbondata/pull/3878#discussion_r467937194



##
File path: 
integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java
##
@@ -213,6 +223,45 @@ private QueryModel getQueryModel(Configuration configuration, String path)
 .build();
   }
 
+  private static CarbonTable getCarbonTable(Configuration tableProperties)
+      throws InvalidConfigurationException, IOException, SQLException {
+    String[] tableUniqueName = tableProperties.get("name").split("\\.");
+    String databaseName = tableUniqueName[0];
+    String tableName = tableUniqueName[1];
+    String tablePath = tableProperties.get(hive_metastoreConstants.META_TABLE_LOCATION);
+    String columns = tableProperties.get(hive_metastoreConstants.META_TABLE_COLUMNS);
+    String sortColumns = tableProperties.get("sort_columns");
+    String columnTypes = tableProperties.get(hive_metastoreConstants.META_TABLE_COLUMN_TYPES);
+    String partitionColumns =
+        tableProperties.get(hive_metastoreConstants.META_TABLE_PARTITION_COLUMNS);
+    String partitionColumnTypes =
+        tableProperties.get(hive_metastoreConstants.META_TABLE_PARTITION_COLUMN_TYPES);
+    if (partitionColumns != null) {
+      columns = columns + "," + partitionColumns;
+      columnTypes = columnTypes + ":" + partitionColumnTypes;
+    }
+    String[] columnTypeArray = HiveCarbonUtil.splitSchemaStringToArray(columnTypes);
+
+    AbsoluteTableIdentifier absoluteTableIdentifier = AbsoluteTableIdentifier
+        .from(tableProperties.get(hive_metastoreConstants.META_TABLE_LOCATION),
+            getDatabaseName(tableProperties), getTableName(tableProperties));
+    String schemaPath =
+        CarbonTablePath.getSchemaFilePath(absoluteTableIdentifier.getTablePath(), tableProperties);
+
+    CarbonTable carbonTable;
+    String carbonDataFile = CarbonUtil.getFilePathExternalFilePath(schemaPath, tableProperties);
+    if (carbonDataFile == null) {

Review comment:
   This method is called only when carbondataFile does not exist. Refer: 
https://github.com/apache/carbondata/pull/3878/files#diff-e4da0735e7ef96dca3a7bfedb5a8039dR96
   
   Please remove this unnecessary listing and directly use
CarbonTable.buildFromTableInfo() to build the carbon table.

   The inferSchema code is also not needed.
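For reference, the column-merging step in the quoted `getCarbonTable` hunk just appends Hive's partition columns to the regular ones, with `,` separating names and `:` separating types. A standalone sketch of that step (the table properties here are made-up illustrative values, not taken from a real metastore):

```java
public class MergeColumns {
  public static void main(String[] args) {
    // Hypothetical table properties, mirroring the quoted getCarbonTable logic
    String columns = "id,name";
    String columnTypes = "int:string";
    String partitionColumns = "dt";      // null for a non-partitioned table
    String partitionColumnTypes = "date";

    // Same concatenation as in the diff: names joined by ",", types by ":"
    if (partitionColumns != null) {
      columns = columns + "," + partitionColumns;
      columnTypes = columnTypes + ":" + partitionColumnTypes;
    }
    System.out.println(columns);      // id,name,dt
    System.out.println(columnTypes);  // int:string:date
  }
}
```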









[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467936785



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/JsonCarbonWriter.java
##
@@ -91,4 +102,39 @@ public void close() throws IOException {
   throw new IOException(e);
 }
   }
+
+  private void loadSingleFile(File file) throws IOException {
+    try {
+      Reader reader = SDKUtil.buildJsonReader(file);
+      JSONParser jsonParser = new JSONParser();
+      Object jsonRecord = jsonParser.parse(reader);
+      if (jsonRecord instanceof JSONArray) {
+        JSONArray jsonArray = (JSONArray) jsonRecord;
+        for (Object record : jsonArray) {
+          this.write(record.toString());
+        }
+      } else {
+        this.write(jsonRecord.toString());
+      }
+    } catch (Exception e) {
+      e.printStackTrace();
+      throw new IOException(e.getMessage());

Review comment:
   closed now.

##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/ORCCarbonWriter.java
##
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.*;
+
+import org.apache.carbondata.sdk.file.utils.SDKUtil;
+
+import org.apache.hadoop.hive.ql.io.orc.OrcStruct;
+import org.apache.hadoop.hive.ql.io.orc.Reader;
+import org.apache.hadoop.hive.ql.io.orc.RecordReader;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
+import org.apache.hadoop.io.Text;
+
+/**
+ * Implementation to write ORC rows in CSV format to carbondata file.
+ */
+public class ORCCarbonWriter extends CSVCarbonWriter {
+  private CSVCarbonWriter csvCarbonWriter = null;
+  private Reader orcReader = null;
+  private File[] dataFiles;
+
+  ORCCarbonWriter(CSVCarbonWriter csvCarbonWriter) {
+    this.csvCarbonWriter = csvCarbonWriter;
+  }
+
+  @Override
+  public void setDataFiles(File[] dataFiles) {
+    this.dataFiles = dataFiles;
+  }
+
+  /**
+   * Load ORC file in iterative way.
+   */
+  @Override
+  public void write() throws IOException {
+    if (this.dataFiles == null || this.dataFiles.length == 0) {
+      throw new RuntimeException("'withOrcPath()' must be called to support loading ORC files");
+    }
+    if (this.csvCarbonWriter == null) {
+      throw new RuntimeException("csv carbon writer can not be null");
+    }
+    Arrays.sort(this.dataFiles);
+    for (File dataFile : this.dataFiles) {
+      this.loadSingleFile(dataFile);
+    }
+  }
+
+  private void loadSingleFile(File file) throws IOException {
+    orcReader = SDKUtil.buildOrcReader(file.getPath());
+    ObjectInspector objectInspector = orcReader.getObjectInspector();
+    RecordReader recordReader = orcReader.rows();
+    if (objectInspector instanceof StructObjectInspector) {
+      StructObjectInspector structObjectInspector =
+          (StructObjectInspector) orcReader.getObjectInspector();
+      while (recordReader.hasNext()) {
+        Object record = recordReader.next(null); // to remove duplicacy.
+        List<Object> valueList = structObjectInspector.getStructFieldsDataAsList(record);
+        for (int i = 0; i < valueList.size(); i++) {
+          valueList.set(i, parseOrcObject(valueList.get(i), 0));
+        }
+        this.csvCarbonWriter.write(valueList.toArray());
+      }
+    } else {
+      while (recordReader.hasNext()) {
+        Object record = recordReader.next(null); // to remove duplicacy.
+        this.csvCarbonWriter.write(new Object[]{parseOrcObject(record, 0)});
+      }
+    }
+  }
+
+  private String parseOrcObject(Object recordObject, int level) {
+    if (recordObject instanceof OrcStruct) {
+      Objects.requireNonNull(orcReader);
+      StructObjectInspector structObjectInspector = (StructObjectInspector) orcReader
+          .getObjectInspector();
+      List<Object> value = structObjectInspector.getStructFieldsDataAsList(recordObject);
+      for (int i = 0; i < value.size(); i++) {
+        value.set(i, parseOrcObject(value.get(i), level + 1));
+      }
+      String str =

[GitHub] [carbondata] kunal642 commented on a change in pull request #3878: [CARBONDATA-3947]Fixed Hive read/write operation for Insert into Select operation.

2020-08-10 Thread GitBox


kunal642 commented on a change in pull request #3878:
URL: https://github.com/apache/carbondata/pull/3878#discussion_r467937442



##
File path: 
integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java
##
@@ -202,7 +212,7 @@ protected void setFilterPredicates(Configuration configuration, CarbonTable carb
   }
 
   private QueryModel getQueryModel(Configuration configuration, String path)
-      throws IOException, InvalidConfigurationException {
+      throws IOException, InvalidConfigurationException, SQLException {

Review comment:
   Which method is throwing SQLException?









[GitHub] [carbondata] Indhumathi27 commented on pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


Indhumathi27 commented on pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#issuecomment-671268912


   retest this please







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3885: [CARBONDATA-3946] Support IndexServer with Presto Engine

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3885:
URL: https://github.com/apache/carbondata/pull/3885#issuecomment-671276686


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3674/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#issuecomment-671334639


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3678/
   







[GitHub] [carbondata] nihal0107 commented on a change in pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on a change in pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#discussion_r467934354



##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -594,6 +614,428 @@ public CarbonWriterBuilder withJsonInput(Schema 
carbonSchema) {
 return this;
   }
 
+  private void validateCsvFiles() throws IOException {
+    File[] dataFiles = this.extractDataFiles();
+    for (File dataFile : dataFiles) {
+      try {
+        CsvParser csvParser = SDKUtil.buildCsvParser();
+        csvParser.beginParsing(dataFile);
+      } catch (IllegalArgumentException ex) {
+        if (ex.getCause() instanceof FileNotFoundException) {
+          throw new FileNotFoundException("File " + dataFile +
+              " not found to build carbon writer.");
+        }
+        throw ex;
+      }
+    }
+    this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading CSV files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath) throws IOException {
+    if (filePath.length() == 0) {
+      throw new IllegalArgumentException("filePath can not be empty");
+    }
+    this.filePath = filePath;
+    this.isDirectory = new File(filePath).isDirectory();

Review comment:
   Handled for all type of path with help of using CarbonFile.

##
File path: 
sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonWriterBuilder.java
##
@@ -594,6 +614,428 @@ public CarbonWriterBuilder withJsonInput(Schema 
carbonSchema) {
 return this;
   }
 
+  private void validateCsvFiles() throws IOException {
+    File[] dataFiles = this.extractDataFiles();
+    for (File dataFile : dataFiles) {
+      try {
+        CsvParser csvParser = SDKUtil.buildCsvParser();
+        csvParser.beginParsing(dataFile);
+      } catch (IllegalArgumentException ex) {
+        if (ex.getCause() instanceof FileNotFoundException) {
+          throw new FileNotFoundException("File " + dataFile +
+              " not found to build carbon writer.");
+        }
+        throw ex;
+      }
+    }
+    this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading CSV files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath) throws IOException {
+    if (filePath.length() == 0) {
+      throw new IllegalArgumentException("filePath can not be empty");
+    }
+    this.filePath = filePath;
+    this.isDirectory = new File(filePath).isDirectory();
+    this.withCsvInput();
+    this.validateCsvFiles();
+    return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts CSV files directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the CSV file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withCsvPath(String filePath, List<String> fileList)
+      throws IOException {
+    this.fileList = fileList;
+    this.withCsvPath(filePath);
+    return this;
+  }
+
+  private void validateJsonFiles() throws IOException {
+    File[] dataFiles = this.extractDataFiles();
+    for (File dataFile : dataFiles) {
+      try {
+        new JSONParser().parse(SDKUtil.buildJsonReader(dataFile));
+      } catch (FileNotFoundException ex) {
+        throw new FileNotFoundException("File " + dataFile + " not found to build carbon writer.");
+      } catch (ParseException ex) {
+        throw new RuntimeException("File " + dataFile + " is not in json format.");
+      }
+    }
+    this.dataFiles = dataFiles;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts loading JSON files.
+   *
+   * @param filePath absolute path under which files should be loaded.
+   * @return CarbonWriterBuilder
+   */
+  public CarbonWriterBuilder withJsonPath(String filePath) throws IOException {
+    if (filePath.length() == 0) {
+      throw new IllegalArgumentException("filePath can not be empty");
+    }
+    this.filePath = filePath;
+    this.isDirectory = new File(filePath).isDirectory();
+    this.withJsonInput();
+    this.validateJsonFiles();
+    return this;
+  }
+
+  /**
+   * to build a {@link CarbonWriter}, which accepts JSON file directory and
+   * list of file which has to be loaded.
+   *
+   * @param filePath directory where the json file exists.
+   * @param fileList list of files which has to be loaded.
+   * @return CarbonWriterBuilder
+   * @throws IOException
+   */
+  public CarbonWriterBuilder withJsonPath(String filePath, List<String> fileList)
+      throws IOException {
+    this.fileList = fileList;
+    this.withJsonPath(filePath);
+    return this;
+  }
+
+  /**
+   * to build a {@link 

[GitHub] [carbondata] kunal642 commented on a change in pull request #3878: [CARBONDATA-3947]Fixed Hive read/write operation for Insert into Select operation.

2020-08-10 Thread GitBox


kunal642 commented on a change in pull request #3878:
URL: https://github.com/apache/carbondata/pull/3878#discussion_r467935312



##
File path: 
integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java
##
@@ -213,6 +223,45 @@ private QueryModel getQueryModel(Configuration configuration, String path)
 .build();
   }
 
+  private static CarbonTable getCarbonTable(Configuration tableProperties)

Review comment:
   Please move this method to HiveCarbonUtil class.









[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#issuecomment-671479305


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1946/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3778: [WIP][CARBONDATA-3916] Support array complex type with SI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3778:
URL: https://github.com/apache/carbondata/pull/3778#issuecomment-671496551


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3687/
   







[GitHub] [carbondata] nihal0107 commented on pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#issuecomment-671481032


   retest this please.







[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #3778: [WIP][CARBONDATA-3916] Support array complex type with SI

2020-08-10 Thread GitBox


Indhumathi27 commented on a change in pull request #3778:
URL: https://github.com/apache/carbondata/pull/3778#discussion_r467978296



##
File path: 
index/secondary-index/src/test/scala/org/apache/carbondata/spark/testsuite/secondaryindex/TestSIWithComplexArrayType.scala
##
@@ -0,0 +1,136 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.spark.testsuite.secondaryindex
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.BeforeAndAfterEach
+
+import org.apache.carbondata.spark.testsuite.secondaryindex.TestSecondaryIndexUtils.isFilterPushedDownToSI
+
+class TestSIWithComplexArrayType extends QueryTest with BeforeAndAfterEach {
+
+  override def beforeEach(): Unit = {
+    sql("drop table if exists complextable")
+  }
+
+  override def afterEach(): Unit = {
+    sql("drop index if exists index_1 on complextable")
+    sql("drop table if exists complextable")
+  }
+
+  test("test array on secondary index") {

Review comment:
   Currently, for d), a query having more than one array_contains filter will
not use SI; the query will hit the main table only. Added a testcase for c).









[GitHub] [carbondata] Karan980 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


Karan980 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671451650


   retest this please







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#issuecomment-671460512


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3685/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3778: [WIP][CARBONDATA-3916] Support array complex type with SI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3778:
URL: https://github.com/apache/carbondata/pull/3778#issuecomment-671499205


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1948/
   







[GitHub] [carbondata] Indhumathi27 commented on pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


Indhumathi27 commented on pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#issuecomment-671410491


   LGTM







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671524252


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3688/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671525912


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1949/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671603256


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3690/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671605765


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1951/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#issuecomment-671536738


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1950/
   







[GitHub] [carbondata] Karan980 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


Karan980 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671536687


   retest this please







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#issuecomment-671551924


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3689/
   







[jira] [Created] (CARBONDATA-3949) Select filter query fails from presto-cli on MV table

2020-08-10 Thread Chetan Bhat (Jira)
Chetan Bhat created CARBONDATA-3949:
---

 Summary: Select filter query fails from presto-cli on MV table
 Key: CARBONDATA-3949
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3949
 Project: CarbonData
  Issue Type: Bug
  Components: presto-integration
Affects Versions: 2.0.0
 Environment: Spark 2.4.5. PrestoSQL 316
Reporter: Chetan Bhat


From spark-sql, create the table, load data and create the MV:

spark-sql> CREATE TABLE uniqdata(CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED as carbondata 
TBLPROPERTIES('local_dictionary_enable'='true','local_dictionary_threshold'='1000');
Time taken: 0.753 seconds
spark-sql> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into 
table uniqdata OPTIONS('DELIMITER'=',', 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
OK
OK
Time taken: 1.992 seconds
spark-sql> CREATE MATERIALIZED VIEW mv1 as select cust_id, cust_name, 
count(cust_id) from uniqdata group by cust_id, cust_name;
OK
Time taken: 4.336 seconds

 

From the Presto CLI, a select filter query on the table with MV fails:

presto:chetan> select * from uniqdata where CUST_ID IS NULL or BIGINT_COLUMN1 
=1233720368578 or DECIMAL_COLUMN1 = 12345678901.123458 or Double_COLUMN1 = 
1.12345674897976E10 or INTEGER_COLUMN1 IS NULL ;
Query 20200804_092703_00253_ed34h failed: Unable to get file status:

*Log-*
2020-08-04T18:09:55.975+0800 INFO Query-20200804_100955_00300_ed34h-2642 stdout 
2020-08-04 18:09:55 WARN AbstractDFSCarbonFile:458 - Exception occurred: File 
hdfs://hacluster/user/sparkhive/warehouse/chetan.db/uniqdata_string/Metadata 
does not exist.
java.io.FileNotFoundException: File 
hdfs://hacluster/user/sparkhive/warehouse/chetan.db/uniqdata_string/Metadata 
does not exist.
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:1058)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$1000(DistributedFileSystem.java:131)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1118)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$24.doCall(DistributedFileSystem.java:1115)
 at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:1125)
 at org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:270)
 at 
org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.listFiles(AbstractDFSCarbonFile.java:456)
 at 
org.apache.carbondata.core.datastore.filesystem.AbstractDFSCarbonFile.listFiles(AbstractDFSCarbonFile.java:559)
 at 
org.apache.carbondata.core.util.path.CarbonTablePath.getActualSchemaFilePath(CarbonTablePath.java:189)
 at 
org.apache.carbondata.core.util.path.CarbonTablePath.getSchemaFilePath(CarbonTablePath.java:168)
 at 
org.apache.carbondata.presto.impl.CarbonTableReader.updateSchemaTables(CarbonTableReader.java:147)
 at 
org.apache.carbondata.presto.impl.CarbonTableReader.getCarbonCache(CarbonTableReader.java:128)
 at 
org.apache.carbondata.presto.CarbondataSplitManager.getSplits(CarbondataSplitManager.java:145)
 at 
io.prestosql.spi.connector.classloader.ClassLoaderSafeConnectorSplitManager.getSplits(ClassLoaderSafeConnectorSplitManager.java:50)
 at io.prestosql.split.SplitManager.getSplits(SplitManager.java:85)
 at 
io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitScanAndFilter(DistributedExecutionPlanner.java:189)
 at 
io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitFilter(DistributedExecutionPlanner.java:257)
 at 
io.prestosql.sql.planner.DistributedExecutionPlanner$Visitor.visitFilter(DistributedExecutionPlanner.java:149)
 at io.prestosql.sql.planner.plan.FilterNode.accept(FilterNode.java:72)
 at 
io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:119)
 at 
io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:124)
 at 
io.prestosql.sql.planner.DistributedExecutionPlanner.plan(DistributedExecutionPlanner.java:96)
 at 
io.prestosql.execution.SqlQueryExecution.planDistribution(SqlQueryExecution.java:425)
 at io.prestosql.execution.SqlQueryExecution.start(SqlQueryExecution.java:321)
 at io.prestosql.$gen.Presto_31620200804_042858_1.run(Unknown Source)
 at io.prestosql.execution.SqlQueryManager.createQuery(SqlQueryManager.java:239)
 at 
io.prestosql.dispatcher.LocalDispatchQuery.lambda$startExecution$4(LocalDispatchQuery.java:105)
 at 

[GitHub] [carbondata] nihal0107 commented on pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


nihal0107 commented on pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#issuecomment-671727875


   retest this please.







[GitHub] [carbondata] Karan980 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


Karan980 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671729420


   retest this please







[GitHub] [carbondata] xubo245 commented on a change in pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


xubo245 commented on a change in pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#discussion_r468283508



##
File path: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonInsertFromStageCommand.scala
##
@@ -499,25 +499,31 @@ case class CarbonInsertFromStageCommand(
* return the loading files failed to create
*/
   private def createStageLoadingFiles(
+  stagePath: String,
   executorService: ExecutorService,
   stageFiles: Array[(CarbonFile, CarbonFile)]): Array[(CarbonFile, 
CarbonFile)] = {
 stageFiles.map { files =>
   executorService.submit(new Callable[(CarbonFile, CarbonFile, Boolean)] {
 override def call(): (CarbonFile, CarbonFile, Boolean) = {
-  // Get the loading files path
-  val stageLoadingFile =
-FileFactory.getCarbonFile(files._1.getAbsolutePath +
-  CarbonTablePath.LOADING_FILE_SUFFIX);
-  // Try to create loading files
-  // make isFailed to be true if createNewFile return false.
-  // the reason can be file exists or exceptions.
-  var isFailed = !stageLoadingFile.createNewFile()
-  // if file exists, modify the lastModifiedTime of the file.
-  if (isFailed) {
-// make isFailed to be true if setLastModifiedTime return false.
-isFailed = 
!stageLoadingFile.setLastModifiedTime(System.currentTimeMillis());
+  try {
+// Get the loading files path
+val stageLoadingFile =
+  FileFactory.getCarbonFile(stagePath +
+CarbonCommonConstants.FILE_SEPARATOR +
+files._1.getName + CarbonTablePath.LOADING_FILE_SUFFIX);
+// Try to create loading files
+// make isFailed to be true if createNewFile return false.
+// the reason can be file exists or exceptions.
+var isFailed = !stageLoadingFile.createNewFile()
+// if file exists, modify the lastmodifiedtime of the file.
+if (isFailed) {
+  // make isFailed to be true if setLastModifiedTime return false.
+  isFailed = 
!stageLoadingFile.setLastModifiedTime(System.currentTimeMillis());
+}
+(files._1, files._2, isFailed)
+  } catch {
+case _ : Exception => (files._1, files._2, true)

Review comment:
   Why return (files._1, files._2, true) when an exception happens?

##
File path: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonInsertFromStageCommand.scala
##
@@ -499,25 +499,31 @@ case class CarbonInsertFromStageCommand(
* return the loading files failed to create
*/
   private def createStageLoadingFiles(
+  stagePath: String,
   executorService: ExecutorService,
   stageFiles: Array[(CarbonFile, CarbonFile)]): Array[(CarbonFile, 
CarbonFile)] = {
 stageFiles.map { files =>
   executorService.submit(new Callable[(CarbonFile, CarbonFile, Boolean)] {
 override def call(): (CarbonFile, CarbonFile, Boolean) = {
-  // Get the loading files path
-  val stageLoadingFile =
-FileFactory.getCarbonFile(files._1.getAbsolutePath +
-  CarbonTablePath.LOADING_FILE_SUFFIX);
-  // Try to create loading files
-  // make isFailed to be true if createNewFile return false.
-  // the reason can be file exists or exceptions.
-  var isFailed = !stageLoadingFile.createNewFile()
-  // if file exists, modify the lastModifiedTime of the file.
-  if (isFailed) {
-// make isFailed to be true if setLastModifiedTime return false.
-isFailed = 
!stageLoadingFile.setLastModifiedTime(System.currentTimeMillis());
+  try {
+// Get the loading files path
+val stageLoadingFile =
+  FileFactory.getCarbonFile(stagePath +
+CarbonCommonConstants.FILE_SEPARATOR +
+files._1.getName + CarbonTablePath.LOADING_FILE_SUFFIX);
+// Try to create loading files
+// make isFailed to be true if createNewFile return false.
+// the reason can be file exists or exceptions.
+var isFailed = !stageLoadingFile.createNewFile()
+// if file exists, modify the lastmodifiedtime of the file.
+if (isFailed) {
+  // make isFailed to be true if setLastModifiedTime return false.
+  isFailed = 
!stageLoadingFile.setLastModifiedTime(System.currentTimeMillis());
+}
+(files._1, files._2, isFailed)
+  } catch {
+case _ : Exception => (files._1, files._2, true)

Review comment:
   Why return (files._1, files._2, true) when an exception happens?






[GitHub] [carbondata] xubo245 commented on a change in pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


xubo245 commented on a change in pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#discussion_r468283733



##
File path: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonInsertFromStageCommand.scala
##
@@ -499,25 +499,31 @@ case class CarbonInsertFromStageCommand(
* return the loading files failed to create
*/
   private def createStageLoadingFiles(
+  stagePath: String,
   executorService: ExecutorService,
   stageFiles: Array[(CarbonFile, CarbonFile)]): Array[(CarbonFile, 
CarbonFile)] = {
 stageFiles.map { files =>
   executorService.submit(new Callable[(CarbonFile, CarbonFile, Boolean)] {
 override def call(): (CarbonFile, CarbonFile, Boolean) = {
-  // Get the loading files path
-  val stageLoadingFile =
-FileFactory.getCarbonFile(files._1.getAbsolutePath +
-  CarbonTablePath.LOADING_FILE_SUFFIX);
-  // Try to create loading files
-  // make isFailed to be true if createNewFile return false.
-  // the reason can be file exists or exceptions.
-  var isFailed = !stageLoadingFile.createNewFile()
-  // if file exists, modify the lastModifiedTime of the file.
-  if (isFailed) {
-// make isFailed to be true if setLastModifiedTime return false.
-isFailed = 
!stageLoadingFile.setLastModifiedTime(System.currentTimeMillis());
+  try {
+// Get the loading files path
+val stageLoadingFile =
+  FileFactory.getCarbonFile(stagePath +
+CarbonCommonConstants.FILE_SEPARATOR +

Review comment:
   Suggestion: use File.separator instead of CarbonCommonConstants.FILE_SEPARATOR; File.separator supports both Linux and Windows.
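As a minimal sketch of the suggestion (the helper method and the `.loading` suffix string below are illustrative, not the actual CarbonData API), `java.io.File.separator` resolves to the platform's separator character at runtime:

```java
import java.io.File;

public class SeparatorDemo {

    // Hypothetical helper mirroring the path construction in the diff above.
    // File.separator is "/" on Linux and "\" on Windows, so the same code
    // builds a valid local path on either platform.
    static String loadingFilePath(String stagePath, String stageFileName) {
        return stagePath + File.separator + stageFileName + ".loading";
    }

    public static void main(String[] args) {
        System.out.println(loadingFilePath("/tmp/stage", "stage_00001"));
    }
}
```

One possible trade-off: a constant hard-coded to "/" behaves the same on HDFS-style paths regardless of the local OS, which may be why the project constant exists; the reviewer's point applies to local file systems.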









[GitHub] [carbondata] xubo245 commented on pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


xubo245 commented on pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#issuecomment-671681041


   reviewed, please check it.







[GitHub] [carbondata] xubo245 edited a comment on pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


xubo245 edited a comment on pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#issuecomment-671680064


   Please optimize the PR title and description (insertstage).







[GitHub] [carbondata] xubo245 commented on a change in pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


xubo245 commented on a change in pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#discussion_r468283782



##
File path: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonInsertFromStageCommand.scala
##
@@ -557,25 +564,30 @@ case class CarbonInsertFromStageCommand(
* Return the files failed to delete
*/
   private def deleteStageFiles(
+  stagePath: String,
   executorService: ExecutorService,
   stageFiles: Array[(CarbonFile, CarbonFile)]): Array[(CarbonFile, 
CarbonFile)] = {
 stageFiles.map { files =>
   executorService.submit(new Callable[(CarbonFile, CarbonFile, Boolean)] {
 override def call(): (CarbonFile, CarbonFile, Boolean) = {
   // Delete three types of file: stage|.success|.loading
-  val stageLoadingFile =
-FileFactory.getCarbonFile(files._1.getAbsolutePath
-  + CarbonTablePath.LOADING_FILE_SUFFIX);
-  var isFailed = false
-  // If delete() return false, maybe the reason is FileNotFount or 
FileFailedClean.
-  // Considering FileNotFound means file clean successfully.
-  // We need double check the file exists or not when delete() return 
false.
-  if (!(files._1.delete() && files._2.delete() && 
stageLoadingFile.delete())) {
-// If the file still exists,  make isFailed to be true
-// So we can retry to delete this file.
-isFailed = files._1.exists() || files._1.exists() || 
stageLoadingFile.exists()
+  try {
+val stageLoadingFile = FileFactory.getCarbonFile(stagePath +
+  CarbonCommonConstants.FILE_SEPARATOR +
+  files._1.getName + CarbonTablePath.LOADING_FILE_SUFFIX);
+var isFailed = false
+// If delete() return false, maybe the reason is FileNotFount or 
FileFailedClean.
+// Considering FileNotFound means FileCleanSucessfully.
+// We need double check the file exists or not when delete() 
return false.
+if (!files._1.delete() || !files._2.delete() || 
!stageLoadingFile.delete()) {
+  // If the file still exists,  make isFailed to be true
+  // So we can retry to delete this file.
+  isFailed = files._1.exists() || files._1.exists() || 
stageLoadingFile.exists()
+}
+(files._1, files._2, isFailed)
+  } catch {
+case _: Exception => (files._1, files._2, true)

Review comment:
   Why return (files._1, files._2, true) when an exception happens?









[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3819: [CARBONDATA-3855]support carbon SDK to load data from different files

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3819:
URL: https://github.com/apache/carbondata/pull/3819#issuecomment-671730400


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1952/
   







[GitHub] [carbondata] xubo245 commented on a change in pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


xubo245 commented on a change in pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#discussion_r468283023



##
File path: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonInsertFromStageCommand.scala
##
@@ -557,25 +564,30 @@ case class CarbonInsertFromStageCommand(
* Return the files failed to delete
*/
   private def deleteStageFiles(
+  stagePath: String,
   executorService: ExecutorService,
   stageFiles: Array[(CarbonFile, CarbonFile)]): Array[(CarbonFile, 
CarbonFile)] = {
 stageFiles.map { files =>
   executorService.submit(new Callable[(CarbonFile, CarbonFile, Boolean)] {
 override def call(): (CarbonFile, CarbonFile, Boolean) = {
   // Delete three types of file: stage|.success|.loading
-  val stageLoadingFile =
-FileFactory.getCarbonFile(files._1.getAbsolutePath
-  + CarbonTablePath.LOADING_FILE_SUFFIX);
-  var isFailed = false
-  // If delete() return false, maybe the reason is FileNotFount or 
FileFailedClean.
-  // Considering FileNotFound means file clean successfully.
-  // We need double check the file exists or not when delete() return 
false.
-  if (!(files._1.delete() && files._2.delete() && 
stageLoadingFile.delete())) {
-// If the file still exists,  make isFailed to be true
-// So we can retry to delete this file.
-isFailed = files._1.exists() || files._1.exists() || 
stageLoadingFile.exists()
+  try {
+val stageLoadingFile = FileFactory.getCarbonFile(stagePath +
+  CarbonCommonConstants.FILE_SEPARATOR +

Review comment:
   Suggestion: use File.separator instead of CarbonCommonConstants.FILE_SEPARATOR; File.separator supports both Linux and Windows.









[GitHub] [carbondata] xubo245 commented on pull request #3886: [CARBONDATA-3944] Delete stage files was interrupted when IOException…

2020-08-10 Thread GitBox


xubo245 commented on pull request #3886:
URL: https://github.com/apache/carbondata/pull/3886#issuecomment-671680064


   Please optimize the PR title.







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3775: [CARBONDATA-3948] Fix for Bloom Index create failure

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3775:
URL: https://github.com/apache/carbondata/pull/3775#issuecomment-671211837


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3670/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3775: [CARBONDATA-3948] Fix for Bloom Index create failure

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3775:
URL: https://github.com/apache/carbondata/pull/3775#issuecomment-671213118


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1931/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3858: [CARBONDATA-3919] Improve concurrent query performance

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3858:
URL: https://github.com/apache/carbondata/pull/3858#issuecomment-671218383


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3672/
   







[GitHub] [carbondata] VenuReddy2103 commented on a change in pull request #3873: [WIP] Repair SI Command

2020-08-10 Thread GitBox


VenuReddy2103 commented on a change in pull request #3873:
URL: https://github.com/apache/carbondata/pull/3873#discussion_r467745302



##
File path: 
core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
##
@@ -2221,6 +2221,18 @@ private CarbonCommonConstants() {
*/
   public static final String FACT_FILE_UPDATED = "update";
 
+  /**
+   * Configured property to enable/disable Load Failed Segments in SI Table 
during Load command
+   */
+  @CarbonProperty

Review comment:
   Mark it as `(dynamicConfigurable = true)`









[GitHub] [carbondata] VenuReddy2103 commented on a change in pull request #3873: [WIP] Repair SI Command

2020-08-10 Thread GitBox


VenuReddy2103 commented on a change in pull request #3873:
URL: https://github.com/apache/carbondata/pull/3873#discussion_r467745302



##
File path: 
core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
##
@@ -2221,6 +2221,18 @@ private CarbonCommonConstants() {
*/
   public static final String FACT_FILE_UPDATED = "update";
 
+  /**
+   * Configured property to enable/disable Load Failed Segments in SI Table 
during Load command
+   */
+  @CarbonProperty

Review comment:
   Mark it as (dynamicConfigurable = true)









[GitHub] [carbondata] VenuReddy2103 commented on a change in pull request #3873: [WIP] Repair SI Command

2020-08-10 Thread GitBox


VenuReddy2103 commented on a change in pull request #3873:
URL: https://github.com/apache/carbondata/pull/3873#discussion_r467745944



##
File path: 
core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
##
@@ -2221,6 +2221,18 @@ private CarbonCommonConstants() {
*/
   public static final String FACT_FILE_UPDATED = "update";
 
+  /**
+   * Configured property to enable/disable Load Failed Segments in SI Table 
during Load command
+   */
+  @CarbonProperty
+  public static final String CARBON_LOAD_SI_REPAIR = "carbon.load.si.repair";
+
+  /**
+   * Default value for Load Failed segments in SI table during load command 
property
+   */
+  @CarbonProperty

Review comment:
   This is just a default value, not the property. Please remove the `@CarbonProperty` annotation.









[GitHub] [carbondata] Indhumathi27 commented on a change in pull request #3874: [CARBONDATA-3931]Fix Secondary index with index column as DateType giving wrong results

2020-08-10 Thread GitBox


Indhumathi27 commented on a change in pull request #3874:
URL: https://github.com/apache/carbondata/pull/3874#discussion_r467745874



##
File path: 
integration/spark/src/main/scala/org/apache/spark/sql/secondaryindex/query/SecondaryIndexQueryResultProcessor.java
##
@@ -249,10 +249,17 @@ private void processResult(List> 
detailQueryResultItera
   private Object[] prepareRowObjectForSorting(Object[] row) {
 ByteArrayWrapper wrapper = (ByteArrayWrapper) row[0];
 // ByteBuffer[] noDictionaryBuffer = new ByteBuffer[noDictionaryCount];
-
 List dimensions = segmentProperties.getDimensions();
 Object[] preparedRow = new Object[dimensions.size() + measureCount];
 
+// get dictionary values for date type
+byte[] dictionaryKey = wrapper.getDictionaryKey();
+int[] keyArray = ByteUtil.convertBytesToIntArray(dictionaryKey);
+Object[] dictionaryValues = new Object[dimensionColumnCount + 
measureCount];
+for (int i = 0; i < keyArray.length; i++) {
+  dictionaryValues[i] = keyArray[i];

Review comment:
   removed









[GitHub] [carbondata] VenuReddy2103 commented on a change in pull request #3873: [WIP] Repair SI Command

2020-08-10 Thread GitBox


VenuReddy2103 commented on a change in pull request #3873:
URL: https://github.com/apache/carbondata/pull/3873#discussion_r467746492



##
File path: 
core/src/main/java/org/apache/carbondata/core/util/CarbonProperties.java
##
@@ -2076,4 +2076,17 @@ public static boolean isAuditEnabled() {
   public static void setAuditEnabled(boolean enabled) {
 getInstance().addProperty(CarbonCommonConstants.CARBON_ENABLE_AUDIT, 
String.valueOf(enabled));
   }
+
+  public boolean isSIRepairEnabledInLoad() {
+String configuredValue = 
getSessionPropertyValue(CarbonCommonConstants.CARBON_LOAD_SI_REPAIR);
+if (configuredValue == null) {
+  // if not set in session properties then check carbon.properties for the 
same
+  configuredValue = 
getProperty(CarbonCommonConstants.CARBON_LOAD_SI_REPAIR);

Review comment:
   `getProperty(String key)` will internally call `getSessionPropertyValue(key)` to get the session-level configured value if available, otherwise it returns the carbon-level value, so remove the explicit call to `getSessionPropertyValue()`.
   Also suggest using `getProperty(String key, String defaultValue)` instead of `getProperty(String key)`; we can avoid these null checks.
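A minimal sketch of the lookup order being described (the class below and the default value "true" are illustrative stand-ins, not the real CarbonProperties implementation): session-level values shadow carbon-level ones, and the two-argument overload makes null checks unnecessary.

```java
import java.util.Properties;

public class PropertyLookupDemo {

    // Stand-ins for the session-level store and the carbon.properties store.
    final Properties sessionProps = new Properties();
    final Properties carbonProps = new Properties();

    // Mirrors the described behavior: session value first, then the
    // carbon-level value, then the caller-supplied default -- so the
    // caller never sees null.
    String getProperty(String key, String defaultValue) {
        String sessionValue = sessionProps.getProperty(key);
        if (sessionValue != null) {
            return sessionValue;
        }
        return carbonProps.getProperty(key, defaultValue);
    }

    boolean isSIRepairEnabledInLoad() {
        // Using "true" as the default is an assumption for illustration.
        return Boolean.parseBoolean(getProperty("carbon.load.si.repair", "true"));
    }

    public static void main(String[] args) {
        PropertyLookupDemo props = new PropertyLookupDemo();
        System.out.println(props.isSIRepairEnabledInLoad()); // default applies
        props.sessionProps.setProperty("carbon.load.si.repair", "false");
        System.out.println(props.isSIRepairEnabledInLoad()); // session value wins
    }
}
```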









[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671219121


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1932/
   







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671220336


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbonPRBuilder2.3/3671/
   







[GitHub] [carbondata] VenuReddy2103 commented on a change in pull request #3873: [WIP] Repair SI Command

2020-08-10 Thread GitBox


VenuReddy2103 commented on a change in pull request #3873:
URL: https://github.com/apache/carbondata/pull/3873#discussion_r467749124



##
File path: docs/index/secondary-index-guide.md
##
@@ -188,4 +188,13 @@ where we have old stores.
 Syntax
   ```
   REGISTER INDEX TABLE index_name ON [TABLE] [db_name.]table_name
+  ```
+
+### Repair index Command
+This command is used to reload segments in the SI table in case when there is some mismatch in the number
+of segments in main table and the SI table

Review comment:
   looks to be a formatting issue









[GitHub] [carbondata] VenuReddy2103 commented on a change in pull request #3873: [WIP] Repair SI Command

2020-08-10 Thread GitBox


VenuReddy2103 commented on a change in pull request #3873:
URL: https://github.com/apache/carbondata/pull/3873#discussion_r467748504



##
File path: core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
##
@@ -155,6 +141,12 @@ private boolean validateKeyValue(String key, String value) 
throws InvalidConfigu
   case CARBON_PUSH_ROW_FILTERS_FOR_VECTOR:
   case CARBON_ENABLE_INDEX_SERVER:
   case CARBON_QUERY_STAGE_INPUT:
+  case CARBON_LOAD_SI_REPAIR:
+isValid = CarbonUtil.validateBoolean(value);

Review comment:
The CARBON_LOAD_SI_REPAIR switch case can be just a fall-through without 
its own case block, same as the switch cases above that share a case block.
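The fall-through pattern the reviewer describes can be illustrated with a minimal sketch. The validator below is a simplified stand-in for the actual `SessionParams`/`CarbonUtil` code, and the literal key strings are illustrative, not the real constant values:

```java
public class KeyValidatorDemo {
    // Simplified stand-in for CarbonUtil.validateBoolean.
    static boolean validateBoolean(String value) {
        return "true".equalsIgnoreCase(value) || "false".equalsIgnoreCase(value);
    }

    // Keys that share the same validation are grouped as fall-through cases,
    // so the shared case block appears only once.
    static boolean validateKeyValue(String key, String value) {
        switch (key) {
            case "carbon.push.rowfilters.for.vector":
            case "carbon.enable.index.server":
            case "carbon.query.stage.input":
            case "carbon.load.si.repair": // falls through to the shared block
                return validateBoolean(value);
            default:
                return true; // keys without special validation
        }
    }

    public static void main(String[] args) {
        System.out.println(validateKeyValue("carbon.load.si.repair", "true")); // true
        System.out.println(validateKeyValue("carbon.load.si.repair", "yes"));  // false
    }
}
```

Adding a duplicate `isValid = CarbonUtil.validateBoolean(value);` under the new case would work, but the fall-through keeps one copy of the shared logic.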









[GitHub] [carbondata] ajantha-bhat commented on pull request #3858: [CARBONDATA-3919] Improve concurrent query performance

2020-08-10 Thread GitBox


ajantha-bhat commented on pull request #3858:
URL: https://github.com/apache/carbondata/pull/3858#issuecomment-671226769


   retest this please







[GitHub] [carbondata] CarbonDataQA1 commented on pull request #3858: [CARBONDATA-3919] Improve concurrent query performance

2020-08-10 Thread GitBox


CarbonDataQA1 commented on pull request #3858:
URL: https://github.com/apache/carbondata/pull/3858#issuecomment-671226040


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12545/job/ApacheCarbon_PR_Builder_2.4.5/1933/
   







[GitHub] [carbondata] Karan980 commented on pull request #3876: TestingCI

2020-08-10 Thread GitBox


Karan980 commented on pull request #3876:
URL: https://github.com/apache/carbondata/pull/3876#issuecomment-671225892


   retest this please







[GitHub] [carbondata] QiangCai commented on pull request #3778: [WIP][CARBONDATA-3916] Support array complex type with SI

2020-08-10 Thread GitBox


QiangCai commented on pull request #3778:
URL: https://github.com/apache/carbondata/pull/3778#issuecomment-671194649


   please rebase it


