[GitHub] carbondata issue #2830: [CARBONDATA-3025] Added CLI enhancements

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2830
  
Build Failed with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9270/



---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228042421
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/encoding/EncodingFactory.java
 ---
@@ -66,6 +66,14 @@ public abstract ColumnPageEncoder createEncoder(TableSpec.ColumnSpec columnSpec,
    */
   public ColumnPageDecoder createDecoder(List<Encoding> encodings, List<ByteBuffer> encoderMetas,
       String compressor) throws IOException {
+    return createDecoder(encodings, encoderMetas, compressor, false);
+  }
+
+  /**
+   * Return new decoder based on encoder metadata read from file
--- End diff --

added comment


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228042373
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/encoding/EncodingFactory.java
 ---
@@ -66,6 +66,14 @@ public abstract ColumnPageEncoder createEncoder(TableSpec.ColumnSpec columnSpec,
    */
   public ColumnPageDecoder createDecoder(List<Encoding> encodings, List<ByteBuffer> encoderMetas,
       String compressor) throws IOException {
+    return createDecoder(encodings, encoderMetas, compressor, false);
+  }
+
+  /**
+   * Return new decoder based on encoder metadata read from file
--- End diff --

added comment


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread ajantha-bhat
Github user ajantha-bhat commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228041857
  
--- Diff: store/CSDK/main.cpp ---
@@ -21,6 +21,7 @@
 #include 
--- End diff --

Can we move this to test? Because this is not product code.


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228041948
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/VarLengthColumnPageBase.java
 ---
@@ -176,7 +179,7 @@ private static ColumnPage getDecimalColumnPage(TableSpec.ColumnSpec columnSpec,
       rowOffset.putInt(counter, offset);
 
       VarLengthColumnPageBase page;
-      if (unsafe) {
+      if (unsafe && !meta.isFillCompleteVector()) {
--- End diff --

ok


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228041838
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/VarLengthColumnPageBase.java
 ---
@@ -176,7 +179,7 @@ private static ColumnPage getDecimalColumnPage(TableSpec.ColumnSpec columnSpec,
       rowOffset.putInt(counter, offset);
 
       VarLengthColumnPageBase page;
-      if (unsafe) {
+      if (unsafe && !meta.isFillCompleteVector()) {
--- End diff --

ok


---


[GitHub] carbondata issue #2830: [CARBONDATA-3025] Added CLI enhancements

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2830
  
Build Failed with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1004/



---


[GitHub] carbondata issue #2829: [CARBONDATA-3025] add more metadata in carbon file fo...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2829
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1002/



---


[GitHub] carbondata issue #2852: [WIP] Column Schema objects are present in Driver eve...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2852
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1001/



---


[GitHub] carbondata issue #2816: [CARBONDATA-3003] Support read batch row in CSDK

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2816
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1003/



---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread ajantha-bhat
Github user ajantha-bhat commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228040809
  
--- Diff: 
store/sdk/src/test/java/org/apache/carbondata/sdk/file/CarbonReaderTest.java ---
@@ -1522,4 +1522,208 @@ public boolean accept(File dir, String name) {
   e.printStackTrace();
 }
   }
+
+   @Test
+  public void testReadNextRowWithRowUtil() {
--- End diff --

As the class is not required, test cases are not required for it either.


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread ajantha-bhat
Github user ajantha-bhat commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228040609
  
--- Diff: 
store/sdk/src/main/java/org/apache/carbondata/sdk/file/RowUtil.java ---
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.sdk.file;
--- End diff --

I think we don't need this class at all. It is just a typecast: if the user 
already knows the data type, they can typecast directly instead of calling a 
method that does the typecast.

Please remove this call.
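
For illustration, a minimal sketch of the comparison being made (assuming a reader whose first projected column is a STRING; `RowUtil.getString` is the helper added by this PR, the direct cast is the suggested alternative):

    import org.apache.carbondata.sdk.file.CarbonReader;
    import org.apache.carbondata.sdk.file.RowUtil;

    public class RowUtilVsCast {
      static void printFirstColumn(CarbonReader reader) throws Exception {
        while (reader.hasNext()) {
          Object[] data = (Object[]) reader.readNextRow();
          String viaHelper = RowUtil.getString(data, 0); // helper method from this PR
          String viaCast = (String) data[0];             // plain typecast, as suggested
          System.out.println(viaHelper.equals(viaCast)); // both yield the same value
        }
      }
    }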


---


[GitHub] carbondata issue #2830: [CARBONDATA-3025] Added CLI enhancements

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2830
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1217/



---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228040371
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/encoding/ColumnPageEncoderMeta.java
 ---
@@ -49,6 +49,8 @@
   // Make it protected for RLEEncoderMeta
   protected String compressorName;
 
+  private transient boolean fillCompleteVector;
--- End diff --

ok


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228040196
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/SafeDecimalColumnPage.java
 ---
@@ -193,6 +193,30 @@ public void convertValue(ColumnPageValueConverter codec) {
     }
   }
 
+  @Override public byte[] getBytePage() {
--- End diff --

ok


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228040088
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/chunk/store/impl/safe/AbstractNonDictionaryVectorFiller.java
 ---
@@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.core.datastore.chunk.store.impl.safe;
+
+import java.nio.ByteBuffer;
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.metadata.datatype.DataType;
+import org.apache.carbondata.core.metadata.datatype.DataTypes;
+import org.apache.carbondata.core.scan.result.vector.CarbonColumnVector;
+import org.apache.carbondata.core.util.ByteUtil;
+import org.apache.carbondata.core.util.DataTypeUtil;
+
+public abstract class AbstractNonDictionaryVectorFiller {
+
+  protected int lengthSize;
+  protected int numberOfRows;
+
+  public AbstractNonDictionaryVectorFiller(int lengthSize, int numberOfRows) {
+    this.lengthSize = lengthSize;
+    this.numberOfRows = numberOfRows;
+  }
+
+  public abstract void fillVector(byte[] data, CarbonColumnVector vector, ByteBuffer buffer);
+
+  public int getLengthFromBuffer(ByteBuffer buffer) {
+    return buffer.getShort();
+  }
+}
+
+class NonDictionaryVectorFillerFactory {
+
+  public static AbstractNonDictionaryVectorFiller getVectorFiller(DataType type, int lengthSize,
+      int numberOfRows) {
+    if (type == DataTypes.STRING) {
+      return new StringVectorFiller(lengthSize, numberOfRows);
+    } else if (type == DataTypes.VARCHAR) {
+      return new LongStringVectorFiller(lengthSize, numberOfRows);
+    } else if (type == DataTypes.TIMESTAMP) {
+      return new TimeStampVectorFiller(lengthSize, numberOfRows);
+    } else if (type == DataTypes.BOOLEAN) {
+      return new BooleanVectorFiller(lengthSize, numberOfRows);
+    } else if (type == DataTypes.SHORT) {
+      return new ShortVectorFiller(lengthSize, numberOfRows);
+    } else if (type == DataTypes.INT) {
+      return new IntVectorFiller(lengthSize, numberOfRows);
+    } else if (type == DataTypes.LONG) {
+      return new LongVectorFiller(lengthSize, numberOfRows);
+    } else {
+      throw new UnsupportedOperationException("Not supported datatype : " + type);
+    }
+
+  }
+
+}
+
+class StringVectorFiller extends AbstractNonDictionaryVectorFiller {
+
+  public StringVectorFiller(int lengthSize, int numberOfRows) {
+    super(lengthSize, numberOfRows);
+  }
+
+  @Override
+  public void fillVector(byte[] data, CarbonColumnVector vector, ByteBuffer buffer) {
+    // start position will be used to store the current data position
+    int startOffset = 0;
+    // the first position starts after the length bytes, because the length of each value
+    // is stored first in the memory block; we need to skip those first two bytes, which
+    // hold the length of the data
+    int currentOffset = lengthSize;
+    ByteUtil.UnsafeComparer comparator = ByteUtil.UnsafeComparer.INSTANCE;
+    for (int i = 0; i < numberOfRows - 1; i++) {
+      buffer.position(startOffset);
+      startOffset += getLengthFromBuffer(buffer) + lengthSize;
+      int length = startOffset - (currentOffset);
+      if (comparator.equals(CarbonCommonConstants.MEMBER_DEFAULT_VAL_ARRAY, 0,
+          CarbonCommonConstants.MEMBER_DEFAULT_VAL_ARRAY.length, data, currentOffset, length)) {
+        vector.putNull(i);
+      } else {
+        vector.putByteArray(i, currentOffset, length, data);
+      }
+      currentOffset = startOffset + lengthSize;
+    }
+    // Handle last row
+    int length = (data.length - currentOffset);
+    if (comparator.equals(CarbonCommonConstants.MEMBER_DEFAULT_VAL_ARRAY, 0,

---

[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228039991
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/chunk/store/impl/safe/AbstractNonDictionaryVectorFiller.java
 ---
@@ -0,0 +1,278 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.core.datastore.chunk.store.impl.safe;
+
+import java.nio.ByteBuffer;
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.metadata.datatype.DataType;
+import org.apache.carbondata.core.metadata.datatype.DataTypes;
+import org.apache.carbondata.core.scan.result.vector.CarbonColumnVector;
+import org.apache.carbondata.core.util.ByteUtil;
+import org.apache.carbondata.core.util.DataTypeUtil;
+
+public abstract class AbstractNonDictionaryVectorFiller {
--- End diff --

ok


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228039614
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 ---
@@ -1845,6 +1845,18 @@
   public static final int CARBON_MINMAX_ALLOWED_BYTE_COUNT_MIN = 10;
   public static final int CARBON_MINMAX_ALLOWED_BYTE_COUNT_MAX = 1000;
 
+  /**
+   * When enabled, complete row filters will be handled by carbon in case of vector.
+   * If it is disabled then only page level pruning will be done by carbon and row level
+   * filtering will be done by spark for vector.
+   * There is no change in flow for non-vector based queries.
--- End diff --

Will make it false by default in another pending PR. Since this PR is focused 
only on full scan, many tests would fail otherwise; that's why it defaults to true.


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread ajantha-bhat
Github user ajantha-bhat commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228039418
  
--- Diff: docs/csdk-guide.md ---
@@ -68,20 +68,42 @@ JNIEnv *initJVM() {
 bool readFromLocalWithoutProjection(JNIEnv *env) {
 
 CarbonReader carbonReaderClass;
-carbonReaderClass.builder(env, "../resources/carbondata", "test");
+carbonReaderClass.builder(env, "../resources/carbondata");
--- End diff --

main.cpp already has these same examples. Please give a link to that file and 
remove them from here, so that if any future change happens we don't have to 
update two places and keep duplicate code samples.


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228039285
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/chunk/impl/VariableLengthDimensionColumnPage.java
 ---
@@ -54,10 +75,15 @@ public VariableLengthDimensionColumnPage(byte[] dataChunks, int[] invertedIndex,
     }
     dataChunkStore = DimensionChunkStoreFactory.INSTANCE
         .getDimensionChunkStore(0, isExplicitSorted, numberOfRows, totalSize, dimStoreType,
-            dictionary);
-    dataChunkStore.putArray(invertedIndex, invertedIndexReverse, dataChunks);
+            dictionary, vectorInfo != null);
+    if (vectorInfo != null) {
+      dataChunkStore.fillVector(invertedIndex, invertedIndexReverse, dataChunks, vectorInfo);
+    } else {
+      dataChunkStore.putArray(invertedIndex, invertedIndexReverse, dataChunks);
+    }
   }
 
+
--- End diff --

ok


---


[GitHub] carbondata pull request #2819: [CARBONDATA-3012] Added support for full scan...

2018-10-24 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2819#discussion_r228038986
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/chunk/impl/MeasureRawColumnChunk.java
 ---
@@ -105,6 +106,22 @@ public ColumnPage convertToColumnPageWithOutCache(int index) {
     }
   }
 
+  /**
+   * Convert raw data with specified page number processed to DimensionColumnDataChunk and fill
+   * the vector
+   *
+   * @param pageNumber page number to decode and fill the vector
+   * @param vectorInfo vector to be filled with column page
+   */
+  public void convertToColumnPageAndFillVector(int pageNumber, ColumnVectorInfo vectorInfo) {
+    assert pageNumber < pagesCount;
+    try {
+      chunkReader.decodeColumnPageAndFillVector(this, pageNumber, vectorInfo);
+    } catch (IOException | MemoryException e) {
+      throw new RuntimeException(e);
--- End diff --

Because those are checked exceptions, we would need to handle and re-throw the 
same exceptions all the way up to the callers.
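
For illustration, a minimal self-contained sketch of that trade-off (plain Java, not CarbonData code): propagating checked exceptions widens every caller's signature, while wrapping them keeps intermediate signatures clean.

    import java.io.IOException;

    public class CheckedVsUnchecked {

      // Option 1: propagate - the checked exception must appear in this
      // signature and in the signature of every caller up the chain.
      static void decodePropagating() throws IOException {
        throw new IOException("decode failed");
      }

      // Option 2: wrap - callers need no throws clause, at the cost of
      // losing compile-time checking.
      static void decodeWrapping() {
        try {
          decodePropagating();
        } catch (IOException e) {
          throw new RuntimeException(e);
        }
      }

      public static void main(String[] args) {
        try {
          decodeWrapping();
        } catch (RuntimeException e) {
          System.out.println("caught wrapped: " + e.getCause());
        }
      }
    }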


---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1213/



---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9266/



---


[GitHub] carbondata pull request #2829: [CARBONDATA-3025] add more metadata in carbon ...

2018-10-24 Thread akashrn5
Github user akashrn5 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2829#discussion_r228034556
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 ---
@@ -1845,6 +1845,21 @@
   public static final int CARBON_MINMAX_ALLOWED_BYTE_COUNT_MIN = 10;
   public static final int CARBON_MINMAX_ALLOWED_BYTE_COUNT_MAX = 1000;
 
+  /**
+   * "Written by" detail to be written in the carbondata footer for better maintainability
+   */
+  public static final String CARBON_WRITTEN_BY_FOOTER_INFO = "written_by";
+
+  /**
+   * carbon version detail to be written in the carbondata footer for better maintainability
+   */
+  public static final String CARBON_VERSION_FOOTER_INFO = "version";
--- End diff --

Yes, this suits even better.


---


[GitHub] carbondata pull request #2852: [WIP] Column Schema objects are present in Dri...

2018-10-24 Thread Indhumathi27
GitHub user Indhumathi27 opened a pull request:

https://github.com/apache/carbondata/pull/2852

[WIP] Column Schema objects are present in Driver even after dropping table

**Problem:**
Column Schema objects are present in the Driver even after dropping a table.

**Solution:**
After dropping a table, remove the tableInfo entry from the CarbonMetadata instance.
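
A minimal sketch of what the fix implies (assuming the `CarbonMetadata` singleton's `removeTable` API; the actual patch may differ):

    import org.apache.carbondata.core.metadata.CarbonMetadata;

    public class DropTableCleanup {
      // Evict the cached TableInfo (and with it the ColumnSchema objects) from
      // the driver-side metadata cache so they become eligible for GC.
      // The "<db>_<table>" key format is an assumption for this sketch.
      public static void onTableDropped(String databaseName, String tableName) {
        CarbonMetadata.getInstance().removeTable(databaseName + "_" + tableName);
      }
    }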

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [ ] Testing done
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Indhumathi27/carbondata memory_leak_driver

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2852.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2852


commit 182acdf3b2edd09403303fb15959f4d058e2c759
Author: Indhumathi27 
Date:   2018-10-25T04:46:32Z

Column Schema objects are present in Driver even after dropping table




---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Fix bug for merging ...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9265/



---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Fix bug for merging ...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1212/



---


[jira] [Created] (CARBONDATA-3042) Column Schema objects are present in Driver even after dropping table

2018-10-24 Thread Indhumathi Muthumurugesh (JIRA)
Indhumathi Muthumurugesh created CARBONDATA-3042:


 Summary: Column Schema objects are present in Driver even after 
dropping table
 Key: CARBONDATA-3042
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3042
 Project: CarbonData
  Issue Type: Improvement
Reporter: Indhumathi Muthumurugesh






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #2816: [CARBONDATA-3003] Support read batch row in CS...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2816#discussion_r228028896
  
--- Diff: README.md ---
@@ -61,6 +61,7 @@ CarbonData is built using Apache Maven, to [build 
CarbonData](https://github.com
  * [CarbonData Pre-aggregate 
DataMap](https://github.com/apache/carbondata/blob/master/docs/preaggregate-datamap-guide.md)
 
  * [CarbonData Timeseries 
DataMap](https://github.com/apache/carbondata/blob/master/docs/timeseries-datamap-guide.md)
 
 * [SDK 
Guide](https://github.com/apache/carbondata/blob/master/docs/sdk-guide.md) 
+* [CSDK 
Guide](https://github.com/apache/carbondata/blob/master/docs/CSDK-guide.md)
--- End diff --

ok, done


---


[GitHub] carbondata pull request #2816: [CARBONDATA-3003] Support read batch row in CS...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2816#discussion_r228028346
  
--- Diff: store/CSDK/main.cpp ---
@@ -21,6 +21,8 @@
 #include 
 #include 
 #include "CarbonReader.h"
+#include "CarbonRow.h"
+#include 
 
 using namespace std;
--- End diff --

This is the main file in C/C++, but it is only for test. In the future, CSDK will 
support a test framework (such as googletest) instead of main.cpp.


---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Fix bug for merging ...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/999/



---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1000/



---


[GitHub] carbondata pull request #2816: [CARBONDATA-3003] Support read batch row in CS...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2816#discussion_r228026273
  
--- Diff: store/CSDK/CMakeLists.txt ---
@@ -1,17 +1,17 @@
-cmake_minimum_required (VERSION 2.8)
-project (CJDK)
+cmake_minimum_required(VERSION 2.8)
--- End diff --

ok, added


---


[jira] [Updated] (CARBONDATA-3040) Fix bug for merging bloom index

2018-10-24 Thread jiangmanhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiangmanhua updated CARBONDATA-3040:

Summary: Fix bug for merging bloom index  (was: Add checking before merging 
bloom index)

> Fix bug for merging bloom index
> ---
>
> Key: CARBONDATA-3040
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3040
> Project: CarbonData
>  Issue Type: Bug
>Reporter: jiangmanhua
>Assignee: jiangmanhua
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #2816: [CARBONDATA-3003] Support read batch row in CS...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2816#discussion_r228025690
  
--- Diff: store/CSDK/main.cpp ---
@@ -21,6 +21,8 @@
 #include 
 #include 
 #include "CarbonReader.h"
+#include "CarbonRow.h"
+#include 
--- End diff --

ok, I also changed the others.


---


[GitHub] carbondata pull request #2816: [CARBONDATA-3003] Support read batch row in CS...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2816#discussion_r228025597
  
--- Diff: store/CSDK/CarbonReader.cpp ---
@@ -17,6 +17,7 @@
 
 #include "CarbonReader.h"
 #include 
+#include 
--- End diff --

ok, done


---


[GitHub] carbondata pull request #2816: [CARBONDATA-3003] Support read batch row in CS...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2816#discussion_r228025429
  
--- Diff: store/CSDK/CMakeLists.txt ---
@@ -1,17 +1,17 @@
-cmake_minimum_required (VERSION 2.8)
-project (CJDK)
+cmake_minimum_required(VERSION 2.8)
--- End diff --

I think there is no need to add a license header in this file.
CMakeLists.txt is like pom.xml; it's not a code file.
The TensorFlow and Caffe projects also don't add license headers to 
CMakeLists.txt.


---


[GitHub] carbondata pull request #2816: [CARBONDATA-3003] Support read batch row in CS...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2816#discussion_r228024318
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/scan/result/iterator/ChunkRowIterator.java
 ---
@@ -74,4 +76,13 @@ public ChunkRowIterator(CarbonIterator<RowBatch> iterator) {
     return currentChunk.next();
   }
 
+  /**
+   * get
--- End diff --

ok, optimized


---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
retest this please


---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9264/



---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1211/



---


[GitHub] carbondata issue #2850: [WIP] Added concurrent reading through SDK

2018-10-24 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/2850
  
Emm, but in your implementation most of the work has to be done by the user 
(multi-thread handling). CarbonData itself only splits the input data and 
returns multiple readers. If this is the solution, why not just tell the user to 
generate multiple CarbonReaders by passing only part of the input dir each time 
they create a reader?

In addition to my proposal, I think we can add a buffer for the records. When 
`CarbonReader.next` is called, we retrieve the record from the buffer and refill 
the buffer asynchronously. When `CarbonReader.hasNext` is called, we first check 
the buffer; if it is empty, we then check the recordReader and fill the buffer 
asynchronously.
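
As a rough, self-contained sketch of that buffering idea (illustrative only; `BufferedRecordReader` is not a CarbonData API, and the underlying iterator is assumed never to return null):

    import java.util.Iterator;
    import java.util.NoSuchElementException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class BufferedRecordReader<T> {
      private static final Object POISON = new Object(); // end-of-data marker
      private final BlockingQueue<Object> buffer = new ArrayBlockingQueue<>(1024);
      private Object head; // next record, peeked by hasNext()

      public BufferedRecordReader(Iterator<T> underlying) {
        Thread filler = new Thread(() -> {
          try {
            while (underlying.hasNext()) {
              buffer.put(underlying.next()); // blocks while the buffer is full
            }
            buffer.put(POISON);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        });
        filler.setDaemon(true);
        filler.start();
      }

      public boolean hasNext() throws InterruptedException {
        if (head == null) {
          head = buffer.take(); // waits for the async filler if the buffer is empty
        }
        return head != POISON;
      }

      @SuppressWarnings("unchecked")
      public T next() throws InterruptedException {
        if (!hasNext()) {
          throw new NoSuchElementException();
        }
        T record = (T) head;
        head = null;
        return record;
      }
    }

With something like this, `hasNext`/`next` mostly serve from the buffer while decoding proceeds in the background.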


---


[GitHub] carbondata pull request #2851: [CARBONDATA-3040][BloomDataMap] Add checking ...

2018-10-24 Thread xuchuanyin
Github user xuchuanyin commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2851#discussion_r228010573
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/events/DataMapEvents.scala
 ---
@@ -60,7 +60,8 @@ case class BuildDataMapPreExecutionEvent(sparkSession: 
SparkSession,
  * example: bloom datamap, Lucene datamap
  */
 case class BuildDataMapPostExecutionEvent(sparkSession: SparkSession,
-    identifier: AbsoluteTableIdentifier, segmentIdList: Seq[String], isFromRebuild: Boolean)
+    identifier: AbsoluteTableIdentifier, segmentIdList: Seq[String],
+    isFromRebuild: Boolean, dmName: String)
--- End diff --

You can adjust the sequence of the parameters by moving `dmName` after 
`identifier`. This will make the method easier to understand: for some table's 
datamap, for the corresponding segments, doing rebuild or not.


---


[GitHub] carbondata pull request #2851: [CARBONDATA-3040][BloomDataMap] Add checking ...

2018-10-24 Thread xuchuanyin
Github user xuchuanyin commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2851#discussion_r228010750
  
--- Diff: 
datamap/bloom/src/main/java/org/apache/carbondata/datamap/bloom/BloomIndexFileStore.java
 ---
@@ -70,27 +73,37 @@ public boolean accept(CarbonFile file) {
       }
     });
 
+    // check whether need to merge
     String mergeShardPath = dmSegmentPathString + File.separator + MERGE_BLOOM_INDEX_SHARD_NAME;
     String mergeInprogressFile = dmSegmentPathString + File.separator + MERGE_INPROGRESS_FILE;
     try {
-      // delete mergeShard folder if exists
-      if (FileFactory.isFileExist(mergeShardPath)) {
-        FileFactory.deleteFile(mergeShardPath, FileFactory.getFileType(mergeShardPath));
+      if (shardPaths.length == 0 || FileFactory.isFileExist(mergeShardPath)) {
+        LOGGER.info("No shard data to merge or already merged for path " + mergeShardPath);
--- End diff --

When will this line be reached?


---


[GitHub] carbondata pull request #2851: [CARBONDATA-3040][BloomDataMap] Add checking ...

2018-10-24 Thread xuchuanyin
Github user xuchuanyin commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2851#discussion_r228009613
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/events/MergeBloomIndexEventListener.scala
 ---
@@ -48,8 +48,14 @@ class MergeBloomIndexEventListener extends OperationEventListener with Logging {
       _.getDataMapSchema.getProviderName.equalsIgnoreCase(
         DataMapClassProvider.BLOOMFILTER.getShortName))
 
-    // for load process, filter lazy datamap
-    if (!datamapPostEvent.isFromRebuild) {
+    if (datamapPostEvent.isFromRebuild) {
+      if (null != datamapPostEvent.dmName) {
+        // for rebuild process, event will be called for each datamap
--- End diff --

'for each datamap' or 'only for specific datamap'?


---


[jira] [Created] (CARBONDATA-3041) Provide a global parameter to specify the minimum amount of data loaded by a node

2018-10-24 Thread wangsen (JIRA)
wangsen created CARBONDATA-3041:
---

 Summary: Provide a global parameter to specify the minimum amount 
of data loaded by a node
 Key: CARBONDATA-3041
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3041
 Project: CarbonData
  Issue Type: Improvement
  Components: data-load
Affects Versions: 1.5.0
Reporter: wangsen
Assignee: wangsen
 Fix For: 1.5.1


Provide a global parameter to specify the minimum amount of data loaded by a 
node



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/998/



---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228010100
  
--- Diff: 
store/sdk/src/test/java/org/apache/carbondata/sdk/file/CarbonReaderTest.java ---
@@ -1522,4 +1522,204 @@ public boolean accept(File dir, String name) {
   e.printStackTrace();
 }
   }
+
+   @Test
+  public void testReadNextRowWithRowUtil() {
+String path = "./carbondata";
+try {
+  FileUtils.deleteDirectory(new File(path));
+
+  Field[] fields = new Field[12];
+  fields[0] = new Field("stringField", DataTypes.STRING);
+  fields[1] = new Field("shortField", DataTypes.SHORT);
+  fields[2] = new Field("intField", DataTypes.INT);
+  fields[3] = new Field("longField", DataTypes.LONG);
+  fields[4] = new Field("doubleField", DataTypes.DOUBLE);
+  fields[5] = new Field("boolField", DataTypes.BOOLEAN);
+  fields[6] = new Field("dateField", DataTypes.DATE);
+  fields[7] = new Field("timeField", DataTypes.TIMESTAMP);
+  fields[8] = new Field("decimalField", DataTypes.createDecimalType(8, 
2));
+  fields[9] = new Field("varcharField", DataTypes.VARCHAR);
+  fields[10] = new Field("arrayField", 
DataTypes.createArrayType(DataTypes.STRING));
+  fields[11] = new Field("floatField", DataTypes.FLOAT);
+  Map<String, String> map = new HashMap<>();
+  map.put("complex_delimiter_level_1", "#");
+  CarbonWriter writer = CarbonWriter.builder()
+  .outputPath(path)
+  .withLoadOptions(map)
+  .withCsvInput(new Schema(fields)).build();
+
+  for (int i = 0; i < 10; i++) {
+String[] row2 = new String[]{
+"robot" + (i % 10),
+String.valueOf(i % 1),
+String.valueOf(i),
+String.valueOf(Long.MAX_VALUE - i),
+String.valueOf((double) i / 2),
+String.valueOf(true),
+"2019-03-02",
+"2019-02-12 03:03:34",
+"12.345",
+"varchar",
+"Hello#World#From#Carbon",
+"1.23"
+};
+writer.write(row2);
+  }
+  writer.close();
+
+  File[] dataFiles = new File(path).listFiles(new FilenameFilter() {
+@Override
+public boolean accept(File dir, String name) {
+  if (name == null) {
+return false;
+  }
+  return name.endsWith("carbonindex");
+}
+  });
+  if (dataFiles == null || dataFiles.length < 1) {
+throw new RuntimeException("Carbon index file not exists.");
+  }
+  Schema schema = CarbonSchemaReader
+  .readSchemaInIndexFile(dataFiles[0].getAbsolutePath())
+  .asOriginOrder();
+  // Transform the schema
+  int count = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  count++;
+}
+  }
+  String[] strings = new String[count];
+  int index = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  strings[index] = (schema.getFields())[i].getFieldName();
+  index++;
+}
+  }
+  // Read data
+  CarbonReader reader = CarbonReader
+  .builder(path, "_temp")
+  .projection(strings)
+  .build();
+
+  int i = 0;
+  while (reader.hasNext()) {
+Object[] data = (Object[]) reader.readNextRow();
+
+assert (RowUtil.getString(data, 0).equals("robot" + i));
+assertEquals(RowUtil.getShort(data, 1), i);
+assertEquals(RowUtil.getInt(data, 2), i);
+assertEquals(RowUtil.getLong(data, 3), Long.MAX_VALUE - i);
+assertEquals(RowUtil.getDouble(data, 4), ((double) i) / 2);
+assert (RowUtil.getBoolean(data, 5));
+assertEquals(RowUtil.getInt(data, 6), 17957);
+assert (RowUtil.getDecimal(data, 8).equals("12.35"));
+assert (RowUtil.getVarchar(data, 9).equals("varchar"));
+
+Object[] arr = RowUtil.getArray(data, 10);
+assert (arr[0].equals("Hello"));
+assert (arr[1].equals("World"));
+assert (arr[2].equals("From"));
+assert (arr[3].equals("Carbon"));
+
+assertEquals(RowUtil.getFloat(data, 11), (float) 1.23);
+i++;
+  }
+  reader.close();
+} catch (Throwable e) {
+  e.printStackTrace();
+} finally {
+  try {
+FileUtils.deleteDirectory(new File(path));
+  } catch 

---

[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228009608
  
--- Diff: docs/csdk-guide.md ---
@@ -68,20 +68,42 @@ JNIEnv *initJVM() {
 bool readFromLocalWithoutProjection(JNIEnv *env) {
 
 CarbonReader carbonReaderClass;
-carbonReaderClass.builder(env, "../resources/carbondata", "test");
+carbonReaderClass.builder(env, "../resources/carbondata");
 carbonReaderClass.build();
 
+printf("\nRead data from local without projection:\n");
+
+CarbonRow carbonRow(env);
 while (carbonReaderClass.hasNext()) {
-jobjectArray row = carbonReaderClass.readNextRow();
-jsize length = env->GetArrayLength(row);
+jobject row = carbonReaderClass.readNextCarbonRow();
--- End diff --

ok, done


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228009633
  
--- Diff: docs/csdk-guide.md ---
@@ -106,20 +128,41 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
 // "your endPoint"
 args[2] = argv[3];
 
-reader.builder(env, "s3a://sdk/WriterOutput", "test");
-reader.withHadoopConf(3, args);
+reader.builder(env, "s3a://sdk/WriterOutput/carbondata/", "test");
+reader.withHadoopConf("fs.s3a.access.key", argv[1]);
+reader.withHadoopConf("fs.s3a.secret.key", argv[2]);
+reader.withHadoopConf("fs.s3a.endpoint", argv[3]);
 reader.build();
 printf("\nRead data from S3:\n");
+CarbonRow carbonRow(env);
 while (reader.hasNext()) {
-jobjectArray row = reader.readNextRow();
-jsize length = env->GetArrayLength(row);
-
+jobject row = reader.readNextCarbonRow();
--- End diff --

ok, done


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r228009906
  
--- Diff: 
store/sdk/src/test/java/org/apache/carbondata/sdk/file/CarbonReaderTest.java ---
@@ -1522,4 +1522,204 @@ public boolean accept(File dir, String name) {
   e.printStackTrace();
 }
   }
+
+   @Test
+  public void testReadNextRowWithRowUtil() {
+String path = "./carbondata";
+try {
+  FileUtils.deleteDirectory(new File(path));
+
+  Field[] fields = new Field[12];
+  fields[0] = new Field("stringField", DataTypes.STRING);
+  fields[1] = new Field("shortField", DataTypes.SHORT);
+  fields[2] = new Field("intField", DataTypes.INT);
+  fields[3] = new Field("longField", DataTypes.LONG);
+  fields[4] = new Field("doubleField", DataTypes.DOUBLE);
+  fields[5] = new Field("boolField", DataTypes.BOOLEAN);
+  fields[6] = new Field("dateField", DataTypes.DATE);
+  fields[7] = new Field("timeField", DataTypes.TIMESTAMP);
+  fields[8] = new Field("decimalField", DataTypes.createDecimalType(8, 
2));
+  fields[9] = new Field("varcharField", DataTypes.VARCHAR);
+  fields[10] = new Field("arrayField", 
DataTypes.createArrayType(DataTypes.STRING));
+  fields[11] = new Field("floatField", DataTypes.FLOAT);
+  Map<String, String> map = new HashMap<>();
+  map.put("complex_delimiter_level_1", "#");
+  CarbonWriter writer = CarbonWriter.builder()
+  .outputPath(path)
+  .withLoadOptions(map)
+  .withCsvInput(new Schema(fields)).build();
+
+  for (int i = 0; i < 10; i++) {
+String[] row2 = new String[]{
+"robot" + (i % 10),
+String.valueOf(i % 1),
+String.valueOf(i),
+String.valueOf(Long.MAX_VALUE - i),
+String.valueOf((double) i / 2),
+String.valueOf(true),
+"2019-03-02",
+"2019-02-12 03:03:34",
+"12.345",
+"varchar",
+"Hello#World#From#Carbon",
+"1.23"
+};
+writer.write(row2);
+  }
+  writer.close();
+
+  File[] dataFiles = new File(path).listFiles(new FilenameFilter() {
+@Override
+public boolean accept(File dir, String name) {
+  if (name == null) {
+return false;
+  }
+  return name.endsWith("carbonindex");
+}
+  });
+  if (dataFiles == null || dataFiles.length < 1) {
+throw new RuntimeException("Carbon index file not exists.");
+  }
+  Schema schema = CarbonSchemaReader
+  .readSchemaInIndexFile(dataFiles[0].getAbsolutePath())
+  .asOriginOrder();
+  // Transform the schema
+  int count = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  count++;
+}
+  }
+  String[] strings = new String[count];
+  int index = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  strings[index] = (schema.getFields())[i].getFieldName();
+  index++;
+}
+  }
+  // Read data
+  CarbonReader reader = CarbonReader
+  .builder(path, "_temp")
+  .projection(strings)
+  .build();
+
+  int i = 0;
+  while (reader.hasNext()) {
+Object[] data = (Object[]) reader.readNextRow();
+
+assert (RowUtil.getString(data, 0).equals("robot" + i));
+assertEquals(RowUtil.getShort(data, 1), i);
+assertEquals(RowUtil.getInt(data, 2), i);
+assertEquals(RowUtil.getLong(data, 3), Long.MAX_VALUE - i);
+assertEquals(RowUtil.getDouble(data, 4), ((double) i) / 2);
+assert (RowUtil.getBoolean(data, 5));
+assertEquals(RowUtil.getInt(data, 6), 17957);
+assert (RowUtil.getDecimal(data, 8).equals("12.35"));
+assert (RowUtil.getVarchar(data, 9).equals("varchar"));
+
+Object[] arr = RowUtil.getArray(data, 10);
+assert (arr[0].equals("Hello"));
+assert (arr[1].equals("World"));
+assert (arr[2].equals("From"));
+assert (arr[3].equals("Carbon"));
+
+assertEquals(RowUtil.getFloat(data, 11), (float) 1.23);
+i++;
+  }
+  reader.close();
+} catch (Throwable e) {
+  e.printStackTrace();
--- End diff --

ok, done


---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Add checking before ...

2018-10-24 Thread kevinjmh
Github user kevinjmh commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
description updated


---


[GitHub] carbondata issue #2814: [CARBONDATA-3001] configurable page size in MB

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2814
  
Build Failed with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9263/



---


[GitHub] carbondata issue #2814: [CARBONDATA-3001] configurable page size in MB

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2814
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1210/



---


[GitHub] carbondata issue #2814: [CARBONDATA-3001] configurable page size in MB

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2814
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/997/



---


[GitHub] carbondata issue #2846: [WIP] Added direct fill

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2846
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1208/



---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9262/



---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1209/



---


[GitHub] carbondata pull request #2815: [CARBONDATA-3016] Refactor No Dictionary Dime...

2018-10-24 Thread kumarvishal09
Github user kumarvishal09 closed the pull request at:

https://github.com/apache/carbondata/pull/2815


---


[GitHub] carbondata issue #2815: [CARBONDATA-3016] Refactor No Dictionary Dimension C...

2018-10-24 Thread kumarvishal09
Github user kumarvishal09 commented on the issue:

https://github.com/apache/carbondata/pull/2815
  
Closing this PR, as it is handled as part of #2819; one more PR will be raised 
based on the #2819 interface.


---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread KanakaKumar
Github user KanakaKumar commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
LGTM apart from those nits in guide & ut.


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread KanakaKumar
Github user KanakaKumar commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r227839509
  
--- Diff: 
store/sdk/src/test/java/org/apache/carbondata/sdk/file/CarbonReaderTest.java ---
@@ -1522,4 +1522,204 @@ public boolean accept(File dir, String name) {
   e.printStackTrace();
 }
   }
+
+   @Test
+  public void testReadNextRowWithRowUtil() {
+String path = "./carbondata";
+try {
+  FileUtils.deleteDirectory(new File(path));
+
+  Field[] fields = new Field[12];
+  fields[0] = new Field("stringField", DataTypes.STRING);
+  fields[1] = new Field("shortField", DataTypes.SHORT);
+  fields[2] = new Field("intField", DataTypes.INT);
+  fields[3] = new Field("longField", DataTypes.LONG);
+  fields[4] = new Field("doubleField", DataTypes.DOUBLE);
+  fields[5] = new Field("boolField", DataTypes.BOOLEAN);
+  fields[6] = new Field("dateField", DataTypes.DATE);
+  fields[7] = new Field("timeField", DataTypes.TIMESTAMP);
+  fields[8] = new Field("decimalField", DataTypes.createDecimalType(8, 
2));
+  fields[9] = new Field("varcharField", DataTypes.VARCHAR);
+  fields[10] = new Field("arrayField", 
DataTypes.createArrayType(DataTypes.STRING));
+  fields[11] = new Field("floatField", DataTypes.FLOAT);
+  Map<String, String> map = new HashMap<>();
+  map.put("complex_delimiter_level_1", "#");
+  CarbonWriter writer = CarbonWriter.builder()
+  .outputPath(path)
+  .withLoadOptions(map)
+  .withCsvInput(new Schema(fields)).build();
+
+  for (int i = 0; i < 10; i++) {
+String[] row2 = new String[]{
+"robot" + (i % 10),
+String.valueOf(i % 1),
+String.valueOf(i),
+String.valueOf(Long.MAX_VALUE - i),
+String.valueOf((double) i / 2),
+String.valueOf(true),
+"2019-03-02",
+"2019-02-12 03:03:34",
+"12.345",
+"varchar",
+"Hello#World#From#Carbon",
+"1.23"
+};
+writer.write(row2);
+  }
+  writer.close();
+
+  File[] dataFiles = new File(path).listFiles(new FilenameFilter() {
+@Override
+public boolean accept(File dir, String name) {
+  if (name == null) {
+return false;
+  }
+  return name.endsWith("carbonindex");
+}
+  });
+  if (dataFiles == null || dataFiles.length < 1) {
+throw new RuntimeException("Carbon index file not exists.");
+  }
+  Schema schema = CarbonSchemaReader
+  .readSchemaInIndexFile(dataFiles[0].getAbsolutePath())
+  .asOriginOrder();
+  // Transform the schema
+  int count = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  count++;
+}
+  }
+  String[] strings = new String[count];
+  int index = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  strings[index] = (schema.getFields())[i].getFieldName();
+  index++;
+}
+  }
+  // Read data
+  CarbonReader reader = CarbonReader
+  .builder(path, "_temp")
+  .projection(strings)
+  .build();
+
+  int i = 0;
+  while (reader.hasNext()) {
+Object[] data = (Object[]) reader.readNextRow();
+
+assert (RowUtil.getString(data, 0).equals("robot" + i));
+assertEquals(RowUtil.getShort(data, 1), i);
+assertEquals(RowUtil.getInt(data, 2), i);
+assertEquals(RowUtil.getLong(data, 3), Long.MAX_VALUE - i);
+assertEquals(RowUtil.getDouble(data, 4), ((double) i) / 2);
+assert (RowUtil.getBoolean(data, 5));
+assertEquals(RowUtil.getInt(data, 6), 17957);
+assert (RowUtil.getDecimal(data, 8).equals("12.35"));
+assert (RowUtil.getVarchar(data, 9).equals("varchar"));
+
+Object[] arr = RowUtil.getArray(data, 10);
+assert (arr[0].equals("Hello"));
+assert (arr[1].equals("World"));
+assert (arr[2].equals("From"));
+assert (arr[3].equals("Carbon"));
+
+assertEquals(RowUtil.getFloat(data, 11), (float) 1.23);
+i++;
+  }
+  reader.close();
+} catch (Throwable e) {
+  e.printStackTrace();
+} finally {
+  try {
+FileUtils.deleteDirectory(new File(path));
+  } catch 

---

[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread KanakaKumar
Github user KanakaKumar commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r227839131
  
--- Diff: 
store/sdk/src/test/java/org/apache/carbondata/sdk/file/CarbonReaderTest.java ---
@@ -1522,4 +1522,204 @@ public boolean accept(File dir, String name) {
   e.printStackTrace();
 }
   }
+
+   @Test
+  public void testReadNextRowWithRowUtil() {
+String path = "./carbondata";
+try {
+  FileUtils.deleteDirectory(new File(path));
+
+  Field[] fields = new Field[12];
+  fields[0] = new Field("stringField", DataTypes.STRING);
+  fields[1] = new Field("shortField", DataTypes.SHORT);
+  fields[2] = new Field("intField", DataTypes.INT);
+  fields[3] = new Field("longField", DataTypes.LONG);
+  fields[4] = new Field("doubleField", DataTypes.DOUBLE);
+  fields[5] = new Field("boolField", DataTypes.BOOLEAN);
+  fields[6] = new Field("dateField", DataTypes.DATE);
+  fields[7] = new Field("timeField", DataTypes.TIMESTAMP);
+  fields[8] = new Field("decimalField", DataTypes.createDecimalType(8, 
2));
+  fields[9] = new Field("varcharField", DataTypes.VARCHAR);
+  fields[10] = new Field("arrayField", 
DataTypes.createArrayType(DataTypes.STRING));
+  fields[11] = new Field("floatField", DataTypes.FLOAT);
+  Map<String, String> map = new HashMap<>();
+  map.put("complex_delimiter_level_1", "#");
+  CarbonWriter writer = CarbonWriter.builder()
+  .outputPath(path)
+  .withLoadOptions(map)
+  .withCsvInput(new Schema(fields)).build();
+
+  for (int i = 0; i < 10; i++) {
+String[] row2 = new String[]{
+"robot" + (i % 10),
+String.valueOf(i % 1),
+String.valueOf(i),
+String.valueOf(Long.MAX_VALUE - i),
+String.valueOf((double) i / 2),
+String.valueOf(true),
+"2019-03-02",
+"2019-02-12 03:03:34",
+"12.345",
+"varchar",
+"Hello#World#From#Carbon",
+"1.23"
+};
+writer.write(row2);
+  }
+  writer.close();
+
+  File[] dataFiles = new File(path).listFiles(new FilenameFilter() {
+@Override
+public boolean accept(File dir, String name) {
+  if (name == null) {
+return false;
+  }
+  return name.endsWith("carbonindex");
+}
+  });
+  if (dataFiles == null || dataFiles.length < 1) {
+throw new RuntimeException("Carbon index file not exists.");
+  }
+  Schema schema = CarbonSchemaReader
+  .readSchemaInIndexFile(dataFiles[0].getAbsolutePath())
+  .asOriginOrder();
+  // Transform the schema
+  int count = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  count++;
+}
+  }
+  String[] strings = new String[count];
+  int index = 0;
+  for (int i = 0; i < schema.getFields().length; i++) {
+if (!((schema.getFields())[i].getFieldName().contains("."))) {
+  strings[index] = (schema.getFields())[i].getFieldName();
+  index++;
+}
+  }
+  // Read data
+  CarbonReader reader = CarbonReader
+  .builder(path, "_temp")
+  .projection(strings)
+  .build();
+
+  int i = 0;
+  while (reader.hasNext()) {
+Object[] data = (Object[]) reader.readNextRow();
+
+assert (RowUtil.getString(data, 0).equals("robot" + i));
+assertEquals(RowUtil.getShort(data, 1), i);
+assertEquals(RowUtil.getInt(data, 2), i);
+assertEquals(RowUtil.getLong(data, 3), Long.MAX_VALUE - i);
+assertEquals(RowUtil.getDouble(data, 4), ((double) i) / 2);
+assert (RowUtil.getBoolean(data, 5));
+assertEquals(RowUtil.getInt(data, 6), 17957);
+assert (RowUtil.getDecimal(data, 8).equals("12.35"));
+assert (RowUtil.getVarchar(data, 9).equals("varchar"));
+
+Object[] arr = RowUtil.getArray(data, 10);
+assert (arr[0].equals("Hello"));
+assert (arr[1].equals("World"));
+assert (arr[2].equals("From"));
+assert (arr[3].equals("Carbon"));
+
+assertEquals(RowUtil.getFloat(data, 11), (float) 1.23);
+i++;
+  }
+  reader.close();
+} catch (Throwable e) {
+  e.printStackTrace();
--- End diff --

I think we should make the test fail for any exception; we should not ignore it.
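
A minimal sketch of that suggestion (assuming JUnit 4, which CarbonReaderTest already uses; `riskyOperation` is a placeholder):

    import static org.junit.Assert.fail;

    import org.junit.Test;

    public class FailOnExceptionExample {
      @Test
      public void testShouldNotSwallowExceptions() {
        try {
          riskyOperation(); // placeholder for the reader/writer logic under test
        } catch (Throwable e) {
          e.printStackTrace();
          // After logging, explicitly fail so CI reports the error instead of
          // the test silently passing.
          fail("Test failed with exception: " + e.getMessage());
        }
      }

      private void riskyOperation() {
      }
    }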


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread KanakaKumar
Github user KanakaKumar commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r227836644
  
--- Diff: docs/csdk-guide.md ---
@@ -106,20 +128,41 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
 // "your endPoint"
 args[2] = argv[3];
 
-reader.builder(env, "s3a://sdk/WriterOutput", "test");
-reader.withHadoopConf(3, args);
+reader.builder(env, "s3a://sdk/WriterOutput/carbondata/", "test");
+reader.withHadoopConf("fs.s3a.access.key", argv[1]);
+reader.withHadoopConf("fs.s3a.secret.key", argv[2]);
+reader.withHadoopConf("fs.s3a.endpoint", argv[3]);
 reader.build();
 printf("\nRead data from S3:\n");
+CarbonRow carbonRow(env);
 while (reader.hasNext()) {
-jobjectArray row = reader.readNextRow();
-jsize length = env->GetArrayLength(row);
-
+jobject row = reader.readNextCarbonRow();
--- End diff --

readNextCarbonRow is removed from this PR


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread KanakaKumar
Github user KanakaKumar commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r227836440
  
--- Diff: docs/csdk-guide.md ---
@@ -68,20 +68,42 @@ JNIEnv *initJVM() {
 bool readFromLocalWithoutProjection(JNIEnv *env) {
 
 CarbonReader carbonReaderClass;
-carbonReaderClass.builder(env, "../resources/carbondata", "test");
+carbonReaderClass.builder(env, "../resources/carbondata");
 carbonReaderClass.build();
 
+printf("\nRead data from local without projection:\n");
+
+CarbonRow carbonRow(env);
 while (carbonReaderClass.hasNext()) {
-jobjectArray row = carbonReaderClass.readNextRow();
-jsize length = env->GetArrayLength(row);
+jobject row = carbonReaderClass.readNextCarbonRow();
--- End diff --

Need to correct this: readNextCarbonRow is no longer present.


---


[GitHub] carbondata issue #2847: [WIP] Support Gzip as column compressor

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2847
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9260/



---


[GitHub] carbondata issue #2847: [WIP] Support Gzip as column compressor

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2847
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1207/



---


[GitHub] carbondata issue #2792: [CARBONDATA-2981] Support read primitive data type i...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2792
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/996/



---


[jira] [Resolved] (CARBONDATA-3034) Combing CarbonCommonConstants

2018-10-24 Thread Jacky Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacky Li resolved CARBONDATA-3034.
--
Resolution: Fixed

> Combing CarbonCommonConstants
> -
>
> Key: CARBONDATA-3034
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3034
> Project: CarbonData
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.5.0
>Reporter: wangsen
>Assignee: wangsen
>Priority: Major
> Fix For: 1.5.1
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Sort out the parameters in CarbonCommonConstants and organize them by category.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #2843: [CARBONDATA-3034] Carding parameters,Organize...

2018-10-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2843


---


[GitHub] carbondata issue #2843: [CARBONDATA-3034] Carding parameters,Organized by pa...

2018-10-24 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/2843
  
LGTM


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r227807282
  
--- Diff: store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReader.java ---
@@ -93,32 +92,10 @@ public T readNextRow() throws IOException, InterruptedException {
   }
 
   /**
-   * Read and return next string row object
-   * limitation: only single dimension Array is supported
-   * TODO: support different data type
+   * Read and return next carbon row object
*/
-  public Object[] readNextStringRow() throws IOException, InterruptedException {
-validateReader();
-T t = currentReader.getCurrentValue();
-Object[] objects = (Object[]) t;
-String[] strings = new String[objects.length];
-for (int i = 0; i < objects.length; i++) {
-  if (objects[i] instanceof Object[]) {
-Object[] arrayString = (Object[]) objects[i];
-StringBuffer stringBuffer = new StringBuffer();
-stringBuffer.append(String.valueOf(arrayString[0]));
-if (arrayString.length > 1) {
-  for (int j = 1; j < arrayString.length; j++) {
-stringBuffer.append(CarbonCommonConstants.ARRAY_SEPARATOR)
-.append(String.valueOf(arrayString[j]));
-  }
-}
-strings[i] = stringBuffer.toString();
-  } else {
-strings[i] = String.valueOf(objects[i]);
-  }
-}
-return strings;
+  public Object[] readNextCarbonRow() throws IOException, InterruptedException {
+return (Object[]) readNextRow();
--- End diff --

ok, done


---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r227798659
  
--- Diff: store/CSDK/CarbonRow.cpp ---
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <jni.h>
+#include 
+#include "CarbonRow.h"
+
+CarbonRow::CarbonRow(JNIEnv *env) {
+this->rowUtilClass = env->FindClass("org/apache/carbondata/sdk/file/RowUtil");
+this->jniEnv = env;
+}
+
+void CarbonRow::setCarbonRow(jobject data) {
+this->carbonRow = data;
+}
+
+short CarbonRow::getShort(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getShort",
+"([Ljava/lang/Object;I)S");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticShortMethodA(rowUtilClass, buildID, args);
+}
+
+int CarbonRow::getInt(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getInt",
+"([Ljava/lang/Object;I)I");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticIntMethodA(rowUtilClass, buildID, args);
+}
+
+long CarbonRow::getLong(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getLong",
+"([Ljava/lang/Object;I)J");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticLongMethodA(rowUtilClass, buildID, args);
+}
+
+double CarbonRow::getDouble(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getDouble",
+"([Ljava/lang/Object;I)D");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticDoubleMethodA(rowUtilClass, buildID, args);
+}
+
+
+float CarbonRow::getFloat(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getFloat",
+"([Ljava/lang/Object;I)F");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticFloatMethodA(rowUtilClass, buildID, args);
+}
+
+jboolean CarbonRow::getBoolean(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getBoolean",
+"([Ljava/lang/Object;I)Z");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticBooleanMethodA(rowUtilClass, buildID, args);
+}
+
+char *CarbonRow::getString(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getString",
+"([Ljava/lang/Object;I)Ljava/lang/String;");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+jobject data = jniEnv->CallStaticObjectMethodA(rowUtilClass, buildID, args);
+
+char *str = (char *) jniEnv->GetStringUTFChars((jstring) data, JNI_FALSE);
+return str;
+}
+
+char *CarbonRow::getDecimal(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getDecimal",
--- End diff --

ok, done


---


[GitHub] carbondata issue #2846: [WIP] Added direct fill

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2846
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/995/



---


[GitHub] carbondata issue #2847: [WIP]Support Gzip as column compressor

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2847
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/994/



---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Add checking before ...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9258/



---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Add checking before ...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1205/



---


[GitHub] carbondata issue #2846: [WIP] Added direct fill

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2846
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9261/



---


[GitHub] carbondata issue #2846: [WIP] Added direct fill

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2846
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/993/



---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Add checking before ...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/992/



---


[GitHub] carbondata issue #2846: [WIP] Added direct fill

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2846
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9259/



---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Add checking before ...

2018-10-24 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Another related question: What will happen if we create multiple datamaps 
concurrently for the same table?


---


[GitHub] carbondata issue #2851: [CARBONDATA-3040][BloomDataMap] Add checking before ...

2018-10-24 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/2851
  
Please add to the Analyse part of your PR description an explanation of why including the datamap name solves the problem.


---


[GitHub] carbondata issue #2848: [CARBONDATA-3036] Cache Columns And Refresh Table Is...

2018-10-24 Thread manishgupta88
Github user manishgupta88 commented on the issue:

https://github.com/apache/carbondata/pull/2848
  
LGTM


---


[GitHub] carbondata issue #2848: [CARBONDATA-3036] Cache Columns And Refresh Table Is...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2848
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9257/



---


[jira] [Updated] (CARBONDATA-3040) Add checking before merging bloom index

2018-10-24 Thread jiangmanhua (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiangmanhua updated CARBONDATA-3040:

Summary: Add checking before merging bloom index  (was: Add folder check 
before merging bloom index)

> Add checking before merging bloom index
> ---
>
> Key: CARBONDATA-3040
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3040
> Project: CarbonData
>  Issue Type: Bug
>Reporter: jiangmanhua
>Assignee: jiangmanhua
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #2851: [CARBONDATA-3040][BloomDataMap] Add checking ...

2018-10-24 Thread kevinjmh
GitHub user kevinjmh opened a pull request:

https://github.com/apache/carbondata/pull/2851

[CARBONDATA-3040][BloomDataMap] Add checking before merging bloom index

*Scene*
There is a bug that causes query failure when we create two bloom datamaps on the same table that already contains data.

*Analyse*
Since the table already contains data, each CREATE DATAMAP triggers a rebuild datamap task, which in turn triggers bloom index file merging. By debugging, we found that the first datamap's bloom index files were merged twice, and the second merge left the bloom index file empty.

*Solution*
Send the datamap name in the rebuild event so events can be filtered by target datamap, and add a file check before merging; a rough sketch follows.
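A rough sketch of the described guard (the method and field names here are hypothetical, not the actual patch):

    // Hypothetical sketch: skip merging unless the rebuild event targets
    // this datamap and the shard files to merge still exist.
    public void mergeBloomIndexIfNeeded(String eventDataMapName, String shardPath)
        throws IOException {
      if (!this.dataMapName.equals(eventDataMapName)) {
        return; // rebuild event was fired for a different datamap
      }
      if (!FileFactory.isFileExist(shardPath)) {
        return; // already merged once; merging again would empty the index
      }
      mergeBloomIndexFiles(shardPath); // hypothetical merge helper
    }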

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevinjmh/carbondata fix_multi_bloom

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2851.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2851


commit bcab5ac630e39a7dadee09d5b9157642d061b5e1
Author: Manhua 
Date:   2018-10-24T08:20:13Z

only rebuild target datamap and add file check




---


[GitHub] carbondata issue #2848: [CARBONDATA-3036] Cache Columns And Refresh Table Is...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2848
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1204/



---


[GitHub] carbondata issue #2814: [WIP][CARBONDATA-3001] configurable page size in MB

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2814
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9255/



---


[GitHub] carbondata issue #2818: [CARBONDATA-3011] Add carbon property to configure v...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2818
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1201/



---


[GitHub] carbondata pull request #2792: [CARBONDATA-2981] Support read primitive data...

2018-10-24 Thread KanakaKumar
Github user KanakaKumar commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2792#discussion_r227741794
  
--- Diff: store/CSDK/CarbonRow.cpp ---
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <jni.h>
+#include 
+#include "CarbonRow.h"
+
+CarbonRow::CarbonRow(JNIEnv *env) {
+this->rowUtilClass = env->FindClass("org/apache/carbondata/sdk/file/RowUtil");
+this->jniEnv = env;
+}
+
+void CarbonRow::setCarbonRow(jobject data) {
+this->carbonRow = data;
+}
+
+short CarbonRow::getShort(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getShort",
+"([Ljava/lang/Object;I)S");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticShortMethodA(rowUtilClass, buildID, args);
+}
+
+int CarbonRow::getInt(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getInt",
+"([Ljava/lang/Object;I)I");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticIntMethodA(rowUtilClass, buildID, args);
+}
+
+long CarbonRow::getLong(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getLong",
+"([Ljava/lang/Object;I)J");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticLongMethodA(rowUtilClass, buildID, args);
+}
+
+double CarbonRow::getDouble(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getDouble",
+"([Ljava/lang/Object;I)D");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticDoubleMethodA(rowUtilClass, buildID, args);
+}
+
+
+float CarbonRow::getFloat(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getFloat",
+"([Ljava/lang/Object;I)F");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticFloatMethodA(rowUtilClass, buildID, args);
+}
+
+jboolean CarbonRow::getBoolean(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getBoolean",
+"([Ljava/lang/Object;I)Z");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+return jniEnv->CallStaticBooleanMethodA(rowUtilClass, buildID, args);
+}
+
+char *CarbonRow::getString(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getString",
+"([Ljava/lang/Object;I)Ljava/lang/String;");
+jvalue args[2];
+args[0].l = carbonRow;
+args[1].i = ordinal;
+jobject data = jniEnv->CallStaticObjectMethodA(rowUtilClass, buildID, args);
+
+char *str = (char *) jniEnv->GetStringUTFChars((jstring) data, JNI_FALSE);
+return str;
+}
+
+char *CarbonRow::getDecimal(int ordinal) {
+jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getDecimal",
--- End diff --

jmethodID buildID = jniEnv->GetStaticMethodID(rowUtilClass, "getDecimal",
 "([Ljava/lang/Object;I)Ljava/lang/String;");
jvalue args[2];

The static method ID lookup and the argument-array initialization are done for every row read. Creating them once and reusing them may improve performance. Can you please try?
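A minimal sketch of that suggestion (the cached jmethodID member is hypothetical and would also need a declaration in CarbonRow.h):

    // Resolve the method ID once in the constructor and reuse it per row.
    CarbonRow::CarbonRow(JNIEnv *env) {
        this->jniEnv = env;
        this->rowUtilClass = env->FindClass("org/apache/carbondata/sdk/file/RowUtil");
        this->getDecimalId = env->GetStaticMethodID(rowUtilClass, "getDecimal",
            "([Ljava/lang/Object;I)Ljava/lang/String;");
    }

    char *CarbonRow::getDecimal(int ordinal) {
        jvalue args[2];
        args[0].l = carbonRow;
        args[1].i = ordinal;
        jobject data = jniEnv->CallStaticObjectMethodA(rowUtilClass, getDecimalId, args);
        return (char *) jniEnv->GetStringUTFChars((jstring) data, JNI_FALSE);
    }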



---


[GitHub] carbondata issue #2848: [CARBONDATA-3036] Cache Columns And Refresh Table Is...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2848
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/991/



---


[GitHub] carbondata issue #2818: [CARBONDATA-3011] Add carbon property to configure v...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2818
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9254/



---


[GitHub] carbondata issue #2814: [WIP][CARBONDATA-3001] configurable page size in MB

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2814
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1202/



---


[GitHub] carbondata issue #2843: [CARBONDATA-3034] Carding parameters,Organized by pa...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2843
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1200/



---


[GitHub] carbondata issue #2843: [CARBONDATA-3034] Carding parameters,Organized by pa...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2843
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9253/



---


[GitHub] carbondata issue #2814: [WIP][CARBONDATA-3001] configurable page size in MB

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2814
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/989/



---


[GitHub] carbondata pull request #2848: [CARBONDATA-3036] Cache Columns And Refresh T...

2018-10-24 Thread manishgupta88
Github user manishgupta88 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2848#discussion_r227731562
  
--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/TestQueryWithColumnMetCacheAndCacheLevelProperty.scala ---
@@ -282,6 +285,48 @@ class TestQueryWithColumnMetCacheAndCacheLevelProperty extends QueryTest with Be
 sql("drop table if exists alter_add_column_min_max")
   }
 
+  test("Test For Cache set but Min/Max exceeds") {
+sql("DROP TABLE IF EXISTS carbonCache")
+sql(
+  s"""
+ | CREATE TABLE carbonCache (
+ | name STRING,
+ | age STRING,
+ | desc STRING
+ | )
+ | STORED BY 'carbondata'
+ | TBLPROPERTIES('COLUMN_META_CACHE'='name,desc')
+   """.stripMargin)
+sql(
+  "INSERT INTO carbonCache values('Manish Nalla','24'," +
+  "'gvsahgvsahjvcsahjgvavacavkjvaskjvsahgsvagkjvkjgvsackjg')")
+checkAnswer(sql(
+  "SELECT count(*) FROM carbonCache where " +
+  "desc='gvsahgvsahjvcsahjgvavacavkjvaskjvsahgsvagkjvkjgvsackjg'"),
+  Row(1))
+  }
+
+  test("Cache Blocklet Level testing") {
--- End diff --

1. Give the test case a proper, descriptive name.
2. Drop the table before and after each test case.
3. Set the max-byte-count property before each test case starts, and in afterAll reset it to the default, as sketched below.
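For example, a minimal sketch of that setup/teardown pattern (the property key and values are illustrative, and it assumes the suite mixes in ScalaTest's BeforeAndAfterEach and BeforeAndAfterAll):

    // Illustrative key; substitute the actual min/max byte-count constant.
    private val minMaxByteCountKey = "carbon.minmax.allowed.byte.count"

    override def beforeEach(): Unit = {
      sql("DROP TABLE IF EXISTS carbonCache")
      CarbonProperties.getInstance().addProperty(minMaxByteCountKey, "20")
    }

    override def afterAll(): Unit = {
      sql("DROP TABLE IF EXISTS carbonCache")
      // Reset to the default so later suites are not affected.
      CarbonProperties.getInstance().addProperty(minMaxByteCountKey, "200")
    }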


---


[GitHub] carbondata issue #2848: [CARBONDATA-3036] Cache Columns And Refresh Table Is...

2018-10-24 Thread manishnalla1994
Github user manishnalla1994 commented on the issue:

https://github.com/apache/carbondata/pull/2848
  
retest this please



---


[GitHub] carbondata issue #2818: [CARBONDATA-3011] Add carbon property to configure v...

2018-10-24 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2818
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/988/



---

