[GitHub] carbondata pull request #2863: [WIP] Optimise decompressing while filling th...

2018-11-13 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2863#discussion_r233341050
  
--- Diff: core/src/main/java/org/apache/carbondata/core/scan/result/vector/impl/CarbonColumnVectorImpl.java ---
@@ -53,6 +53,12 @@
 
   private DataType blockDataType;
 
+  private int[] lengths;
--- End diff --

Please add comments for these three newly added variables.


---


[GitHub] carbondata pull request #2863: [WIP] Optimise decompressing while filling th...

2018-11-13 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2863#discussion_r233340004
  
--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/page/encoding/compress/DirectCompressCodec.java ---
@@ -224,130 +238,134 @@ public void decodeAndFillVector(ColumnPage columnPage, ColumnVectorInfo vectorIn
       }
     }
 
-    private void fillVector(ColumnPage columnPage, CarbonColumnVector vector,
-        DataType vectorDataType, DataType pageDataType, int pageSize, ColumnVectorInfo vectorInfo) {
+    private void fillVector(byte[] pageData, CarbonColumnVector vector, DataType vectorDataType,
+        DataType pageDataType, int pageSize, ColumnVectorInfo vectorInfo, BitSet nullBits) {
+      int rowId = 0;
       if (pageDataType == DataTypes.BOOLEAN || pageDataType == DataTypes.BYTE) {
-        byte[] byteData = columnPage.getBytePage();
         if (vectorDataType == DataTypes.SHORT) {
           for (int i = 0; i < pageSize; i++) {
-            vector.putShort(i, (short) byteData[i]);
+            vector.putShort(i, (short) pageData[i]);
           }
         } else if (vectorDataType == DataTypes.INT) {
           for (int i = 0; i < pageSize; i++) {
-            vector.putInt(i, (int) byteData[i]);
+            vector.putInt(i, (int) pageData[i]);
           }
         } else if (vectorDataType == DataTypes.LONG) {
           for (int i = 0; i < pageSize; i++) {
-            vector.putLong(i, byteData[i]);
+            vector.putLong(i, pageData[i]);
           }
         } else if (vectorDataType == DataTypes.TIMESTAMP) {
           for (int i = 0; i < pageSize; i++) {
-            vector.putLong(i, (long) byteData[i] * 1000);
+            vector.putLong(i, (long) pageData[i] * 1000);
           }
         } else if (vectorDataType == DataTypes.BOOLEAN || vectorDataType == DataTypes.BYTE) {
-          vector.putBytes(0, pageSize, byteData, 0);
+          vector.putBytes(0, pageSize, pageData, 0);
         } else if (DataTypes.isDecimal(vectorDataType)) {
           DecimalConverterFactory.DecimalConverter decimalConverter = vectorInfo.decimalConverter;
-          decimalConverter.fillVector(byteData, pageSize, vectorInfo, columnPage.getNullBits());
+          decimalConverter.fillVector(pageData, pageSize, vectorInfo, nullBits, pageDataType);
         } else {
           for (int i = 0; i < pageSize; i++) {
-            vector.putDouble(i, byteData[i]);
+            vector.putDouble(i, pageData[i]);
           }
         }
       } else if (pageDataType == DataTypes.SHORT) {
-        short[] shortData = columnPage.getShortPage();
+        int size = pageSize * DataTypes.SHORT.getSizeInBytes();
         if (vectorDataType == DataTypes.SHORT) {
-          vector.putShorts(0, pageSize, shortData, 0);
+          for (int i = 0; i < size; i += DataTypes.SHORT.getSizeInBytes()) {
+            vector.putShort(rowId++, (ByteUtil.toShortLittleEndian(pageData, i)));
+          }
         } else if (vectorDataType == DataTypes.INT) {
-          for (int i = 0; i < pageSize; i++) {
-            vector.putInt(i, (int) shortData[i]);
+          for (int i = 0; i < size; i += DataTypes.SHORT.getSizeInBytes()) {
+            vector.putInt(rowId++, ByteUtil.toShortLittleEndian(pageData, i));
           }
         } else if (vectorDataType == DataTypes.LONG) {
-          for (int i = 0; i < pageSize; i++) {
-            vector.putLong(i, shortData[i]);
+          for (int i = 0; i < size; i += DataTypes.SHORT.getSizeInBytes()) {
+            vector.putLong(rowId++, ByteUtil.toShortLittleEndian(pageData, i));
           }
         } else if (vectorDataType == DataTypes.TIMESTAMP) {
-          for (int i = 0; i < pageSize; i++) {
-            vector.putLong(i, (long) shortData[i] * 1000);
+          for (int i = 0; i < size; i += DataTypes.SHORT.getSizeInBytes()) {
+            vector.putLong(rowId++, (long) ByteUtil.toShortLittleEndian(pageData, i) * 1000);
           }
         } else if (DataTypes.isDecimal(vectorDataType)) {
           DecimalConverterFactory.DecimalConverter decimalConverter = vectorInfo.decimalConverter;
-          decimalConverter.fillVector(shortData, pageSize, vectorInfo, columnPage.getNullBits());
+          decimalConverter.fillVector(pageData, pageSize, vectorInfo, nullBits, pageDataType);
         } else {
-          for (int i = 0; i < pageSize; i++) {
-            vector.putDouble(i, shortData[i]);
+          for (int i = 0; i < size; i += DataTypes.SHORT.getSizeInBytes()) {
+            vector.putDouble(rowId++, ByteUtil.toShortLittleEndian(pageData, i));
           }
         }
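The rewritten SHORT branch reads each value straight out of the decompressed byte[] instead of materialising an intermediate short[] page. A minimal sketch of the little-endian decoding it relies on; `toShortLittleEndian` below is a hypothetical stand-in for `ByteUtil.toShortLittleEndian`, not CarbonData's actual implementation:

```java
public class LittleEndianDemo {
  // Hypothetical stand-in for ByteUtil.toShortLittleEndian: combine two
  // consecutive page bytes into one short, low byte first.
  static short toShortLittleEndian(byte[] data, int offset) {
    return (short) ((data[offset] & 0xFF) | (data[offset + 1] << 8));
  }

  public static void main(String[] args) {
    // A page holding the two shorts 1 and -2 in little-endian order.
    byte[] page = {0x01, 0x00, (byte) 0xFE, (byte) 0xFF};
    int shortSize = 2; // what DataTypes.SHORT.getSizeInBytes() returns
    for (int i = 0; i < page.length; i += shortSize) {
      System.out.println(toShortLittleEndian(page, i)); // 1, then -2
    }
  }
}
```

Walking the byte page in two-byte strides with a running rowId, as the patch does, avoids allocating and filling a separate short[] for every page.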
 

[GitHub] carbondata pull request #2863: [WIP] Optimise decompressing while filling th...

2018-11-13 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2863#discussion_r29949
  

[GitHub] carbondata pull request #2863: [WIP] Optimise decompressing while filling th...

2018-11-13 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2863#discussion_r28236
  
--- Diff: core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java ---
@@ -2006,6 +2006,12 @@ private CarbonCommonConstants() {
    */
   public static final String CARBON_WRITTEN_BY_APPNAME = "carbon.writtenby.app.name";
 
+  /**
+   * When more global dictionary columns are there then there is issue in generating codegen to them
--- End diff --

Is it only valid for tables with global dictionary, or for normal tables also?


---


[GitHub] carbondata pull request #2863: [WIP] Optimise decompressing while filling th...

2018-11-13 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2863#discussion_r27530
  
--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/page/ColumnPageValueConverter.java ---
@@ -37,5 +40,6 @@
   double decodeDouble(long value);
   double decodeDouble(float value);
   double decodeDouble(double value);
-  void decodeAndFillVector(ColumnPage columnPage, ColumnVectorInfo vectorInfo);
+  void decodeAndFillVector(byte[] pageData, ColumnVectorInfo vectorInfo, BitSet nullBits,
+      DataType pageDataType, int pageSize);
--- End diff --

Can you provide a comment for this function?
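One possible shape for such a comment; the wording is this editor's assumption about the method's intent, not text from the PR:

```java
/**
 * Decodes the raw (decompressed) page bytes and fills the target column vector
 * directly, avoiding an intermediate typed array.
 *
 * @param pageData     decompressed page data as raw bytes
 * @param vectorInfo   target vector and conversion metadata
 * @param nullBits     bitset marking the null rows of the page
 * @param pageDataType stored data type of the page values
 * @param pageSize     number of rows in the page
 */
void decodeAndFillVector(byte[] pageData, ColumnVectorInfo vectorInfo, BitSet nullBits,
    DataType pageDataType, int pageSize);
```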


---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1608/



---


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1401/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9657/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1609/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9656/



---


[GitHub] carbondata issue #2918: [CARBONDATA-3098] Fix for negative exponents value g...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2918
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1400/



---


[jira] [Closed] (CARBONDATA-3100) Can not create mv:No package org.apache.carbondata.mv.datamap

2018-11-13 Thread U Shaw (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

U Shaw closed CARBONDATA-3100.
--
Resolution: Fixed

> Can not create mv:No package org.apache.carbondata.mv.datamap
> -
>
> Key: CARBONDATA-3100
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3100
> Project: CarbonData
>  Issue Type: Bug
>  Components: other
>Affects Versions: 1.5.0
>Reporter: U Shaw
>Priority: Major
> Fix For: 1.5.0
>
>
> scala> carbon.sql("create datamap web_site_mv USING 'mv' WITH DEFERRED 
> REBUILD AS select * from web_site").show
> java.lang.ClassNotFoundException: 
> org.apache.carbondata.mv.datamap.MVDataMapProvider
>  at 
> scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>  at java.lang.Class.forName0(Native Method)
>  at java.lang.Class.forName(Class.java:348)
>  at org.apache.spark.util.Utils$.classForName(Utils.scala:239)
>  at 
> org.apache.spark.util.CarbonReflectionUtils$.createObject(CarbonReflectionUtils.scala:322)
>  at 
> org.apache.carbondata.spark.util.CarbonScalaUtil$.createDataMapProvider(CarbonScalaUtil.scala:480)
>  at 
> org.apache.carbondata.spark.util.CarbonScalaUtil.createDataMapProvider(CarbonScalaUtil.scala)
>  at 
> org.apache.carbondata.datamap.DataMapManager.getDataMapProvider(DataMapManager.java:57)
>  at 
> org.apache.spark.sql.execution.command.datamap.CarbonCreateDataMapCommand.processMetadata(CarbonCreateDataMapCommand.scala:99)
>  at 
> org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:90)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
>  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
>  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
>  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
>  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
>  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
>  at 
> org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:106)
>  at 
> org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:95)
>  at org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:154)
>  at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:93)
>  ... 53 elided



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-3100) Can not create mv:No package org.apache.carbondata.mv.datamap

2018-11-13 Thread U Shaw (JIRA)
U Shaw created CARBONDATA-3100:
--

 Summary: Can not create mv:No package 
org.apache.carbondata.mv.datamap
 Key: CARBONDATA-3100
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3100
 Project: CarbonData
  Issue Type: Bug
  Components: other
Affects Versions: 1.5.0
Reporter: U Shaw
 Fix For: 1.5.0


scala> carbon.sql("create datamap web_site_mv USING 'mv' WITH DEFERRED REBUILD 
AS select * from web_site").show
java.lang.ClassNotFoundException: 
org.apache.carbondata.mv.datamap.MVDataMapProvider
 at 
scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:348)
 at org.apache.spark.util.Utils$.classForName(Utils.scala:239)
 at 
org.apache.spark.util.CarbonReflectionUtils$.createObject(CarbonReflectionUtils.scala:322)
 at 
org.apache.carbondata.spark.util.CarbonScalaUtil$.createDataMapProvider(CarbonScalaUtil.scala:480)
 at 
org.apache.carbondata.spark.util.CarbonScalaUtil.createDataMapProvider(CarbonScalaUtil.scala)
 at 
org.apache.carbondata.datamap.DataMapManager.getDataMapProvider(DataMapManager.java:57)
 at 
org.apache.spark.sql.execution.command.datamap.CarbonCreateDataMapCommand.processMetadata(CarbonCreateDataMapCommand.scala:99)
 at 
org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:90)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
 at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
 at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
 at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
 at 
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:106)
 at 
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:95)
 at org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:154)
 at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:93)
 ... 53 elided



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-3099) Can not create mv:No package org.apache.carbondata.mv.datamap

2018-11-13 Thread U Shaw (JIRA)
U Shaw created CARBONDATA-3099:
--

 Summary: Can not create mv:No package 
org.apache.carbondata.mv.datamap
 Key: CARBONDATA-3099
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3099
 Project: CarbonData
  Issue Type: Bug
  Components: other
Affects Versions: 1.5.0
Reporter: U Shaw
 Fix For: 1.5.0


scala> carbon.sql("create datamap web_site_mv USING 'mv' WITH DEFERRED REBUILD 
AS select * from web_site").show
java.lang.ClassNotFoundException: 
org.apache.carbondata.mv.datamap.MVDataMapProvider
 at 
scala.reflect.internal.util.AbstractFileClassLoader.findClass(AbstractFileClassLoader.scala:62)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:348)
 at org.apache.spark.util.Utils$.classForName(Utils.scala:239)
 at 
org.apache.spark.util.CarbonReflectionUtils$.createObject(CarbonReflectionUtils.scala:322)
 at 
org.apache.carbondata.spark.util.CarbonScalaUtil$.createDataMapProvider(CarbonScalaUtil.scala:480)
 at 
org.apache.carbondata.spark.util.CarbonScalaUtil.createDataMapProvider(CarbonScalaUtil.scala)
 at 
org.apache.carbondata.datamap.DataMapManager.getDataMapProvider(DataMapManager.java:57)
 at 
org.apache.spark.sql.execution.command.datamap.CarbonCreateDataMapCommand.processMetadata(CarbonCreateDataMapCommand.scala:99)
 at 
org.apache.spark.sql.execution.command.AtomicRunnableCommand.run(package.scala:90)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
 at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
 at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
 at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
 at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
 at 
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:106)
 at 
org.apache.spark.sql.CarbonSession$$anonfun$sql$1.apply(CarbonSession.scala:95)
 at org.apache.spark.sql.CarbonSession.withProfiler(CarbonSession.scala:154)
 at org.apache.spark.sql.CarbonSession.sql(CarbonSession.scala:93)
 ... 53 elided



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #2918: [CARBONDATA-3098] Fix for negative exponents value g...

2018-11-13 Thread manishnalla1994
Github user manishnalla1994 commented on the issue:

https://github.com/apache/carbondata/pull/2918
  
retest this please


---


[GitHub] carbondata issue #2918: [CARBONDATA-3098] Fix for negative exponents value g...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2918
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1607/



---


[GitHub] carbondata issue #2918: [CARBONDATA-3098] Fix for negative exponents value g...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2918
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9655/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1399/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
retest this please


---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1398/



---


[GitHub] carbondata issue #2918: [CARBONDATA-3098] Fix for negative exponents value g...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2918
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1397/



---


[GitHub] carbondata pull request #2872: [WIP] Added reusable buffer code

2018-11-13 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2872#discussion_r233321330
  
--- Diff: integration/spark-datasource/src/main/spark2.1andspark2.2/org/apache/spark/sql/CarbonVectorProxy.java ---
@@ -454,7 +458,11 @@ public ColumnVector reserveDictionaryIds(int capacity) {
   * and offset.
   */
  public void putAllByteArray(byte[] data, int offset, int length) {
-   vector.arrayData().appendBytes(length, data, offset);
+   try {
+     byteArray.set(vector.arrayData(), data);
--- End diff --

Please remove this.



---


[GitHub] carbondata pull request #2898: [CARBONDATA-3077] Fixed query failure in file...

2018-11-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2898


---


[jira] [Resolved] (CARBONDATA-3077) Fixed query failure in fileformat due stale cache issue

2018-11-13 Thread Ravindra Pesala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-3077.
-
   Resolution: Fixed
Fix Version/s: 1.5.1

> Fixed query failure in fileformat due stale cache issue
> ---
>
> Key: CARBONDATA-3077
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3077
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Manish Gupta
>Assignee: Manish Gupta
>Priority: Major
> Fix For: 1.5.1
>
> Attachments: 20181102101536.jpg
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Problem*
> While using the FileFormat API, if a table is created, dropped, and then recreated
> with the same name, the query fails because of a schema mismatch issue



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
LGTM


---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1606/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9654/



---


[GitHub] carbondata pull request #2918: [CARBONDATA-3098] Fix for negative exponents ...

2018-11-13 Thread manishnalla1994
GitHub user manishnalla1994 opened a pull request:

https://github.com/apache/carbondata/pull/2918

[CARBONDATA-3098] Fix for negative exponents value giving wrong results in 
Float datatype

Problem: When the exponent value is a negative number, the data is incorrect
due to loss of precision in floating-point values and a wrong calculation of
the count of decimal points.

Solution: Handled floating-point precision by converting the value to double and
counting the decimal values as done for the double datatype (using BigDecimal).
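The decimal counting can be sketched as below. This is an illustration, not the PR's code: the PR converts via double, while this sketch uses the float's own string form to sidestep widening artifacts, and `decimalCount` is a hypothetical helper name:

```java
import java.math.BigDecimal;

public class DecimalCountDemo {
  // Count a float's decimal digits via its own string form, so the count is
  // not distorted by the extra digits float-to-double widening introduces
  // (e.g. (double) 1.2f == 1.2000000476837158).
  static int decimalCount(float value) {
    return new BigDecimal(Float.toString(value)).scale();
  }

  public static void main(String[] args) {
    System.out.println(decimalCount(1.2f));    // 1
    System.out.println(decimalCount(1.2E-4f)); // 5: negative exponent handled
  }
}
```

BigDecimal's string constructor keeps the exponent, so "1.2E-4" yields scale 5, exactly the decimal-place count that naive floating-point handling gets wrong.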

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [x] Testing done
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manishnalla1994/carbondata FloatInfiniteFix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2918.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2918


commit 8d4ede90f5c47759485b34f1f20cec3bbdc32c15
Author: Manish Nalla 
Date:   2018-11-14T05:27:49Z

Float negative exponents




---


[jira] [Created] (CARBONDATA-3098) Negative value exponents giving wrong results

2018-11-13 Thread MANISH NALLA (JIRA)
MANISH NALLA created CARBONDATA-3098:


 Summary: Negative value exponents giving wrong results
 Key: CARBONDATA-3098
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3098
 Project: CarbonData
  Issue Type: Bug
Reporter: MANISH NALLA






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (CARBONDATA-3084) data load with float datatype fails with internal error

2018-11-13 Thread Kunal Kapoor (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Kapoor resolved CARBONDATA-3084.
--
   Resolution: Fixed
Fix Version/s: 1.5.1

> data load with float datatype fails with internal error
> 
>
> Key: CARBONDATA-3084
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3084
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Priority: Major
> Fix For: 1.5.1
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> when data load is triggered for the float datatype and the data exceeds the
> float max range, the data load fails with the following error:
> java.lang.RuntimeException: internal error: FLOAT
>  at 
> org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.fitMinMax(DefaultEncodingFactory.java:179)
>  at 
> org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.selectCodecByAlgorithmForIntegral(DefaultEncodingFactory.java:259)
>  at 
> org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.selectCodecByAlgorithmForFloating(DefaultEncodingFactory.java:337)
>  at 
> org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.createEncoderForMeasureOrNoDictionaryPrimitive(DefaultEncodingFactory.java:130)
>  at 
> org.apache.carbondata.core.datastore.page.encoding.DefaultEncodingFactory.createEncoder(DefaultEncodingFactory.java:66)
>  at 
> org.apache.carbondata.processing.store.TablePage.encodeAndCompressMeasures(TablePage.java:385)
>  at 
> org.apache.carbondata.processing.store.TablePage.encode(TablePage.java:372)
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.processDataRows(CarbonFactDataHandlerColumnar.java:285)
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar.access$500(CarbonFactDataHandlerColumnar.java:59)
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:583)
>  at 
> org.apache.carbondata.processing.store.CarbonFactDataHandlerColumnar$Producer.call(CarbonFactDataHandlerColumnar.java:560)
>  
>  
>  
>  
> Steps to reproduce are
> create table datatype_floa_byte(f float, b byte) using carbon;
> insert into datatype_floa_byte select 123.123,127;
> insert into datatype_floa_byte select "1.7976931348623157E308",-127;
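The second literal in the steps above is Double.MAX_VALUE: it fits in a double but exceeds a float's range (about 3.4E38), so parsing it as a float yields Infinity, which the codec selection then cannot classify. A standalone demonstration, independent of CarbonData:

```java
public class FloatOverflowDemo {
  public static void main(String[] args) {
    // Double.MAX_VALUE as text: a valid double, but out of float range.
    String text = "1.7976931348623157E308";
    float f = Float.parseFloat(text);    // rounds to Infinity per the Java spec
    double d = Double.parseDouble(text);
    System.out.println(Float.isInfinite(f));  // true
    System.out.println(Double.isInfinite(d)); // false
  }
}
```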



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1396/



---


[GitHub] carbondata issue #2899: [CARBONDATA-3073] Support configure TableProperties,...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2899
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9653/



---


[GitHub] carbondata pull request #2903: [CARBONDATA-3084]dataload failure fix when fl...

2018-11-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2903


---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
retest this please


---


[GitHub] carbondata issue #2899: [CARBONDATA-3073] Support configure TableProperties,...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2899
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1605/



---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9652/



---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1604/



---


[GitHub] carbondata issue #2903: [CARBONDATA-3084]dataload failure fix when float val...

2018-11-13 Thread kunal642
Github user kunal642 commented on the issue:

https://github.com/apache/carbondata/pull/2903
  
LGTM


---


[GitHub] carbondata issue #2899: [CARBONDATA-3073] Support configure TableProperties,...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2899
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1394/



---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1395/



---


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1601/



---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread brijoobopanna
Github user brijoobopanna commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
retest this please



---


[jira] [Updated] (CARBONDATA-2951) CSDK: Provide C++ interface for SDK

2018-11-13 Thread xubo245 (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xubo245 updated CARBONDATA-2951:

Description: 
Some users write their projects in C++ and cannot call the CarbonData 
interfaces or integrate CarbonData into their C++ projects. We therefore plan 
to provide a C++ interface so that C++ users can integrate carbon, including 
reading and writing CarbonData. This will be much more convenient for them.

We plan to design and develop the following:

1. Provide a CarbonReader for the SDK that can read carbon data from C++
##features/interfaces
1.1.create CarbonReader
1.2.hasNext()
1.3.readNextRow()
1.4.close()
1.5.support OBS(AK/SK/Endpoint)
1.6 support batch read(withBatch,readNextBatchRow) 
1.7 support vector read (default) and carbonrecordreader 
(withRowRecordReader)
1.8 projection
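The reader interfaces above (1.1 to 1.4) suggest the usual iterator-style
consumption loop. The sketch below illustrates that loop shape only; this
CarbonReader is a self-contained stand-in with hypothetical method bodies, not
the real CSDK class, though the method names follow the feature list above.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Stand-in for the proposed CSDK reader; the real class would wrap the
// Java SDK via JNI instead of holding rows in memory.
class CarbonReader {
 public:
  explicit CarbonReader(std::vector<std::vector<std::string>> rows)
      : rows_(std::move(rows)) {}

  // 1.2: any rows left to read?
  bool hasNext() const { return cursor_ < rows_.size(); }

  // 1.3: return the next row; caller must check hasNext() first.
  const std::vector<std::string>& readNextRow() { return rows_[cursor_++]; }

  // 1.4: release resources held by the reader.
  void close() {
    rows_.clear();
    cursor_ = 0;
  }

 private:
  std::vector<std::vector<std::string>> rows_;
  std::size_t cursor_ = 0;
};

// Typical consumption loop matching interfaces 1.2-1.4.
std::size_t countRows(CarbonReader& reader) {
  std::size_t n = 0;
  while (reader.hasNext()) {
    (void)reader.readNextRow();  // a real caller would decode the row here
    ++n;
  }
  reader.close();
  return n;
}
```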

##support data types:
 String, 
Long,Varchar(string),Short,Int,Date(int),timestamp(long),boolean,Decimal(string),Float
 Array is supported in carbonrecordreader, not in vectorreader
 byte => supported in java RowUtil, not in the C++ carbon reader
 
## Schema and data
 Create table tbl_email_form_to_for_XX( 
Event_Time Timestamp,
Ingestion_Time Timestamp,
From_Email String,
To_Email String,
From_To_type String,
Event_ID String
) using carbon options(path ‘obs://X/tbl_email_form_to_for_XX’)
ETL 6 columns from an 18-column table

example data:
from_email_36550_phillip.al...@enron.com
to_email_36550_stagecoachm...@hotmail.com   from_to 
<29528303.107585557.JavaMail.evans@thyme>   153801549700
975514920

2. The read performance should reach X million records/s/node

3. Provide a CarbonWriter for the SDK that can write carbon data from C++
##features/interfaces
3.1.create CarbonWriter, including creating the schema (withCsvInput), 
setting outputPath, and build()
3.2.write()
3.3.close()
3.4.support OBS(AK/SK/Endpoint)(withHadoopConf)
3.5.writtenBy
3.6. support withTableProperty, withLoadOption,taskNo, 
uniqueIdentifier, withThreadSafe,  withBlockSize, withBlockletSize, 
localDictionaryThreshold, enableLocalDictionary in C++ SDK (PR2899 TO BE review)
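Interfaces 3.1 to 3.3 describe a builder-style write path. The sketch below
shows what driving such a builder from C++ could look like; the class and
method names mirror the list above but are illustrative stand-ins (including
the hypothetical output path and schema string), not the real CSDK signatures.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Stand-in writer; the real CSDK writer would forward rows over JNI.
class CarbonWriter {
 public:
  // 3.2: write one row; here we only count rows for illustration.
  void write(const std::vector<std::string>& row) {
    if (!row.empty()) ++rowsWritten_;
  }
  // 3.3: flush and release; returns the row count so the sketch is testable.
  std::size_t close() { return rowsWritten_; }

 private:
  std::size_t rowsWritten_ = 0;
};

// Stand-in builder mirroring 3.1 and 3.5.
class CarbonWriterBuilder {
 public:
  CarbonWriterBuilder& outputPath(std::string p) { path_ = std::move(p); return *this; }
  CarbonWriterBuilder& withCsvInput(std::string schema) { schema_ = std::move(schema); return *this; }
  CarbonWriterBuilder& writtenBy(std::string app) { app_ = std::move(app); return *this; }
  CarbonWriter build() const { return CarbonWriter(); }

 private:
  std::string path_, schema_, app_;
};

// Create a writer, write two rows, close it.
std::size_t writeTwoRows() {
  CarbonWriter w = CarbonWriterBuilder()
                       .outputPath("/tmp/carbon_out")          // hypothetical path
                       .withCsvInput("{\"name\":\"string\"}")  // hypothetical schema
                       .writtenBy("csdk-example")
                       .build();
  w.write({"alice"});
  w.write({"bob"});
  return w.close();
}
```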

##Data types:
   Carbon needs to support the base data types, including string, float, 
double, int, long, date, timestamp, bool, and array.
  Others can be converted:
 char array => carbon string
 Enum => Carbon string
  set and list => carbon array
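The conversions listed above are straightforward to express in C++. A minimal
sketch, assuming the carbon string and carbon array map to std::string and a
vector of strings on the C++ side (the Color enum is purely hypothetical):

```cpp
#include <set>
#include <string>
#include <vector>

// char array => carbon string
std::string fromCharArray(const char* s) { return std::string(s); }

// enum => carbon string (hypothetical enum, converted by name)
enum class Color { kRed, kGreen };
std::string fromEnum(Color c) { return c == Color::kRed ? "red" : "green"; }

// set => carbon array; std::set iterates in sorted order, so the
// resulting array is ordered and duplicate-free.
std::vector<std::string> fromSet(const std::set<std::string>& s) {
  return std::vector<std::string>(s.begin(), s.end());
}
```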

##performance
Write performance is not a requirement for now

4. read schema function
readSchema
getVersionDetails  =>TODO

5. support carbonproperties
5.1 addProperty
5.2 getProperty
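Items 5.1 and 5.2 amount to a string key-value store. A minimal stand-in,
assuming the real CSDK would forward to the Java CarbonProperties singleton
rather than keep its own map (the property key below is only an example):

```cpp
#include <map>
#include <string>

// Illustrative stand-in for carbon properties (5.1 addProperty,
// 5.2 getProperty); not the real CSDK class.
class CarbonProperties {
 public:
  void addProperty(const std::string& key, const std::string& value) {
    props_[key] = value;
  }

  // Returns an empty string when the key is absent.
  std::string getProperty(const std::string& key) const {
    auto it = props_.find(key);
    return it == props_.end() ? std::string() : it->second;
  }

 private:
  std::map<std::string, std::string> props_;
};
```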

6.TODO:
6.1.getVersionDetails. =>JIRA
6.2.updated SDK/CSDK reader doc
6.3.support byte(write read)
6.4.support long string columns
6.5.support sortBy
6.6.support withCsvInput(Schema schema);  create schema(JAVA)
6.7. optimize the write doc
/**
 * Create a {@link CarbonWriterBuilder} to build a {@link CarbonWriter}
 */
public static CarbonWriterBuilder builder() {
  return new CarbonWriterBuilder();
}


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9649/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9650/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1602/



---


[jira] [Created] (CARBONDATA-3097) Support folder path in getVersionDetails and support getVersionDetails in CSDK

2018-11-13 Thread xubo245 (JIRA)
xubo245 created CARBONDATA-3097:
---

 Summary: Support  folder path in getVersionDetails and support 
getVersionDetails in CSDK
 Key: CARBONDATA-3097
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3097
 Project: CarbonData
  Issue Type: Sub-task
Affects Versions: 1.5.1
Reporter: xubo245
Assignee: xubo245


Support  folder path in getVersionDetails and support getVersionDetails in CSDK





[GitHub] carbondata issue #2917: [WIP]Show load/insert/update/delete row number

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2917
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1600/



---


[GitHub] carbondata issue #2917: [WIP]Show load/insert/update/delete row number

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2917
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9648/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1392/



---


[GitHub] carbondata issue #2917: [WIP]Show load/insert/update/delete row number

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2917
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1390/



---


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1391/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread kumarvishal09
Github user kumarvishal09 commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
retest this please 


---


[GitHub] carbondata pull request #2917: [WIP]Show load/insert/update/delete row numbe...

2018-11-13 Thread kevinjmh
GitHub user kevinjmh opened a pull request:

https://github.com/apache/carbondata/pull/2917

[WIP]Show load/insert/update/delete row number

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kevinjmh/carbondata ProceededRowCount

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2917.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2917


commit f7d199e7d6d28c72124a2ad45635949976816d04
Author: Manhua 
Date:   2018-11-13T09:27:10Z

show load/insert/update/delete row number




---


[GitHub] carbondata issue #2915: [CARBONDATA-3095] Optimize the documentation of SDK/...

2018-11-13 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/2915
  
@KanakaKumar @jackylk @ajantha-bhat please review it.


---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1599/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9647/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1389/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1388/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9645/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1597/



---


[GitHub] carbondata issue #2872: [WIP] Added reusable buffer code

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2872
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1387/



---


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9644/



---


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1596/



---


[GitHub] carbondata issue #2909: [CARBONDATA-3089] Change task distribution for NO_SO...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2909
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9642/



---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1593/



---


[GitHub] carbondata issue #2909: [CARBONDATA-3089] Change task distribution for NO_SO...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2909
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1594/



---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9641/



---


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1386/



---


[GitHub] carbondata issue #2863: [WIP] Optimise decompressing while filling the vecto...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2863
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1592/



---


[GitHub] carbondata issue #2863: [WIP] Optimise decompressing while filling the vecto...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2863
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9640/



---


[GitHub] carbondata pull request #2901: [CARBONDATA-3081] Fixed NPE for boolean type ...

2018-11-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2901


---


[jira] [Resolved] (CARBONDATA-3081) NPE when boolean column has null values with Vectorized SDK reader

2018-11-13 Thread Manish Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manish Gupta resolved CARBONDATA-3081.
--
   Resolution: Fixed
 Assignee: Kunal Kapoor
Fix Version/s: 1.5.1

> NPE when boolean column has null values with Vectorized SDK reader
> --
>
> Key: CARBONDATA-3081
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3081
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kunal Kapoor
>Assignee: Kunal Kapoor
>Priority: Major
> Fix For: 1.5.1
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>






[GitHub] carbondata issue #2901: [CARBONDATA-3081] Fixed NPE for boolean type column ...

2018-11-13 Thread manishgupta88
Github user manishgupta88 commented on the issue:

https://github.com/apache/carbondata/pull/2901
  
LGTM


---


[GitHub] carbondata pull request #2895: [HOTFIX] Fix NPE in spark, when same vector r...

2018-11-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2895


---


[GitHub] carbondata issue #2908: [CARBONDATA-3087] Improve DESC FORMATTED output

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2908
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1385/



---


[GitHub] carbondata issue #2909: [CARBONDATA-3089] Change task distribution for NO_SO...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2909
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1384/



---


[GitHub] carbondata issue #2895: [HOTFIX] Fix NPE in spark, when same vector reads fi...

2018-11-13 Thread manishgupta88
Github user manishgupta88 commented on the issue:

https://github.com/apache/carbondata/pull/2895
  
LGTM


---


[GitHub] carbondata issue #2909: [CARBONDATA-3089] Change task distribution for NO_SO...

2018-11-13 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/2909
  
retest this please


---


[jira] [Resolved] (CARBONDATA-3065) by default disable inverted index for all the dimension column

2018-11-13 Thread Ravindra Pesala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-3065.
-
   Resolution: Fixed
Fix Version/s: 1.5.1

> by default disable inverted index for all the dimension column
> --
>
> Key: CARBONDATA-3065
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3065
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Major
> Fix For: 1.5.1
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> h3. Bottlenecks with inverted index:
>  # For each page the data is first sorted to generate the inverted index, so 
> data loading performance is impacted.
>  # Store size is larger because an inverted index is stored for each dimension 
> column, which results in more IO and impacts query performance.
>  # An extra lookup happens during query due to the presence of the inverted 
> index, which causes many cache-line misses and impacts query performance.





[GitHub] carbondata pull request #2886: [CARBONDATA-3065]make inverted index false by...

2018-11-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2886


---


[GitHub] carbondata issue #2886: [CARBONDATA-3065]make inverted index false by defaul...

2018-11-13 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2886
  
LGTM


---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1383/



---


[jira] [Resolved] (CARBONDATA-3075) Select Filter fails for Legacy store if DirectVectorFill is enabled

2018-11-13 Thread Ravindra Pesala (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-3075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-3075.
-
   Resolution: Fixed
Fix Version/s: 1.5.1

> Select Filter fails for Legacy store if DirectVectorFill is enabled
> ---
>
> Key: CARBONDATA-3075
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3075
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Indhumathi Muthumurugesh
>Priority: Major
> Fix For: 1.5.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Please find below steps to reproduce the issue:
>  # Create table and load data in legacy store
>  # In the new store, with DirectVectorFill enabled, execute a filter query and 
> observe the exception below:
> *This operation is not supported in this reader 
> org.apache.carbondata.core.datastore.chunk.reader.dimension.v2.CompressedDimensionChunkFileBasedReaderV2*





[GitHub] carbondata pull request #2896: [CARBONDATA-3075] Select Filter fails for Leg...

2018-11-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2896


---


[GitHub] carbondata issue #2898: [CARBONDATA-3077] Fixed query failure in fileformat ...

2018-11-13 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2898
  
retest this please


---


[GitHub] carbondata issue #2863: [WIP] Optimise decompressing while filling the vecto...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2863
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1382/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Failed  with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9639/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1591/



---


[GitHub] carbondata issue #2915: [CARBONDATA-3095] Optimize the documentation of SDK/...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2915
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1590/



---


[GitHub] carbondata issue #2915: [CARBONDATA-3095] Optimize the documentation of SDK/...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2915
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9638/



---


[GitHub] carbondata issue #2916: [CARBONDATA-3096] Wrong records size on the input me...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2916
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1381/



---


[GitHub] carbondata pull request #2916: [CARBONDATA-3096] Wrong records size on the i...

2018-11-13 Thread dhatchayani
GitHub user dhatchayani opened a pull request:

https://github.com/apache/carbondata/pull/2916

[CARBONDATA-3096] Wrong records size on the input metrics & Free the 
intermediate page used while adaptive encoding

(1) The scanned record result size is taken from the default batch size. It 
should be taken from the number of records actually scanned.
(2) The intermediate page used to sort in adaptive encoding should be freed.


 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [x] Testing done
Manual Testing
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhatchayani/carbondata CARBONDATA-3096

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2916.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2916


commit d85a3d4b68b5ec52b10cfc4eaa37d5d4bc0629d7
Author: dhatchayani 
Date:   2018-11-13T12:58:48Z

[CARBONDATA-3096] Wrong records size on the input metrics & Free the 
intermediate page used while adaptive encoding




---


[GitHub] carbondata issue #2915: [CARBONDATA-3095] Optimize the documentation of SDK/...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2915
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1380/



---


[GitHub] carbondata issue #2895: [HOTFIX] Fix NPE in spark, when same vector reads fi...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2895
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1589/



---


[GitHub] carbondata issue #2895: [HOTFIX] Fix NPE in spark, when same vector reads fi...

2018-11-13 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2895
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9637/



---


[jira] [Created] (CARBONDATA-3096) Wrong records size on the input metrics & Free the intermediate page used while adaptive encoding

2018-11-13 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-3096:
---

 Summary: Wrong records size on the input metrics & Free the 
intermediate page used while adaptive encoding
 Key: CARBONDATA-3096
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3096
 Project: CarbonData
  Issue Type: Bug
Reporter: dhatchayani
Assignee: dhatchayani


(1) The scanned record result size is taken from the default batch size. It should 
be taken from the number of records actually scanned.

(2) The intermediate page used to sort in adaptive encoding should be freed.
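Point (2) is a resource-lifetime fix: the intermediate sort page must be freed
on every code path once adaptive encoding is done with it. The actual fix is in
CarbonData's Java code; the C++ sketch below only illustrates the underlying
pattern, scope-bound release (RAII), with a hypothetical PageGuard that is not
CarbonData code.

```cpp
#include <functional>
#include <utility>

// Hypothetical guard that frees an intermediate page when it leaves scope,
// so the page is released on every path, including early returns and errors.
class PageGuard {
 public:
  explicit PageGuard(std::function<void()> freeFn) : freeFn_(std::move(freeFn)) {}
  ~PageGuard() {
    if (freeFn_) freeFn_();  // free the page exactly once, at scope exit
  }
  PageGuard(const PageGuard&) = delete;
  PageGuard& operator=(const PageGuard&) = delete;

 private:
  std::function<void()> freeFn_;
};

// Simulates an encode step; reports through *freed whether the
// intermediate page was released by the time encoding finished.
bool encodeWithGuard(bool* freed) {
  *freed = false;
  {
    PageGuard guard([freed] { *freed = true; });  // page "acquired" here
    // ... sort and adaptively encode using the intermediate page ...
  }  // guard destructor frees the page here
  return *freed;
}
```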





[GitHub] carbondata pull request #2915: [CARBONDATA-3095] Optimize the documentation ...

2018-11-13 Thread xubo245
GitHub user xubo245 opened a pull request:

https://github.com/apache/carbondata/pull/2915

[CARBONDATA-3095] Optimize the documentation of SDK/CSDK

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 No
 - [ ] Any backward compatibility impacted?
 No
 - [ ] Document update required?
Yes
 - [ ] Testing done
No need
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 
Jira-2951


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xubo245/carbondata CARBONDATA-3095_OptimizeDoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2915.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2915


commit 18cfd99905ca67b43b948fe90f8032c619ed4e9d
Author: xubo245 
Date:   2018-11-13T12:24:08Z

[CARBONDATA-3095] Optimize the documentation of SDK/CSDK




---


[jira] [Created] (CARBONDATA-3095) Optimize the documentation of SDK/CSDK

2018-11-13 Thread xubo245 (JIRA)
xubo245 created CARBONDATA-3095:
---

 Summary: Optimize the documentation of SDK/CSDK
 Key: CARBONDATA-3095
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3095
 Project: CarbonData
  Issue Type: Sub-task
Affects Versions: 1.5.1
Reporter: xubo245
Assignee: xubo245








[GitHub] carbondata pull request #2904: [HOTFIX] Remove search mode module

2018-11-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2904


---

