[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-07-04 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1113
  
LGTM




[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1113
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/320/





[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1113
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2906/





[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-07-04 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1113
  
retest this please




[GitHub] carbondata issue #1078: [CARBONDATA-1214]changing the delete syntax as in th...

2017-07-04 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1078
  
@ravikiran23 Please update the documentation to state that the old "DELETE SEGMENT" 
syntax is deprecated from version 1.2 and that the new syntax has been added.
@chenliang613 please comment on the same. 
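
A hedged illustration of what the documentation note could show, kept as plain SQL strings in Java; both statement forms are assumptions based on this PR's intent, not the authoritative grammar:

    // Sketch only: old vs. new segment-deletion DML (assumed forms; confirm against the updated docs).
    public final class DeleteSegmentSyntaxSketch {
      // Deprecated from version 1.2 (assumed old form):
      static final String OLD_BY_ID = "DELETE SEGMENT 1,2 FROM TABLE mydb.sales";
      // New forms added by this change (assumed):
      static final String NEW_BY_ID =
          "DELETE FROM TABLE mydb.sales WHERE SEGMENT.ID IN (1, 2)";
      static final String NEW_BY_DATE =
          "DELETE FROM TABLE mydb.sales WHERE SEGMENT.STARTTIME BEFORE '2017-07-01 00:00:00'";
    }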




[jira] [Updated] (CARBONDATA-1242) Query block distribution is more time before scheduling tasks to executor.

2017-07-04 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G updated CARBONDATA-1242:
-
Issue Type: Bug  (was: Improvement)

> Query block distribution is more time before scheduling tasks to executor.
> --
>
> Key: CARBONDATA-1242
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1242
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Rahul Kumar
> Fix For: 1.2.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> select * from issue2_2 limit 14
> Expected output (by submitter): query performance should be equal to the
> executor execution time.
> Actual output currently observed:
> The E2E time is 56.545 seconds, but the executor time is 0.7 seconds (two
> jobs: 0.2 seconds + 0.5 seconds).





[jira] [Resolved] (CARBONDATA-1242) Query block distribution is more time before scheduling tasks to executor.

2017-07-04 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G resolved CARBONDATA-1242.
--
   Resolution: Fixed
 Assignee: Rahul Kumar
Fix Version/s: 1.2.0

> Query block distribution is more time before scheduling tasks to executor.
> --
>
> Key: CARBONDATA-1242
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1242
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
> Fix For: 1.2.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> select * from issue2_2 limit 14
> Expected output (by submitter): query performance should be equal to the
> executor execution time.
> Actual output currently observed:
> The E2E time is 56.545 seconds, but the executor time is 0.7 seconds (two
> jobs: 0.2 seconds + 0.5 seconds).





[jira] [Updated] (CARBONDATA-1242) Query block distribution is more time before scheduling tasks to executor.

2017-07-04 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G updated CARBONDATA-1242:
-
Summary: Query block distribution is more time before scheduling tasks to 
executor.  (was: Query Performance should not be greater than executor process 
time)

> Query block distribution is more time before scheduling tasks to executor.
> --
>
> Key: CARBONDATA-1242
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1242
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> select * from issue2_2 limit 14
> Expected output (by submitter): query performance should be equal to the
> executor execution time.
> Actual output currently observed:
> The E2E time is 56.545 seconds, but the executor time is 0.7 seconds (two
> jobs: 0.2 seconds + 0.5 seconds).





[jira] [Commented] (CARBONDATA-1242) Query Performance should not be greater than executor process time

2017-07-04 Thread Venkata Ramana G (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074257#comment-16074257
 ] 

Venkata Ramana G commented on CARBONDATA-1242:
--

Block distribution is taking more time due to an unnecessary intermediate sort 
during distribution.
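
A minimal sketch of the idea behind the fix, with hypothetical types (this is not the actual CarbonData code): group the block metadata per node directly instead of sorting the whole block list first, so this driver-side step stays O(n) rather than O(n log n) before any task is scheduled.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class BlockDistributionSketch {
      // blockToNode: block id -> preferred node (hypothetical input shape)
      static Map<String, List<String>> groupByNode(Map<String, String> blockToNode) {
        Map<String, List<String>> blocksPerNode = new HashMap<>();
        for (Map.Entry<String, String> e : blockToNode.entrySet()) {
          // plain O(n) grouping; no intermediate sort of the block list is required
          blocksPerNode.computeIfAbsent(e.getValue(), k -> new ArrayList<>()).add(e.getKey());
        }
        return blocksPerNode;
      }
    }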

> Query Performance should not be greater than executor process time
> --
>
> Key: CARBONDATA-1242
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1242
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> select * from issue2_2 limit 14
> Expected output (by submitter): query performance should be equal to the
> executor execution time.
> Actual output currently observed:
> The E2E time is 56.545 seconds, but the executor time is 0.7 seconds (two
> jobs: 0.2 seconds + 0.5 seconds).





[GitHub] carbondata pull request #1108: [CARBONDATA-1242] performance issue resolved

2017-07-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1108




[GitHub] carbondata issue #1108: [CARBONDATA-1242] performance issue resolved

2017-07-04 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1108
  
LGTM




[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1061
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2905/





[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1061
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/319/





[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1061
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2904/





[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1061
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/318/





[GitHub] carbondata issue #1129: [CARBONDATA-1259] CompareTest improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1129
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2903/





[GitHub] carbondata issue #1129: [CARBONDATA-1259] CompareTest improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1129
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/317/





[jira] [Resolved] (CARBONDATA-1255) Remove "COLUMN_GROUPS" feature from documentation

2017-07-04 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen resolved CARBONDATA-1255.

Resolution: Fixed

> Remove "COLUMN_GROUPS" feature from documentation 
> --
>
> Key: CARBONDATA-1255
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1255
> Project: CarbonData
>  Issue Type: Improvement
>  Components: docs
>Affects Versions: 1.2.0
>Reporter: Vandana Yadav
>Assignee: Vandana Yadav
>Priority: Trivial
> Fix For: 1.2.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We should remove the "COLUMN_GROUPS" feature from the documentation, as this feature 
> has been removed.





[jira] [Updated] (CARBONDATA-1255) Remove "COLUMN_GROUPS" feature from documentation

2017-07-04 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen updated CARBONDATA-1255:
---
Fix Version/s: 1.2.0

> Remove "COLUMN_GROUPS" feature from documentation 
> --
>
> Key: CARBONDATA-1255
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1255
> Project: CarbonData
>  Issue Type: Improvement
>  Components: docs
>Affects Versions: 1.2.0
>Reporter: Vandana Yadav
>Assignee: Vandana Yadav
>Priority: Trivial
> Fix For: 1.2.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We should remove the "COLUMN_GROUPS" feature from the documentation, as this feature 
> has been removed.





[jira] [Assigned] (CARBONDATA-1255) Remove "COLUMN_GROUPS" feature from documentation

2017-07-04 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen reassigned CARBONDATA-1255:
--

Assignee: Vandana Yadav

> Remove "COLUMN_GROUPS" feature from documentation 
> --
>
> Key: CARBONDATA-1255
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1255
> Project: CarbonData
>  Issue Type: Improvement
>  Components: docs
>Affects Versions: 1.2.0
>Reporter: Vandana Yadav
>Assignee: Vandana Yadav
>Priority: Trivial
> Fix For: 1.2.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We should remove the "COLUMN_GROUPS" feature from the documentation, as this feature 
> has been removed.





[GitHub] carbondata pull request #1127: [CARBONDATA-1255]-updated "ddl-operation-on-c...

2017-07-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1127




[jira] [Updated] (CARBONDATA-1241) Single_Pass either should be blocked with Global_Sort

2017-07-04 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G updated CARBONDATA-1241:
-
Priority: Minor  (was: Major)

> Single_Pass either should be blocked with Global_Sort
> -
>
> Key: CARBONDATA-1241
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1241
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Minor
> Fix For: 1.2.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>






[GitHub] carbondata pull request #1109: [CARBONDATA-1241] Single_Pass either should b...

2017-07-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1109




[GitHub] carbondata issue #1129: [CARBONDATA-1259] CompareTest improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1129
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2902/





[GitHub] carbondata issue #1129: [CARBONDATA-1259] CompareTest improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1129
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/316/





[GitHub] carbondata issue #1109: [CARBONDATA-1241] Single_Pass either should be block...

2017-07-04 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1109
  
LGTM




[GitHub] carbondata pull request #1117: [CARBONDATA-757] Big decimal optimization

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1117#discussion_r125548362
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/ColumnPage.java ---
@@ -73,54 +82,58 @@ public int getPageSize() {
 return pageSize;
   }
 
-  private static ColumnPage createVarLengthPage(DataType dataType, int 
pageSize) {
+  private static ColumnPage createVarLengthPage(DataType dataType, int 
pageSize, int scale,
+  int precision) {
 if (unsafe) {
   try {
-return new UnsafeVarLengthColumnPage(dataType, pageSize);
+return new UnsafeVarLengthColumnPage(dataType, pageSize, scale, 
precision);
   } catch (MemoryException e) {
 throw new RuntimeException(e);
   }
 } else {
-  return new SafeVarLengthColumnPage(dataType, pageSize);
+  return new SafeVarLengthColumnPage(dataType, pageSize, scale, 
precision);
 }
   }
 
-  private static ColumnPage createFixLengthPage(DataType dataType, int 
pageSize) {
+  private static ColumnPage createFixLengthPage(DataType dataType, int 
pageSize, int scale,
+  int precision) {
 if (unsafe) {
   try {
-return new UnsafeFixLengthColumnPage(dataType, pageSize);
+return new UnsafeFixLengthColumnPage(dataType, pageSize, scale, 
precision);
   } catch (MemoryException e) {
 throw new RuntimeException(e);
   }
 } else {
-  return new SafeFixLengthColumnPage(dataType, pageSize);
+  return new SafeFixLengthColumnPage(dataType, pageSize, scale, 
pageSize);
 }
   }
 
-  private static ColumnPage createPage(DataType dataType, int pageSize) {
+  private static ColumnPage createPage(DataType dataType, int pageSize, 
int scale, int precision) {
 if (dataType.equals(BYTE_ARRAY) | dataType.equals(DECIMAL)) {
-  return createVarLengthPage(dataType, pageSize);
+  return createVarLengthPage(dataType, pageSize, scale, precision);
 } else {
-  return createFixLengthPage(dataType, pageSize);
+  return createFixLengthPage(dataType, pageSize, scale, precision);
 }
   }
 
-  public static ColumnPage newVarLengthPath(DataType dataType, int 
pageSize) {
+  public static ColumnPage newVarLengthPath(DataType dataType, int 
pageSize, int scale,
--- End diff --

And please correct the function name; it should be `newVarLengthPage`.




[GitHub] carbondata pull request #1117: [CARBONDATA-757] Big decimal optimization

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1117#discussion_r125548330
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/ColumnPage.java ---
@@ -73,54 +82,58 @@ public int getPageSize() {
 return pageSize;
   }
 
-  private static ColumnPage createVarLengthPage(DataType dataType, int 
pageSize) {
+  private static ColumnPage createVarLengthPage(DataType dataType, int 
pageSize, int scale,
+  int precision) {
 if (unsafe) {
   try {
-return new UnsafeVarLengthColumnPage(dataType, pageSize);
+return new UnsafeVarLengthColumnPage(dataType, pageSize, scale, 
precision);
   } catch (MemoryException e) {
 throw new RuntimeException(e);
   }
 } else {
-  return new SafeVarLengthColumnPage(dataType, pageSize);
+  return new SafeVarLengthColumnPage(dataType, pageSize, scale, 
precision);
 }
   }
 
-  private static ColumnPage createFixLengthPage(DataType dataType, int 
pageSize) {
+  private static ColumnPage createFixLengthPage(DataType dataType, int 
pageSize, int scale,
+  int precision) {
 if (unsafe) {
   try {
-return new UnsafeFixLengthColumnPage(dataType, pageSize);
+return new UnsafeFixLengthColumnPage(dataType, pageSize, scale, 
precision);
   } catch (MemoryException e) {
 throw new RuntimeException(e);
   }
 } else {
-  return new SafeFixLengthColumnPage(dataType, pageSize);
+  return new SafeFixLengthColumnPage(dataType, pageSize, scale, 
pageSize);
 }
   }
 
-  private static ColumnPage createPage(DataType dataType, int pageSize) {
+  private static ColumnPage createPage(DataType dataType, int pageSize, 
int scale, int precision) {
 if (dataType.equals(BYTE_ARRAY) | dataType.equals(DECIMAL)) {
-  return createVarLengthPage(dataType, pageSize);
+  return createVarLengthPage(dataType, pageSize, scale, precision);
 } else {
-  return createFixLengthPage(dataType, pageSize);
+  return createFixLengthPage(dataType, pageSize, scale, precision);
 }
   }
 
-  public static ColumnPage newVarLengthPath(DataType dataType, int 
pageSize) {
+  public static ColumnPage newVarLengthPath(DataType dataType, int 
pageSize, int scale,
--- End diff --

Can you create another overload of `newVarLengthPage` which accepts only 
`DataType dataType, int pageSize` and passes -1, -1 to this function?
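
A sketch of the requested overload, meant to sit next to the four-argument method inside `ColumnPage`; only the -1, -1 delegation comes from the comment above, and treating -1 as "scale/precision not applicable" is an assumption:

    public static ColumnPage newVarLengthPage(DataType dataType, int pageSize) {
      // -1, -1: scale and precision do not apply to non-decimal pages (assumption)
      return newVarLengthPage(dataType, pageSize, -1, -1);
    }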




[GitHub] carbondata pull request #1117: [CARBONDATA-757] Big decimal optimization

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1117#discussion_r125548213
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/VarLengthColumnPageBase.java
 ---
@@ -83,7 +84,59 @@ public void setByteArrayPage(byte[][] byteArray) {
   /**
* Create a new column page based on the LV (Length Value) encoded bytes
*/
-  static ColumnPage newDecimalColumnPage(byte[] lvEncodedBytes) throws 
MemoryException {
+  static ColumnPage newDecimalColumnPage(byte[] lvEncodedBytes, int scale, 
int precision)
+  throws MemoryException {
+DecimalConverterFactory.DecimalConverter decimalConverter =
+DecimalConverterFactory.INSTANCE.getDecimalConverter(precision, 
scale);
+int size = decimalConverter.getSize();
+if (size < 0) {
+  return getLegacyColumnPage(lvEncodedBytes, scale, precision, 
DataType.DECIMAL);
+} else {
+  // Here the size is always fixed.
+  return getDecimalColumnPage(lvEncodedBytes, scale, precision, size);
+}
+  }
+
+  /**
+   * Create a new column page based on the LV (Length Value) encoded bytes
+   */
+  static ColumnPage newVarLengthColumnPage(byte[] lvEncodedBytes, int 
scale, int precision)
+  throws MemoryException {
+return getLegacyColumnPage(lvEncodedBytes, scale, precision, 
DataType.BYTE_ARRAY);
--- End diff --

I think it is better to rename `LegacyColumnPage` to `LVBytesColumnPage`; 
it is hard to know what "legacy" means.




[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125546316
  
--- Diff: 
hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputFormatNew.java ---
@@ -0,0 +1,566 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.hadoop;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.BitSet;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.datastore.TableSegmentUniqueIdentifier;
+import org.apache.carbondata.core.indexstore.AbstractTableDataMap;
+import org.apache.carbondata.core.indexstore.Blocklet;
+import org.apache.carbondata.core.indexstore.DataMapStoreManager;
+import org.apache.carbondata.core.indexstore.DataMapType;
+import org.apache.carbondata.core.keygenerator.KeyGenException;
+import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
+import org.apache.carbondata.core.metadata.ColumnarFormatVersion;
+import org.apache.carbondata.core.metadata.schema.PartitionInfo;
+import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
+import org.apache.carbondata.core.mutate.CarbonUpdateUtil;
+import org.apache.carbondata.core.mutate.SegmentUpdateDetails;
+import org.apache.carbondata.core.mutate.UpdateVO;
+import org.apache.carbondata.core.mutate.data.BlockMappingVO;
+import org.apache.carbondata.core.scan.expression.Expression;
+import org.apache.carbondata.core.scan.filter.FilterExpressionProcessor;
+import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
+import org.apache.carbondata.core.scan.model.CarbonQueryPlan;
+import org.apache.carbondata.core.scan.model.QueryModel;
+import org.apache.carbondata.core.scan.partition.PartitionUtil;
+import org.apache.carbondata.core.scan.partition.Partitioner;
+import org.apache.carbondata.core.stats.QueryStatistic;
+import org.apache.carbondata.core.stats.QueryStatisticsConstants;
+import org.apache.carbondata.core.stats.QueryStatisticsRecorder;
+import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
+import org.apache.carbondata.core.statusmanager.SegmentUpdateStatusManager;
+import org.apache.carbondata.core.util.CarbonTimeStatisticsFactory;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.core.util.path.CarbonStorePath;
+import org.apache.carbondata.core.util.path.CarbonTablePath;
+import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
+import 
org.apache.carbondata.hadoop.readsupport.impl.DictionaryDecodeReadSupport;
+import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
+import org.apache.carbondata.hadoop.util.ObjectSerializationUtil;
+import org.apache.carbondata.hadoop.util.SchemaReader;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.InvalidPathException;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.input.FileSplit;
+import org.apache.hadoop.mapreduce.security.TokenCache;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Carbon Input format class representing one carbon table
+ */
+public class CarbonInputFormatNew<T> extends FileInputFormat<Void, T> {
+
+  // comma separated list of input segment numbers
  

[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125546253
  
--- Diff: 
hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputFormatNew.java ---
@@ -0,0 +1,566 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.hadoop;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.BitSet;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.datastore.TableSegmentUniqueIdentifier;
+import org.apache.carbondata.core.indexstore.AbstractTableDataMap;
+import org.apache.carbondata.core.indexstore.Blocklet;
+import org.apache.carbondata.core.indexstore.DataMapStoreManager;
+import org.apache.carbondata.core.indexstore.DataMapType;
+import org.apache.carbondata.core.keygenerator.KeyGenException;
+import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
+import org.apache.carbondata.core.metadata.ColumnarFormatVersion;
+import org.apache.carbondata.core.metadata.schema.PartitionInfo;
+import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
+import org.apache.carbondata.core.mutate.CarbonUpdateUtil;
+import org.apache.carbondata.core.mutate.SegmentUpdateDetails;
+import org.apache.carbondata.core.mutate.UpdateVO;
+import org.apache.carbondata.core.mutate.data.BlockMappingVO;
+import org.apache.carbondata.core.scan.expression.Expression;
+import org.apache.carbondata.core.scan.filter.FilterExpressionProcessor;
+import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
+import org.apache.carbondata.core.scan.model.CarbonQueryPlan;
+import org.apache.carbondata.core.scan.model.QueryModel;
+import org.apache.carbondata.core.scan.partition.PartitionUtil;
+import org.apache.carbondata.core.scan.partition.Partitioner;
+import org.apache.carbondata.core.stats.QueryStatistic;
+import org.apache.carbondata.core.stats.QueryStatisticsConstants;
+import org.apache.carbondata.core.stats.QueryStatisticsRecorder;
+import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
+import org.apache.carbondata.core.statusmanager.SegmentUpdateStatusManager;
+import org.apache.carbondata.core.util.CarbonTimeStatisticsFactory;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.core.util.path.CarbonStorePath;
+import org.apache.carbondata.core.util.path.CarbonTablePath;
+import org.apache.carbondata.hadoop.readsupport.CarbonReadSupport;
+import 
org.apache.carbondata.hadoop.readsupport.impl.DictionaryDecodeReadSupport;
+import org.apache.carbondata.hadoop.util.CarbonInputFormatUtil;
+import org.apache.carbondata.hadoop.util.ObjectSerializationUtil;
+import org.apache.carbondata.hadoop.util.SchemaReader;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.InvalidPathException;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.input.FileSplit;
+import org.apache.hadoop.mapreduce.security.TokenCache;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Carbon Input format class representing one carbon table
+ */
+public class CarbonInputFormatNew<T> extends FileInputFormat<Void, T> {
--- End diff --

Can you change to implement 

[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125546189
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletTableMap.java
 ---
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.indexstore.blockletindex;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.carbondata.core.cache.Cache;
+import org.apache.carbondata.core.cache.CacheProvider;
+import org.apache.carbondata.core.cache.CacheType;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFile;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFileFilter;
+import org.apache.carbondata.core.datastore.impl.FileFactory;
+import org.apache.carbondata.core.events.ChangeEvent;
+import org.apache.carbondata.core.indexstore.AbstractTableDataMap;
+import org.apache.carbondata.core.indexstore.DataMap;
+import org.apache.carbondata.core.indexstore.DataMapDistributable;
+import org.apache.carbondata.core.indexstore.DataMapWriter;
+import 
org.apache.carbondata.core.indexstore.TableBlockIndexUniqueIdentifier;
+import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
+import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
+
+/**
+ * Table map for blocklet
+ */
+public class BlockletTableMap extends AbstractTableDataMap {
--- End diff --

After creating a `DataMap.Builder` interface, we can remove the abstract class 
and make this class final. It should not be extensible.




[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125546095
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletTableMap.java
 ---
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.indexstore.blockletindex;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.carbondata.core.cache.Cache;
+import org.apache.carbondata.core.cache.CacheProvider;
+import org.apache.carbondata.core.cache.CacheType;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFile;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFileFilter;
+import org.apache.carbondata.core.datastore.impl.FileFactory;
+import org.apache.carbondata.core.events.ChangeEvent;
+import org.apache.carbondata.core.indexstore.AbstractTableDataMap;
+import org.apache.carbondata.core.indexstore.DataMap;
+import org.apache.carbondata.core.indexstore.DataMapDistributable;
+import org.apache.carbondata.core.indexstore.DataMapWriter;
+import 
org.apache.carbondata.core.indexstore.TableBlockIndexUniqueIdentifier;
+import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
+import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
+
+/**
+ * Table map for blocklet
+ */
+public class BlockletTableMap extends AbstractTableDataMap {
+
+  private String dataMapName;
+
+  private AbsoluteTableIdentifier identifier;
+
+  private Map segmentMap = 
new HashMap<>();
+
+  private Cache cache;
+
+  @Override public void init(AbsoluteTableIdentifier identifier, String 
dataMapName) {
+this.identifier = identifier;
+this.dataMapName = dataMapName;
+cache = CacheProvider.getInstance()
+.createCache(CacheType.DRIVER_BLOCKLET_DATAMAP, 
identifier.getStorePath());
+  }
+
+  @Override public DataMapWriter getMetaDataWriter() {
+return null;
+  }
+
+  @Override
+  public DataMapWriter getDataMapWriter(AbsoluteTableIdentifier 
identifier, String segmentId) {
+return null;
+  }
+
+  @Override protected List<DataMap> getDataMaps(String segmentId) {
--- End diff --

Please move this into a `DataMap.Builder` interface, and also add a 
`DataMap.Writer` interface.
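
A hedged sketch of what the proposed nested interfaces could look like; the names follow the comment above, but the method sets are assumptions rather than an agreed design:

    import java.io.IOException;
    import java.util.List;

    public interface DataMap {
      /** Builds or loads the DataMap instances belonging to one segment. */
      interface Builder {
        List<DataMap> getDataMaps(String segmentId) throws IOException;
      }
      /** Writes datamap content while a segment is being loaded. */
      interface Writer {
        void write(DataMapRow row) throws IOException;
        void finish() throws IOException;
      }
    }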




[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125546037
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletTableMap.java
 ---
@@ -0,0 +1,123 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.indexstore.blockletindex;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.carbondata.core.cache.Cache;
+import org.apache.carbondata.core.cache.CacheProvider;
+import org.apache.carbondata.core.cache.CacheType;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFile;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFileFilter;
+import org.apache.carbondata.core.datastore.impl.FileFactory;
+import org.apache.carbondata.core.events.ChangeEvent;
+import org.apache.carbondata.core.indexstore.AbstractTableDataMap;
+import org.apache.carbondata.core.indexstore.DataMap;
+import org.apache.carbondata.core.indexstore.DataMapDistributable;
+import org.apache.carbondata.core.indexstore.DataMapWriter;
+import 
org.apache.carbondata.core.indexstore.TableBlockIndexUniqueIdentifier;
+import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier;
+import org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
+
+/**
+ * Table map for blocklet
+ */
+public class BlockletTableMap extends AbstractTableDataMap {
+
+  private String dataMapName;
+
+  private AbsoluteTableIdentifier identifier;
+
+  private Map segmentMap = 
new HashMap<>();
+
+  private Cache cache;
+
+  @Override public void init(AbsoluteTableIdentifier identifier, String 
dataMapName) {
+this.identifier = identifier;
+this.dataMapName = dataMapName;
+cache = CacheProvider.getInstance()
+.createCache(CacheType.DRIVER_BLOCKLET_DATAMAP, 
identifier.getStorePath());
+  }
+
+  @Override public DataMapWriter getMetaDataWriter() {
--- End diff --

I think we can defer this abstraction until there is table-level data map 
metadata, so we can delete this for now.




[GitHub] carbondata issue #1125: [CarbonData-1250] Change default partition id & Add ...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1125
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2901/





[GitHub] carbondata issue #1125: [CarbonData-1250] Change default partition id & Add ...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1125
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/315/





[GitHub] carbondata issue #1129: [CARBONDATA-1259] CompareTest improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1129
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2900/





[GitHub] carbondata issue #1129: [CARBONDATA-1259] CompareTest improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1129
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/314/





[GitHub] carbondata issue #1135: [CARBONDATA-1265] Fix AllDictionary because it is on...

2017-07-04 Thread chenerlu
Github user chenerlu commented on the issue:

https://github.com/apache/carbondata/pull/1135
  
Have merged.




[GitHub] carbondata pull request #1135: [CARBONDATA-1265] Fix AllDictionary because i...

2017-07-04 Thread chenerlu
Github user chenerlu closed the pull request at:

https://github.com/apache/carbondata/pull/1135




[GitHub] carbondata issue #1129: [CARBONDATA-1259] CompareTest improvement

2017-07-04 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/1129
  
retest this please




[jira] [Resolved] (CARBONDATA-1265) Fix AllDictionaryExample because it is only supported when single_pass is true

2017-07-04 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen resolved CARBONDATA-1265.

Resolution: Fixed

> Fix AllDictionaryExample because it is only supported when single_pass is true
> --
>
> Key: CARBONDATA-1265
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1265
> Project: CarbonData
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.1.0
>Reporter: chenerlu
>Assignee: chenerlu
>Priority: Minor
> Fix For: 1.1.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
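
For context, a hedged sketch of the kind of load statement the example exercises; the option names follow the CarbonData load DML, but the exact statement, table name, and paths are placeholders:

    final class AllDictionaryLoadSketch {
      // Assumption: ALL_DICTIONARY_PATH is honored only when SINGLE_PASS is 'true'.
      static final String LOAD_SQL =
          "LOAD DATA LOCAL INPATH '/tmp/data.csv' INTO TABLE t1 "
              + "OPTIONS('ALL_DICTIONARY_PATH'='/tmp/data.dictionary', 'SINGLE_PASS'='true')";
    }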






[jira] [Updated] (CARBONDATA-1265) Fix AllDictionaryExample because it is only supported when single_pass is true

2017-07-04 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen updated CARBONDATA-1265:
---
Affects Version/s: 1.1.0

> Fix AllDictionaryExample because it is only supported when single_pass is true
> --
>
> Key: CARBONDATA-1265
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1265
> Project: CarbonData
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.1.0
>Reporter: chenerlu
>Assignee: chenerlu
>Priority: Minor
> Fix For: 1.1.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>






[jira] [Updated] (CARBONDATA-1265) Fix AllDictionaryExample because it is only supported when single_pass is true

2017-07-04 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen updated CARBONDATA-1265:
---
Fix Version/s: 1.1.1

> Fix AllDictionaryExample because it is only supported when single_pass is true
> --
>
> Key: CARBONDATA-1265
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1265
> Project: CarbonData
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.1.0
>Reporter: chenerlu
>Assignee: chenerlu
>Priority: Minor
> Fix For: 1.1.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>






[jira] [Updated] (CARBONDATA-1265) Fix AllDictionaryExample because it is only supported when single_pass is true

2017-07-04 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen updated CARBONDATA-1265:
---
Component/s: examples

> Fix AllDictionaryExample because it is only supported when single_pass is true
> --
>
> Key: CARBONDATA-1265
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1265
> Project: CarbonData
>  Issue Type: Bug
>  Components: examples
>Affects Versions: 1.1.0
>Reporter: chenerlu
>Assignee: chenerlu
>Priority: Minor
> Fix For: 1.1.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>






[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125543490
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/row/DataMapRow.java ---
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.core.indexstore.row;
+
+import org.apache.carbondata.core.indexstore.schema.DataMapSchema;
+
+/**
+ * Index row
+ */
+public abstract class DataMapRow {
--- End diff --

Can you describe this class in more detail, for example by saying that it contains 
a list of fields which the client can fill by ordinal?
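
A hedged sketch of the kind of class comment being asked for; the wording is an assumption built from this review comment:

    /**
     * One row of a datamap/index. It holds an ordered list of fields whose layout is
     * described by a DataMapSchema; callers set and read individual fields by their
     * ordinal (position in the schema) rather than by name.
     */
    public abstract class DataMapRow { /* ... */ }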




[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125542074
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/AbstractTableDataMap.java
 ---
@@ -84,11 +121,11 @@
* @param filterExp
* @return
*/
-  boolean isFiltersSupported(FilterResolverIntf filterExp);
+  public abstract boolean isFiltersSupported(FilterResolverIntf filterExp);
--- End diff --

I feel it is better to accept a `FilterType` as input and return a boolean, 
so the client is not dependent on Carbon's filter interface.
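
A hedged sketch of the suggested signature; the `FilterType` enum and its values are assumptions, and the point is only that the caller passes a simple category instead of `FilterResolverIntf`:

    public abstract class TableDataMapSketch {
      // Hypothetical enum; the real value set would need to be agreed on.
      public enum FilterType { EQUALTO, GREATER_THAN, LESS_THAN, LIKE }

      // Replaces isFiltersSupported(FilterResolverIntf): the client passes a simple
      // filter category instead of Carbon's filter resolver tree.
      public abstract boolean isFiltersSupported(FilterType filterType);
    }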




[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125541188
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/DataMapStoreManager.java
 ---
@@ -30,7 +31,7 @@
 
   private static DataMapStoreManager instance = new DataMapStoreManager();
 
-  private Map<DataMapType, Map<String, TableDataMap>> dataMapMappping = new HashMap<>();
+  private Map<DataMapType, Map<String, AbstractTableDataMap>> dataMapMappping = new HashMap<>();
--- End diff --

I feel it is better to keep the map keyed by table, so that the client can easily get all data maps for one table.
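
A hedged sketch of the access pattern being suggested; the key and value types are assumptions:

    final class DataMapLookupSketch {
      // Keyed by table, so a client gets every datamap of one table in a single lookup.
      private final Map<AbsoluteTableIdentifier, List<AbstractTableDataMap>> dataMapsByTable =
          new HashMap<>();

      List<AbstractTableDataMap> dataMapsOf(AbsoluteTableIdentifier table) {
        return dataMapsByTable.getOrDefault(table, Collections.emptyList());
      }
    }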




[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125539888
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/DataMapStoreManager.java
 ---
@@ -30,7 +31,7 @@
 
   private static DataMapStoreManager instance = new DataMapStoreManager();
 
-  private Map<DataMapType, Map<String, TableDataMap>> dataMapMappping = new HashMap<>();
+  private Map<DataMapType, Map<String, AbstractTableDataMap>> dataMapMappping = new HashMap<>();
--- End diff --

Please add a comment to describe this map.
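
One possible comment, as requested; the wording is an assumption based on how the map is used in this class:

    // Registered table-level datamaps, keyed first by DataMapType and then by datamap name.
    // createTableDataMap() registers new instances here and getDataMap() looks them up.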




[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125540172
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/DataMapStoreManager.java
 ---
@@ -69,20 +70,20 @@ public TableDataMap getDataMap(AbsoluteTableIdentifier 
identifier, String dataMa
* @param mapType
* @return
*/
-  public TableDataMap createTableDataMap(AbsoluteTableIdentifier 
identifier, DataMapType mapType,
-  String dataMapName) {
-Map map = dataMapMappping.get(mapType);
+  public AbstractTableDataMap createTableDataMap(AbsoluteTableIdentifier 
identifier,
+  DataMapType mapType, String dataMapName) {
+Map map = dataMapMappping.get(mapType);
 if (map == null) {
   map = new HashMap<>();
   dataMapMappping.put(mapType, map);
 }
-TableDataMap dataMap = map.get(dataMapName);
+AbstractTableDataMap dataMap = map.get(dataMapName);
 if (dataMap != null) {
   throw new RuntimeException("Already datamap exists in that path with 
type " + mapType);
 }
 
 try {
-  //TODO create datamap using @mapType.getClassName())
+  dataMap = (AbstractTableDataMap) 
(Class.forName(mapType.getClassName()).newInstance());
 } catch (Exception e) {
   LOGGER.error(e);
--- End diff --

should not ignore the exception
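
A minimal illustration of the change being asked for: slot a re-throw into the
catch block of the method quoted above so the failure is propagated instead of
only logged (the wrapped exception type is an assumption):

    try {
      dataMap = (AbstractTableDataMap) (Class.forName(mapType.getClassName()).newInstance());
    } catch (Exception e) {
      LOGGER.error(e);
      // Re-throw so the caller sees the failure instead of receiving a broken datamap.
      throw new RuntimeException(
          "Failed to instantiate datamap class " + mapType.getClassName(), e);
    }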


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125539854
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/DataMapStoreManager.java
 ---
@@ -30,7 +31,7 @@
 
--- End diff --

Please modify the comment; this is not an index table.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1099: [CARBONDATA-1232] Datamap implementation for ...

2017-07-04 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1099#discussion_r125539784
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/AbstractTableDataMap.java
 ---
@@ -26,28 +27,29 @@
  * DataMap at the table level, user can add any number of datamaps for one 
table. Depends
  * on the filter condition it can prune the blocklets.
  */
-public interface TableDataMap extends EventListener {
+public abstract class AbstractTableDataMap implements EventListener {
--- End diff --

How about naming it `TableDataMap` and renaming `DataMap` to `SegmentDataMap`?
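
A tiny sketch of the naming this comment proposes; the signatures are
placeholders (the real interfaces would use Carbon's Blocklet and filter types):

    import java.util.List;

    /** Table-level coordinator: one instance per table, owning the per-segment datamaps. */
    public abstract class TableDataMap {
      /** Returns the segment-level datamaps for the given segment. */
      public abstract List<SegmentDataMap> getDataMaps(String segmentId);
    }

    /** Segment-level index: prunes data inside a single segment. */
    interface SegmentDataMap {
      /** Returns identifiers of the blocks/blocklets that may match the filter. */
      List<String> prune(String filterDescription);
    }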


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1064: [CARBONDATA-1173] Stream ingestion - write path fram...

2017-07-04 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/1064
  
LGTM


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1124: [CARBONDATA-1257] Measure Filter implementati...

2017-07-04 Thread sounakr
Github user sounakr closed the pull request at:

https://github.com/apache/carbondata/pull/1124


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1061
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/313/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1102: [CARBONDATA-1098] Change page statistics use exact t...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1102
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/312/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1135: [CARBONDATA-1265] Fix AllDictionary because it is on...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1135
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/311/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1061
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2899/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1135: [CARBONDATA-1265] Fix AllDictionary because it is on...

2017-07-04 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1135
  
Can one of the admins verify this patch?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---



[GitHub] carbondata pull request #1135: [CARBONDATA-1265] Fix AllDictionary because i...

2017-07-04 Thread chenerlu
GitHub user chenerlu opened a pull request:

https://github.com/apache/carbondata/pull/1135

[CARBONDATA-1265] Fix AllDictionary because it is only supported when 
single_pass is true

Fix AllDictionary because it is only supported when single_pass is true

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chenerlu/incubator-carbondata 
branch-1.1-release

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1135






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (CARBONDATA-1265) Fix AllDictionaryExample because it is only supported when single_pass is true

2017-07-04 Thread chenerlu (JIRA)
chenerlu created CARBONDATA-1265:


 Summary: Fix AllDictionaryExample because it is only supported 
when single_pass is true
 Key: CARBONDATA-1265
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1265
 Project: CarbonData
  Issue Type: Bug
Reporter: chenerlu
Assignee: chenerlu
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1263) Single pass load does not take default value false for blank or invalid single pass value

2017-07-04 Thread ayushmantri (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ayushmantri reassigned CARBONDATA-1263:
---

Assignee: ayushmantri

> Single pass load does not take default value false for blank or invalid 
> single pass value
> -
>
> Key: CARBONDATA-1263
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1263
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.1.0
> Environment: 3 node cluster
>Reporter: Chetan Bhat
>Assignee: ayushmantri
>Priority: Minor
>
> Issue : Single pass load does not take default value false for blank or 
> invalid single pass value.
> 0: jdbc:hive2://10.19.91.224:22550/default> LOAD DATA  inpath 
> 'hdfs://hacluster/chetan/1.csv' into table flow_carbon_test4 
> options('DELIMITER'=',', 
> 'QUOTECHAR'='"','COLUMNDICT'='test:hdfs://hacluster/chetan/MSISDN.csv','SINGLE_PASS'='','FILEHEADER'='test');
> +-+--+
> | Result  |
> +-+--+
> +-+--+
> No rows selected (37.628 seconds)
> 0: jdbc:hive2://10.19.91.224:22550/default> LOAD DATA  inpath 
> 'hdfs://hacluster/chetan/1.csv' into table flow_carbon_test4 
> options('DELIMITER'=',', 
> 'QUOTECHAR'='"','COLUMNDICT'='test:hdfs://hacluster/chetan/MSISDN.csv','SINGLE_PASS'='1234','FILEHEADER'='test,test1');
> +-+--+
> | Result  |
> +-+--+
> +-+--+
> No rows selected (39.131 seconds)
> Expected: Validation should be provided and the load should fail with an error
> message. The default value of single pass (false) should be applied for a
> blank/invalid single pass value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1102: [CARBONDATA-1098] Change page statistics use exact t...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1102
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2897/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (CARBONDATA-1263) Single pass load does not take default value false for blank or invalid single pass value

2017-07-04 Thread Chetan Bhat (JIRA)
Chetan Bhat created CARBONDATA-1263:
---

 Summary: Single pass load does not take default value false for 
blank or invalid single pass value
 Key: CARBONDATA-1263
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1263
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.1.0
 Environment: 3 node cluster
Reporter: Chetan Bhat
Priority: Minor


Issue : Single pass load does not take default value false for blank or invalid 
single pass value.

0: jdbc:hive2://10.19.91.224:22550/default> LOAD DATA  inpath 
'hdfs://hacluster/chetan/1.csv' into table flow_carbon_test4 
options('DELIMITER'=',', 
'QUOTECHAR'='"','COLUMNDICT'='test:hdfs://hacluster/chetan/MSISDN.csv','SINGLE_PASS'='','FILEHEADER'='test');
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (37.628 seconds)

0: jdbc:hive2://10.19.91.224:22550/default> LOAD DATA  inpath 
'hdfs://hacluster/chetan/1.csv' into table flow_carbon_test4 
options('DELIMITER'=',', 
'QUOTECHAR'='"','COLUMNDICT'='test:hdfs://hacluster/chetan/MSISDN.csv','SINGLE_PASS'='1234','FILEHEADER'='test,test1');
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (39.131 seconds)


Expected: Validation should be provided and the load should fail with an error
message. The default value of single pass (false) should be applied for a
blank/invalid single pass value.
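
A minimal sketch of one reading of the expected behaviour above: a blank value
falls back to the documented default (false), while a non-boolean value is
rejected so the load fails with a clear message. Method and constant names are
illustrative only, not CarbonData's actual option-parsing code:

    public final class SinglePassOptionSketch {

      /** Parses the SINGLE_PASS load option with a default of false. */
      static boolean parseSinglePass(String value) {
        if (value == null || value.trim().isEmpty()) {
          return false;                                   // documented default
        }
        String normalized = value.trim().toLowerCase();
        if (normalized.equals("true") || normalized.equals("false")) {
          return Boolean.parseBoolean(normalized);
        }
        throw new IllegalArgumentException(
            "Invalid value for SINGLE_PASS: '" + value + "', expected true or false.");
      }
    }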




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1141) Data load is partially successful but delete error

2017-07-04 Thread Jatin (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073607#comment-16073607
 ] 

Jatin commented on CARBONDATA-1141:
---

I have tried the same scenario with the latest code but was not able to
reproduce it. Please provide more details.

> Data load is partially successful  but delete error
> ---
>
> Key: CARBONDATA-1141
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1141
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration, sql
>Affects Versions: 1.2.0
> Environment: spark on 
> yarn,carbondata1.2.0,hadoop2.7,spark2.1.0,hive2.1.0
>Reporter: zhuzhibin
> Fix For: 1.2.0
>
> Attachments: error1.png, error.png
>
>
> When I tried to load data into the table (data size is about 300 million), the
> log showed me that "Data load is partially successful for table",
> but when I executed a delete table operation some errors appeared; the error
> message is "java.lang.ArrayIndexOutOfBoundsException: 1
> at 
> org.apache.carbondata.core.mutate.CarbonUpdateUtil.getRequiredFieldFromTID(CarbonUpdateUtil.java:67)".
> When I executed another delete table operation with a where condition it was
> successful, but executing a select operation then produced
> "java.lang.ArrayIndexOutOfBoundsException Driver stacktrace:
>   at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)"
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata pull request #1115: [CARBONDATA-1247]Block pruning not working fo...

2017-07-04 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1115#discussion_r125459962
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
 ---
@@ -66,6 +68,19 @@ object CastExpressionOptimization {
 }
   }
 
+  def typeCastStringToLongForDateType(v: Any): Any = {
--- End diff --

Use stringToDate instead of stringToTimestamp for the date datatype.
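
For context, a small self-contained Java illustration (plain java.time, not
Spark or Carbon code) of why a DATE column needs a date parser rather than a
timestamp parser: the two produce values in different units (days vs.
milliseconds since the epoch), so pruning day-encoded data with a timestamp
value cannot match:

    import java.time.LocalDate;
    import java.time.ZoneOffset;

    public final class DateVsTimestampUnits {
      public static void main(String[] args) {
        String literal = "1972-12-01";

        // Date semantics: days since the epoch (what a DATE column typically stores).
        long epochDays = LocalDate.parse(literal).toEpochDay();          // 1065

        // Timestamp semantics: milliseconds since the epoch.
        long epochMillis = LocalDate.parse(literal)
            .atStartOfDay()
            .toInstant(ZoneOffset.UTC)
            .toEpochMilli();                                             // 92016000000

        System.out.println("epochDays   = " + epochDays);
        System.out.println("epochMillis = " + epochMillis);
      }
    }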


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1115: [CARBONDATA-1247]Block pruning not working fo...

2017-07-04 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1115#discussion_r125459816
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
 ---
@@ -66,6 +68,19 @@ object CastExpressionOptimization {
 }
   }
 
+  def typeCastStringToLongForDateType(v: Any): Any = {
+try {
+  // spark also uses castToTimestamp only to convert  time to long.So 
to syn with spark ,
+  // Filter cast format should be same so used castToTimestamp method .
+  // Spark uses it in Cast.scala under ConstantFolding Rule before 
carbon optimizer)
+  val value = 
DateTimeUtils.stringToTimestamp(UTF8String.fromString(v.toString)).get
--- End diff --

When parsing fails it returns None; the None case behaviour needs to be
validated against Hive, i.e. whether null values are considered in the output
or not.
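
A small generic Java illustration of the point about the parse-failure (None)
path: the behaviour for an unparsable literal has to be decided explicitly
(treat as null, keep the original expression, or fail) rather than
unconditionally unwrapping the result. Names here are placeholders, not Spark
APIs:

    import java.time.LocalDate;
    import java.time.format.DateTimeParseException;
    import java.util.Optional;

    public final class SafeDateCast {
      /** Returns the epoch-day value, or empty if the literal cannot be parsed as a date. */
      static Optional<Long> tryCastToEpochDay(String literal) {
        try {
          return Optional.of(LocalDate.parse(literal).toEpochDay());
        } catch (DateTimeParseException e) {
          return Optional.empty();   // the "None" case: do not blindly unwrap it
        }
      }

      public static void main(String[] args) {
        // The caller decides what an unparsable literal means for the filter,
        // e.g. fall back to the original (uncast) expression instead of failing.
        System.out.println(tryCastToEpochDay("1972-12-01"));   // Optional[1065]
        System.out.println(tryCastToEpochDay("not-a-date"));   // Optional.empty
      }
    }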


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (CARBONDATA-1200) update data failed on spark 1.6.2

2017-07-04 Thread Ramandeep Kaur (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073570#comment-16073570
 ] 

Ramandeep Kaur commented on CARBONDATA-1200:


Hi Jarck,
I have run the update query with 30 records and it worked as expected, i.e. it
updated all 30 records. Kindly provide more information such as the steps to
reproduce, the queries executed, logs, etc.

> update data failed on spark 1.6.2
> -
>
> Key: CARBONDATA-1200
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1200
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Jarck
>
> I used branch-1.1 to do an update test on Spark 1.6.2 on my local machine.
> I updated 30 records: 3 records were updated successfully and 27 failed;
> moreover, it inserted 27 records instead, which is clearly not expected.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata pull request #1115: [CARBONDATA-1247]Block pruning not working fo...

2017-07-04 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1115#discussion_r125457205
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
 ---
@@ -66,6 +68,19 @@ object CastExpressionOptimization {
 }
   }
 
+  def typeCastStringToLongForDateType(v: Any): Any = {
--- End diff --

Timestamp also needs to be handled accordingly.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1115: [CARBONDATA-1247]Block pruning not working fo...

2017-07-04 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1115#discussion_r125457152
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/CastExpressionOptimization.scala
 ---
@@ -66,6 +68,19 @@ object CastExpressionOptimization {
 }
   }
 
+  def typeCastStringToLongForDateType(v: Any): Any = {
--- End diff --

Also check support for Spark 1.5.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1132: [CARBONDATA-1260] Show Partition for Range partition...

2017-07-04 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1132
  
Whether to use the default partition or not depends on where null values are
stored; based on that, we need to decide how to show the partitions.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (CARBONDATA-1153) Can not add column

2017-07-04 Thread Geetika Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073515#comment-16073515
 ] 

Geetika Gupta commented on CARBONDATA-1153:
---

The scenario is working fine on my machine. Please provide more details.

> Can not add column
> --
>
> Key: CARBONDATA-1153
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1153
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration
>Affects Versions: 1.2.0
>Reporter: cen yuhai
>
> Sometimes it throws an exception as below. Why can't I add a column? No one
> is altering the table...
> {code}
> scala> carbon.sql("alter table temp.yuhai_carbon add columns(test1 string)")
> 17/06/11 22:09:13 AUDIT 
> [org.apache.spark.sql.execution.command.AlterTableAddColumns(207) -- main]: 
> [sh-hadoop-datanode-250-104.elenet.me][master][Thread-1]Alter table add 
> columns request has been received for temp.yuhai_carbon
> 17/06/11 22:10:22 ERROR [org.apache.spark.scheduler.TaskSetManager(70) -- 
> task-result-getter-3]: Task 0 in stage 0.0 failed 4 times; aborting job
> 17/06/11 22:10:22 ERROR 
> [org.apache.spark.sql.execution.command.AlterTableAddColumns(141) -- main]: 
> main Alter table add columns failed :Job aborted due to stage failure: Task 0 
> in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 
> (TID 3, sh-hadoop-datanode-368.elenet.me, executor 7): 
> java.lang.RuntimeException: Dictionary file test1 is locked for updation. 
> Please try after some time
> at scala.sys.package$.error(package.scala:27)
> at 
> org.apache.carbondata.spark.util.GlobalDictionaryUtil$.loadDefaultDictionaryValueForNewColumn(GlobalDictionaryUtil.scala:857)
> at 
> org.apache.carbondata.spark.rdd.AlterTableAddColumnRDD$$anon$1.(AlterTableAddColumnRDD.scala:83)
> at 
> org.apache.carbondata.spark.rdd.AlterTableAddColumnRDD.compute(AlterTableAddColumnRDD.scala:68)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:331)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:295)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:88)
> at org.apache.spark.scheduler.Task.run(Task.scala:104)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:351)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (CARBONDATA-1184) Incorrect value displays in double data type.

2017-07-04 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1184:
-

Assignee: Ashwini K

> Incorrect value displays in double data type. 
> --
>
> Key: CARBONDATA-1184
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1184
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Assignee: Ashwini K
>Priority: Minor
> Attachments: 100_olap_C20.csv
>
>
> An incorrect value is displayed to the user for the double datatype.
> Steps to reproduce:
> 1:Create table:
> create table VMALL_DICTIONARY_EXCLUDE (imei string,deviceInformationId 
> int,MAC string,deviceColor string,device_backColor string,modelId 
> string,marketName string,AMSize string,ROMSize string,CUPAudit 
> string,CPIClocked string,series string,productionDate timestamp,bomCode 
> string,internalModels string, deliveryTime string, channelsId string, 
> channelsName string , deliveryAreaId string, deliveryCountry string, 
> deliveryProvince string, deliveryCity string,deliveryDistrict string, 
> deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, 
> ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity 
> string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, 
> Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion 
> string, Active_BacVerNumber string, Active_BacFlashVer string, 
> Active_webUIVersion string, Active_webUITypeCarrVer 
> string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
> Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, 
> Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, 
> Latest_country string, Latest_province string, Latest_city string, 
> Latest_district string, Latest_street string, Latest_releaseId string, 
> Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber 
> string, Latest_BacFlashVer string, Latest_webUIVersion string, 
> Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, 
> Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, 
> Latest_operatorId string, gamePointDescription string,gamePointId 
> double,contractNumber BigInt) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei');
> 2:Load Data:
> LOAD DATA INPATH 'hdfs://localhost:54310/100_olap_C20.csv' INTO table 
> VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE', 
> 'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription');
> 3: Run Select Query.
> select gamePointId from VMALL_DICTIONARY_EXCLUDE;
> 4: Result:
> 0: jdbc:hive2://localhost:1> select gamePointId from 
> VMALL_DICTIONARY_EXCLUDE;
> +---+--+
> |  gamePointId  |
> +---+--+
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
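
As a side note on the displayed value, a tiny self-contained check (plain Java,
unrelated to CarbonData internals): 9.223372036854776E18 is exactly how Java
prints a double at or just above Long.MAX_VALUE, so the column is showing
doubles rounded to 2^63; whether that is the root cause of this particular load
is not established in the report:

    public final class DoubleNearLongMax {
      public static void main(String[] args) {
        double d = (double) Long.MAX_VALUE;        // rounds up to 2^63
        System.out.println(d);                     // 9.223372036854776E18
        System.out.println(d == Math.pow(2, 63));  // true: the exact long value is not representable
      }
    }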

[jira] [Assigned] (CARBONDATA-1143) Incorrect Data load while loading data into struct of struct

2017-07-04 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-1143:
-

Assignee: Ashwini K

> Incorrect Data load while loading data into struct of struct
> 
>
> Key: CARBONDATA-1143
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1143
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
> Environment: spark 2.1
>Reporter: Vandana Yadav
>Assignee: Ashwini K
>Priority: Minor
> Attachments: structinstructnull.csv
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Incorrect Data load while loading data into struct of struct
> Steps to reproduce:
> 1) Create table:
> create table structinstruct(id int, structelem struct struct>)stored by 'carbondata';
> 2)Load data:
> load data inpath 'hdfs://localhost:54310/structinstructnull.csv' into table 
> structinstruct options('delimiter'=',' , 
> 'fileheader'='id,structelem','COMPLEX_DELIMITER_LEVEL_1'='#', 
> 'COMPLEX_DELIMITER_LEVEL_2'='|');
> 3)Query executed:
> select * from structinstruct;
> 4) Actual result:
> +---+--+--+
> |  id   |  structelem  |
> +---+--+--+
> | 1 | {"id1":111,"structelem":{"id2":1001,"name":"abc"}}   |
> | 2 | {"id1":222,"structelem":{"id2":2002,"name":"xyz"}}   |
> | NULL  | {"id1":333,"structelem":{"id2":3003,"name":"def"}}   |
> | 4 | {"id1":null,"structelem":{"id2":4004,"name":"pqr"}}  |
> | 5 | {"id1":555,"structelem":{"id2":null,"name":"ghi"}}   |
> | 6 | {"id1":666,"structelem":{"id2":6006,"name":"null"}}  |
> | 7 | {"id1":null,"structelem":{"id2":1001,"name":null}}   |
> +---+--+--+
> 7 rows selected (1.023 seconds)
> 5) Expected Result: In the last row "id2" should be null, as no such
> value (1001) is provided in the CSV for it.
> 6) Data in CSV:
> 1,111#1001|abc
> 2,222#2002|xyz
> null,333#3003|def
> 4,null#4004|pqr
> 5,555#null|ghi
> 6,666#6006|null
> 7,null
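
For illustration, a small standalone parser sketch (not CarbonData's loader)
showing the expected handling of the two complex delimiters: when the whole
struct value is "null", every nested field should come out as null rather than
inheriting a value from another row:

    import java.util.Arrays;

    public final class NestedStructParseSketch {

      /** Parses "id1#id2|name" with '#' as level-1 and '|' as level-2 delimiter. */
      static String[] parseStructInStruct(String raw) {
        if (raw == null || "null".equalsIgnoreCase(raw)) {
          // Whole struct is null: id1, id2 and name must all be null.
          return new String[] {null, null, null};
        }
        String[] level1 = raw.split("#", 2);                  // id1 | inner struct
        String id1 = "null".equalsIgnoreCase(level1[0]) ? null : level1[0];
        if (level1.length < 2 || "null".equalsIgnoreCase(level1[1])) {
          return new String[] {id1, null, null};
        }
        String[] level2 = level1[1].split("\\|", 2);          // id2 | name
        String id2 = "null".equalsIgnoreCase(level2[0]) ? null : level2[0];
        String name = level2.length < 2 || "null".equalsIgnoreCase(level2[1])
            ? null : level2[1];
        return new String[] {id1, id2, name};
      }

      public static void main(String[] args) {
        System.out.println(Arrays.toString(parseStructInStruct("111#1001|abc"))); // [111, 1001, abc]
        System.out.println(Arrays.toString(parseStructInStruct("null")));         // [null, null, null]
      }
    }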



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1134: [CARBONDATA-1262] Remove unnecessary LoadConfigurati...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1134
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/310/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1134: [CARBONDATA-1262] Remove unnecessary LoadConfigurati...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1134
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2896/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (CARBONDATA-1262) Remove unnecessary LoadConfiguration creation

2017-07-04 Thread Jacky Li (JIRA)
Jacky Li created CARBONDATA-1262:


 Summary: Remove unnecessary LoadConfiguration creation
 Key: CARBONDATA-1262
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1262
 Project: CarbonData
  Issue Type: Improvement
Reporter: Jacky Li
 Fix For: 1.2.0


Remove unnecessary LoadConfiguration creation



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata pull request #1134: Remove unnecessary LoadConfiguration creation

2017-07-04 Thread jackylk
GitHub user jackylk opened a pull request:

https://github.com/apache/carbondata/pull/1134

Remove unnecessary LoadConfiguration creation

Currently for every load, `LoadConfiguration` is created twice. This PR 
removes one unnecessary creation.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jackylk/incubator-carbondata load

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1134


commit cee979f3f1e668d114d93e37999fa24e24262d7c
Author: jackylk 
Date:   2017-07-04T11:07:32Z

remove unnecessary object creation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (CARBONDATA-1202) delete data failed on spark 1.6.2

2017-07-04 Thread Shivangi Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073467#comment-16073467
 ] 

Shivangi Gupta commented on CARBONDATA-1202:


Hi Jarck,

I tried to execute 30 delete queries in a for loop and was able to delete
all 30 rows.

Please provide more information like steps to reproduce, queries executed, 
logs, etc

> delete data failed on spark 1.6.2
> -
>
> Key: CARBONDATA-1202
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1202
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Jarck
>
> I used branch-1.1 to do a delete test on Spark 1.6.2 on my local machine,
> deleting 30 records one by one in a for loop.
> In the end I found that only 3 records were deleted successfully; 27 records
> still exist.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (CARBONDATA-851) Incorrect result displays while range filter query.

2017-07-04 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla closed CARBONDATA-851.


Issue Closed

> Incorrect result displays while range filter query. 
> 
>
> Key: CARBONDATA-851
> URL: https://issues.apache.org/jira/browse/CARBONDATA-851
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.0.0-incubating
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Minor
> Attachments: 2000_UniqData.csv
>
>
> An incorrect result is displayed to the user when using the greater-than-or-equal-to (>=)
> operator.
> Steps to reproduce:
> 1:Create table :
> CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> 2:Load Data:
> LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/uniqdata/2000_UniqData.csv' into 
> table uniqdata OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> 3:Run the Query:
> select dob from uniqdata where dob <= '1972-12-10 01:00:03.0' and dob >= 
> '1972-12-01 01:00:03.0';
> Result:
> ++--+
> |  dob   |
> ++--+
> | 1972-12-02 01:00:03.0  |
> | 1972-12-03 01:00:03.0  |
> | 1972-12-04 01:00:03.0  |
> | 1972-12-05 01:00:03.0  |
> | 1972-12-06 01:00:03.0  |
> | 1972-12-07 01:00:03.0  |
> | 1972-12-08 01:00:03.0  |
> | 1972-12-09 01:00:03.0  |
> | 1972-12-10 01:00:03.0  |
> ++--+
> Expected Result: it should include " 1972-12-01 01:00:03.0 " in the result 
> set.
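
A one-line check of the expected semantics (plain Java, just to state the
contract): a predicate of the form dob >= lowerBound is inclusive, so a row
whose dob equals the bound must appear in the result:

    import java.sql.Timestamp;

    public final class InclusiveLowerBound {
      public static void main(String[] args) {
        Timestamp bound = Timestamp.valueOf("1972-12-01 01:00:03.0");
        Timestamp dob   = Timestamp.valueOf("1972-12-01 01:00:03.0");
        // compareTo >= 0 models "dob >= bound": the equal value satisfies the filter.
        System.out.println(dob.compareTo(bound) >= 0);   // true
      }
    }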



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (CARBONDATA-824) Null pointer Exception display to user while performance Testing

2017-07-04 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla closed CARBONDATA-824.


Issue Closed

> Null pointer Exception display to user while performance Testing
> 
>
> Key: CARBONDATA-824
> URL: https://issues.apache.org/jira/browse/CARBONDATA-824
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 0.1.0-incubating
> Environment: SPARK 2.1
>Reporter: Vinod Rohilla
>
> A null pointer exception is displayed to the user during a select query.
> Steps to reproduce:
> 1: Create table:
> CREATE TABLE oscon_new_1 (ACTIVE_AREA_ID String, ACTIVE_CHECK_DY String, 
> ACTIVE_CHECK_HOUR String, ACTIVE_CHECK_MM String, ACTIVE_CHECK_TIME String, 
> ACTIVE_CHECK_YR String, ACTIVE_CITY String, ACTIVE_COUNTRY String, 
> ACTIVE_DISTRICT String, ACTIVE_EMUI_VERSION String, ACTIVE_FIRMWARE_VER 
> String, ACTIVE_NETWORK String, ACTIVE_OS_VERSION String, ACTIVE_PROVINCE 
> String, BOM String, CHECK_DATE String, CHECK_DY String, CHECK_HOUR String, 
> CHECK_MM String, CHECK_YR String, CUST_ADDRESS_ID String, CUST_AGE String, 
> CUST_BIRTH_COUNTRY String, CUST_BIRTH_DY String, CUST_BIRTH_MM String, 
> CUST_BIRTH_YR String, CUST_BUY_POTENTIAL String, CUST_CITY String, CUST_STATE 
> String, CUST_COUNTRY String, CUST_COUNTY String, CUST_EMAIL_ADDR String, 
> CUST_LAST_RVW_DATE TIMESTAMP, CUST_FIRST_NAME String, CUST_ID String, 
> CUST_JOB_TITLE String, CUST_LAST_NAME String, CUST_LOGIN String, 
> CUST_NICK_NAME String, CUST_PRFRD_FLG String, CUST_SEX String, 
> CUST_STREET_NAME String, CUST_STREET_NO String, CUST_SUITE_NO String, 
> CUST_ZIP String, DELIVERY_CITY String, DELIVERY_STATE String, 
> DELIVERY_COUNTRY String, DELIVERY_DISTRICT String, DELIVERY_PROVINCE String, 
> DEVICE_NAME String, INSIDE_NAME String, ITM_BRAND String, ITM_BRAND_ID 
> String, ITM_CATEGORY String, ITM_CATEGORY_ID String, ITM_CLASS String, 
> ITM_CLASS_ID String, ITM_COLOR String, ITM_CONTAINER String, ITM_FORMULATION 
> String, ITM_MANAGER_ID String, ITM_MANUFACT String, ITM_MANUFACT_ID String, 
> ITM_ID String, ITM_NAME String, ITM_REC_END_DATE String, ITM_REC_START_DATE 
> String, LATEST_AREAID String, LATEST_CHECK_DY String, LATEST_CHECK_HOUR 
> String, LATEST_CHECK_MM String, LATEST_CHECK_TIME String, LATEST_CHECK_YR 
> String, LATEST_CITY String, LATEST_COUNTRY String, LATEST_DISTRICT String, 
> LATEST_EMUI_VERSION String, LATEST_FIRMWARE_VER String, LATEST_NETWORK 
> String, LATEST_OS_VERSION String, LATEST_PROVINCE String, OL_ORDER_DATE 
> String, OL_ORDER_NO INT, OL_RET_ORDER_NO String, OL_RET_DATE String, OL_SITE 
> String, OL_SITE_DESC String, PACKING_DATE String, PACKING_DY String, 
> PACKING_HOUR String, PACKING_LIST_NO String, PACKING_MM String, PACKING_YR 
> String, PRMTION_ID String, PRMTION_NAME String, PRM_CHANNEL_CAT String, 
> PRM_CHANNEL_DEMO String, PRM_CHANNEL_DETAILS String, PRM_CHANNEL_DMAIL 
> String, PRM_CHANNEL_EMAIL String, PRM_CHANNEL_EVENT String, PRM_CHANNEL_PRESS 
> String, PRM_CHANNEL_RADIO String, PRM_CHANNEL_TV String, PRM_DSCNT_ACTIVE 
> String, PRM_END_DATE String, PRM_PURPOSE String, PRM_START_DATE String, 
> PRODUCT_ID String, PROD_BAR_CODE String, PROD_BRAND_NAME String, PRODUCT_NAME 
> String, PRODUCT_MODEL String, PROD_MODEL_ID String, PROD_COLOR String, 
> PROD_SHELL_COLOR String, PROD_CPU_CLOCK String, PROD_IMAGE String, PROD_LIVE 
> String, PROD_LOC String, PROD_LONG_DESC String, PROD_RAM String, PROD_ROM 
> String, PROD_SERIES String, PROD_SHORT_DESC String, PROD_THUMB String, 
> PROD_UNQ_DEVICE_ADDR String, PROD_UNQ_MDL_ID String, PROD_UPDATE_DATE String, 
> PROD_UQ_UUID String, SHP_CARRIER String, SHP_CODE String, SHP_CONTRACT 
> String, SHP_MODE_ID String, SHP_MODE String, STR_ORDER_DATE String, 
> STR_ORDER_NO String, TRACKING_NO String, WH_CITY String, WH_COUNTRY String, 
> WH_COUNTY String, WH_ID String, WH_NAME String, WH_STATE String, 
> WH_STREET_NAME String, WH_STREET_NO String, WH_STREET_TYPE String, 
> WH_SUITE_NO String, WH_ZIP String, CUST_DEP_COUNT DOUBLE, CUST_VEHICLE_COUNT 
> DOUBLE, CUST_ADDRESS_CNT DOUBLE, CUST_CRNT_CDEMO_CNT DOUBLE, 
> CUST_CRNT_HDEMO_CNT DOUBLE, CUST_CRNT_ADDR_DM DOUBLE, CUST_FIRST_SHIPTO_CNT 
> DOUBLE, CUST_FIRST_SALES_CNT DOUBLE, CUST_GMT_OFFSET DOUBLE, CUST_DEMO_CNT 
> DOUBLE, CUST_INCOME DOUBLE, PROD_UNLIMITED INT, PROD_OFF_PRICE DOUBLE, 
> PROD_UNITS INT, TOTAL_PRD_COST DOUBLE, TOTAL_PRD_DISC DOUBLE, PROD_WEIGHT 
> DOUBLE, REG_UNIT_PRICE DOUBLE, EXTENDED_AMT DOUBLE, UNIT_PRICE_DSCNT_PCT 
> DOUBLE, DSCNT_AMT DOUBLE, PROD_STD_CST DOUBLE, TOTAL_TX_AMT DOUBLE, 
> FREIGHT_CHRG DOUBLE, WAITING_PERIOD DOUBLE, DELIVERY_PERIOD DOUBLE, 
> ITM_CRNT_PRICE DOUBLE, ITM_UNITS DOUBLE, ITM_WSLE_CST DOUBLE, ITM_SIZE 
> DOUBLE, PRM_CST DOUBLE, PRM_RESPONSE_TARGET DOUBLE, PRM_ITM_DM DOUBLE, 

[jira] [Closed] (CARBONDATA-865) Remove configurations for Kettle from master/docs/installation-guide.md

2017-07-04 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla closed CARBONDATA-865.


Issue Closed

> Remove configurations for Kettle from master/docs/installation-guide.md
> ---
>
> Key: CARBONDATA-865
> URL: https://issues.apache.org/jira/browse/CARBONDATA-865
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.0.0-incubating
>Reporter: Vinod Rohilla
>Assignee: manoj mathpal
>Priority: Minor
> Fix For: 1.1.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Actual Result: Configurations for Kettle are displayed in the
> installation-guide.md file.
> Expected Result: Remove configurations for Kettle from 
> master/docs/installation-guide.md file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (CARBONDATA-1002) Results order does not display same as hive in Carbon data .

2017-07-04 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla closed CARBONDATA-1002.
-

Issue Closed

> Results order does not display same as hive in Carbon data . 
> -
>
> Key: CARBONDATA-1002
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1002
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Minor
> Attachments: Employee1.csv, Result.png
>
>
> The result order does not match Hive in CarbonData.
> Steps to reproduce:
> 1: CARBONDATA:
> a):Create table in CarbonData
> create table employee (Id int,Name String,Salary int,Designation String,Dept 
> String) STORED BY 'org.apache.carbondata.format';
> b):Load data in table
> LOAD DATA INPATH 'hdfs://localhost:54310/Employee1.csv' into table employee;
> c): select * from employee;
> +--+---+--+++--+
> |Id| Name  |  Salary  |  Designation   |Dept  
>   |
> +--+---+--+++--+
> | 101  | Zoe   | 8567816  | J BUSH & CO| Warehouse/Equipment 
> Agent  |
> | 91   | Zoe   | 6380353  | J C MALONE ASSOCIATES  | Water Services 
> Technician  |
> | 81   | Zoe   | 2937793  | J C P & L CO   | Websphere Consultant 
>   |
> | 71   | Zoe   | 9237710  | J C PENNEY | Wedding Consultant   
>   |
> | 61   | Zoe   | 2663980  | J C PENNEY CO  | Wedding Coordinator  
>   |
> | 51   | Zoe   | 6355842  | J C PENNEY CO INC  | Wedding Sales 
> Manager  |
> | 41   | Zoe   | 3966825  | J C PENNEY COMPANY | Weight Loss 
> Consultant |
> | 31   | Zoe   | 7679689  | J D CHADNEY MD | Welder   
>   |
> | 21   | Zoe   | 9589193  | J DOMANISH ARCHT   | Welding Engineer 
>   |
> | 11   | Zoe   | 7958183  | J F K HIGH SCHOOL  | Wheelchair agent 
>   |
> | 1| Zoe   | 3640571  | J GRAHAM BROWN CANCER  | Yachting 
>   |
> +--+---+--+++--+
> 2:HIVE
> a):Create table in CarbonData
> create table employeeH (Id int,Name String,Salary int,Designation String,Dept 
> String) ROW FORMAT DELIMITED FIELDS TERMINATED BY ",";
> b):Load data in table
> LOAD DATA LOCAL INPATH '/home/vinod/Desktop/AllCSV/Employee1.csv'OVERWRITE 
> INTO TABLE employeeH;
> c: select * from employeeH;
> +--+---+--+++--+
> |Id| Name  |  Salary  |  Designation   |Dept  
>   |
> +--+---+--+++--+
> | 1| Zoe   | 3640571  | J GRAHAM BROWN CANCER  | Yachting 
>   |
> | 11   | Zoe   | 7958183  | J F K HIGH SCHOOL  | Wheelchair agent 
>   |
> | 21   | Zoe   | 9589193  | J DOMANISH ARCHT   | Welding Engineer 
>   |
> | 31   | Zoe   | 7679689  | J D CHADNEY MD | Welder   
>   |
> | 41   | Zoe   | 3966825  | J C PENNEY COMPANY | Weight Loss 
> Consultant |
> | 51   | Zoe   | 6355842  | J C PENNEY CO INC  | Wedding Sales 
> Manager  |
> | 61   | Zoe   | 2663980  | J C PENNEY CO  | Wedding Coordinator  
>   |
> | 71   | Zoe   | 9237710  | J C PENNEY | Wedding Consultant   
>   |
> | 81   | Zoe   | 2937793  | J C P & L CO   | Websphere Consultant 
>   |
> | 91   | Zoe   | 6380353  | J C MALONE ASSOCIATES  | Water Services 
> Technician  |
> | 101  | Zoe   | 8567816  | J BUSH & CO| Warehouse/Equipment 
> Agent  |
> +--+---+--+++--+



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (CARBONDATA-1144) Drop column operation failed in Alter table.

2017-07-04 Thread Vinod Rohilla (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Rohilla closed CARBONDATA-1144.
-
Assignee: Vinod Rohilla  (was: Kunal Kapoor)

Issue Fixed.

0: jdbc:hive2://localhost:1> CREATE TABLE uniqdata (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ("TABLE_BLOCKSIZE"= 
"256 MB");
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (3.744 seconds)
0: jdbc:hive2://localhost:1> LOAD DATA INPATH 
'hdfs://localhost:54310/2000_UniqData.csv' into table uniqdata 
OPTIONS('DELIMITER'=',' , 
'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (14.665 seconds)
0: jdbc:hive2://localhost:1> 
0: jdbc:hive2://localhost:1> alter table uniqdata drop columns(CUST_NAME);
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (2.549 seconds)


> Drop column operation failed in Alter table. 
> -
>
> Key: CARBONDATA-1144
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1144
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.2.0
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Assignee: Vinod Rohilla
>Priority: Minor
> Fix For: 1.2.0
>
> Attachments: 2000_UniqData.csv
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Drop column does not work in Alter table.
> Steps to reproduce:
> 1: Create a table in Carbon:
> CREATE TABLE uniqdata (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ("TABLE_BLOCKSIZE"= "256 MB");
> 2: Load Data in a table:
> LOAD DATA INPATH 'hdfs://localhost:54310/2000_UniqData.csv' into table 
> uniqdata OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> 3: Run the following query: alter table uniqdata drop columns(CUST_NAME);
> 4: Result:
> 0: jdbc:hive2://localhost:1> alter table uniqdata drop columns(CUST_NAME);
> Error: java.lang.RuntimeException: Alter table drop column operation failed: 
> null (state=,code=0)
> Expected Result: Column should be dropped.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1116: [CARBONDATA-1249] Wrong order of columns in redirect...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1116
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/309/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1116: [CARBONDATA-1249] Wrong order of columns in redirect...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1116
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2895/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Closed] (CARBONDATA-1068) Error occur while executing select query "local class incompatible: stream classdesc serialVersionUID"

2017-07-04 Thread SWATI RAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SWATI RAO closed CARBONDATA-1068.
-
Resolution: Fixed

> Error occur while executing select query "local class incompatible: stream 
> classdesc serialVersionUID"
> --
>
> Key: CARBONDATA-1068
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1068
> Project: CarbonData
>  Issue Type: Bug
>Reporter: SWATI RAO
> Attachments: Test_Data1.csv
>
>
> CREATE TABLE :
> create table Test_Boundary (c1_int int,c2_Bigint Bigint,c3_Decimal 
> Decimal(38,30),c4_double double,c5_string string,c6_Timestamp 
> Timestamp,c7_Datatype_Desc string) STORED BY 'org.apache.carbondata.format'
> LOAD :
> LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/Test_Data1.csv' INTO table 
> Test_Boundary 
> OPTIONS('DELIMITER'=',','QUOTECHAR'='','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='')
> SELECT:
> select c4_double,c7_datatype_desc,min(c4_double) from Test_Boundary group by 
> c4_double,c7_datatype_desc having min(c4_double) >1.7976931348623158E308 
> order by c4_double limit 5
> boundry_TC_0120,FAIL,Job aborted due to stage failure: Task 0 in stage 
> 12284.0 failed 4 times, most recent failure: Lost task 0.3 in stage 12284.0 
> (TID 580145, h-slave-1): java.io.InvalidClassException: 
> org.apache.spark.sql.CarbonRelation; local class incompatible: stream 
> classdesc serialVersionUID = 1716814377307478832, local class 
> serialVersionUID = 6286910848280021658
>   at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> 
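
For reference, the usual Java-side remedy for this class of InvalidClassException
is to pin serialVersionUID explicitly on classes that are serialized between
driver and executors, so that recompiled builds keep the same UID. A generic
sketch, not the actual CarbonRelation code:

    import java.io.Serializable;

    public class SerializableRelationSketch implements Serializable {
      // Pinning the UID keeps old and new builds wire-compatible,
      // as long as the serialized field layout itself stays compatible.
      private static final long serialVersionUID = 1L;

      private String databaseName;
      private String tableName;
    }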

[jira] [Commented] (CARBONDATA-1068) Error occur while executing select query "local class incompatible: stream classdesc serialVersionUID"

2017-07-04 Thread SWATI RAO (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073430#comment-16073430
 ] 

SWATI RAO commented on CARBONDATA-1068:
---

[~simar]: The issue is not occurring now. Previously it occurred on the cluster.

> Error occur while executing select query "local class incompatible: stream 
> classdesc serialVersionUID"
> --
>
> Key: CARBONDATA-1068
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1068
> Project: CarbonData
>  Issue Type: Bug
>Reporter: SWATI RAO
> Attachments: Test_Data1.csv
>
>
> CREATE TABLE :
> create table Test_Boundary (c1_int int,c2_Bigint Bigint,c3_Decimal 
> Decimal(38,30),c4_double double,c5_string string,c6_Timestamp 
> Timestamp,c7_Datatype_Desc string) STORED BY 'org.apache.carbondata.format'
> LOAD :
> LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/Test_Data1.csv' INTO table 
> Test_Boundary 
> OPTIONS('DELIMITER'=',','QUOTECHAR'='','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='')
> SELECT:
> select c4_double,c7_datatype_desc,min(c4_double) from Test_Boundary group by 
> c4_double,c7_datatype_desc having min(c4_double) >1.7976931348623158E308 
> order by c4_double limit 5
> boundry_TC_0120,FAIL,Job aborted due to stage failure: Task 0 in stage 
> 12284.0 failed 4 times, most recent failure: Lost task 0.3 in stage 12284.0 
> (TID 580145, h-slave-1): java.io.InvalidClassException: 
> org.apache.spark.sql.CarbonRelation; local class incompatible: stream 
> classdesc serialVersionUID = 1716814377307478832, local class 
> serialVersionUID = 6286910848280021658
>   at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at 

[jira] [Commented] (CARBONDATA-1068) Error occur while executing select query "local class incompatible: stream classdesc serialVersionUID"

2017-07-04 Thread Simarpreet Kaur (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16073417#comment-16073417
 ] 

Simarpreet Kaur commented on CARBONDATA-1068:
-

I have tried to reproduce the above scenario and no such error is encountered.
Kindly close the JIRA issue.

> Error occur while executing select query "local class incompatible: stream 
> classdesc serialVersionUID"
> --
>
> Key: CARBONDATA-1068
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1068
> Project: CarbonData
>  Issue Type: Bug
>Reporter: SWATI RAO
> Attachments: Test_Data1.csv
>
>
> CREATE TABLE :
> create table Test_Boundary (c1_int int,c2_Bigint Bigint,c3_Decimal 
> Decimal(38,30),c4_double double,c5_string string,c6_Timestamp 
> Timestamp,c7_Datatype_Desc string) STORED BY 'org.apache.carbondata.format'
> LOAD :
> LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/Test_Data1.csv' INTO table 
> Test_Boundary 
> OPTIONS('DELIMITER'=',','QUOTECHAR'='','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='')
> SELECT:
> select c4_double,c7_datatype_desc,min(c4_double) from Test_Boundary group by 
> c4_double,c7_datatype_desc having min(c4_double) >1.7976931348623158E308 
> order by c4_double limit 5
> boundry_TC_0120,FAIL,Job aborted due to stage failure: Task 0 in stage 
> 12284.0 failed 4 times, most recent failure: Lost task 0.3 in stage 12284.0 
> (TID 580145, h-slave-1): java.io.InvalidClassException: 
> org.apache.spark.sql.CarbonRelation; local class incompatible: stream 
> classdesc serialVersionUID = 1716814377307478832, local class 
> serialVersionUID = 6286910848280021658
>   at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:616)
>   at 
> java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630)
>   at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1781)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at 
> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   
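
The root cause shown in the trace above is standard Java serialization behaviour:
when a Serializable class does not declare a serialVersionUID, the JVM derives one
from the class structure, so a driver and an executor running jars built from
slightly different sources compute different values and deserialization fails with
InvalidClassException. Below is a minimal Scala sketch of the usual guard, using a
hypothetical class rather than CarbonData's actual CarbonRelation fix:

    import java.io._

    // Hypothetical relation-like class. Pinning serialVersionUID keeps the
    // serialized form compatible even if non-structural code changes exist on
    // one side of the driver/executor boundary.
    @SerialVersionUID(1L)
    class MyRelation(val tableName: String) extends Serializable

    object SerializationRoundTrip {
      def main(args: Array[String]): Unit = {
        // Round-trip through Java serialization, as Spark does when shipping
        // plans and closures to executors.
        val buffer = new ByteArrayOutputStream()
        val out = new ObjectOutputStream(buffer)
        out.writeObject(new MyRelation("Test_Boundary"))
        out.close()

        val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
        val restored = in.readObject().asInstanceOf[MyRelation]
        println(restored.tableName) // prints: Test_Boundary
      }
    }

The two mismatched serialVersionUID values in the report (1716814377307478832 and
6286910848280021658) look like auto-generated IDs from two different builds of the
same class, which matches the environment-dependent behaviour described in the
comments above.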

[GitHub] carbondata pull request #1125: [CarbonData-1250] Change default partition id...

2017-07-04 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1125#discussion_r125428800
  
--- Diff: format/src/main/thrift/schema.thrift ---
@@ -135,6 +135,9 @@ struct PartitionInfo{
 3: optional i32 num_partitions;  // number of partitions defined in 
hash partition table
--- End diff --

Same as Hash_num_partitions


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1133: [CARBONDATA-1261] Load data sql add 'header' option

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1133
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2894/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1133: [CARBONDATA-1261] Load data sql add 'header' option

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1133
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/308/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (CARBONDATA-1184) Incorrect value displays in double data type.

2017-07-04 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16073379#comment-16073379
 ] 

Ashwini K commented on CARBONDATA-1184:
---

The attached data file has a format problem. However, I am able to reproduce the
issue with similar data. Please attach the correct file.

> Incorrect value displays in double data type. 
> --
>
> Key: CARBONDATA-1184
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1184
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
> Environment: Spark 2.1
>Reporter: Vinod Rohilla
>Priority: Minor
> Attachments: 100_olap_C20.csv
>
>
> Incorrect value displays to the user in double datatype.
> Step to reproduces:
> 1:Create table:
> create table VMALL_DICTIONARY_EXCLUDE (imei string,deviceInformationId 
> int,MAC string,deviceColor string,device_backColor string,modelId 
> string,marketName string,AMSize string,ROMSize string,CUPAudit 
> string,CPIClocked string,series string,productionDate timestamp,bomCode 
> string,internalModels string, deliveryTime string, channelsId string, 
> channelsName string , deliveryAreaId string, deliveryCountry string, 
> deliveryProvince string, deliveryCity string,deliveryDistrict string, 
> deliveryStreet string, oxSingleNumber string, ActiveCheckTime string, 
> ActiveAreaId string, ActiveCountry string, ActiveProvince string, Activecity 
> string, ActiveDistrict string, ActiveStreet string, ActiveOperatorId string, 
> Active_releaseId string, Active_EMUIVersion string, Active_operaSysVersion 
> string, Active_BacVerNumber string, Active_BacFlashVer string, 
> Active_webUIVersion string, Active_webUITypeCarrVer 
> string,Active_webTypeDataVerNumber string, Active_operatorsVersion string, 
> Active_phonePADPartitionedVersions string, Latest_YEAR int, Latest_MONTH int, 
> Latest_DAY Decimal(30,10), Latest_HOUR string, Latest_areaId string, 
> Latest_country string, Latest_province string, Latest_city string, 
> Latest_district string, Latest_street string, Latest_releaseId string, 
> Latest_EMUIVersion string, Latest_operaSysVersion string, Latest_BacVerNumber 
> string, Latest_BacFlashVer string, Latest_webUIVersion string, 
> Latest_webUITypeCarrVer string, Latest_webTypeDataVerNumber string, 
> Latest_operatorsVersion string, Latest_phonePADPartitionedVersions string, 
> Latest_operatorId string, gamePointDescription string,gamePointId 
> double,contractNumber BigInt) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei');
> 2:Load Data:
> LOAD DATA INPATH 'hdfs://localhost:54310/100_olap_C20.csv' INTO table 
> VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE', 
> 'FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription');
> 3: Run Select Query.
> select gamePointId from VMALL_DICTIONARY_EXCLUDE;
> 4: Result:
> 0: jdbc:hive2://localhost:1> select gamePointId from 
> VMALL_DICTIONARY_EXCLUDE;
> +---+--+
> |  gamePointId  |
> +---+--+
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
> | 9.223372036854776E18  |
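
The repeated 9.223372036854776E18 in the output above is consistent with IEEE 754
double behaviour rather than random corruption: a double carries a 53-bit mantissa,
so every value at or near Long.MaxValue (9223372036854775807) rounds to the same
nearest representable double. A quick Scala illustration, independent of CarbonData:

    object DoublePrecisionDemo extends App {
      // Doubles this close to Long.MaxValue are spaced more than a thousand
      // apart, so distinct longs collapse to the same double and both print
      // as 9.223372036854776E18.
      val a = 9223372036854775807L      // Long.MaxValue
      val b = 9223372036854775707L      // Long.MaxValue - 100
      println(a.toDouble)               // 9.223372036854776E18
      println(b.toDouble)               // 9.223372036854776E18
      println(a.toDouble == b.toDouble) // true
    }

Whether the loaded CSV values were genuinely that large, or were forced to a
boundary value by BAD_RECORDS_ACTION='FORCE', can only be confirmed with the
corrected data file requested in the comment above.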

[GitHub] carbondata pull request #1125: [CarbonData-1250] Change default partition id...

2017-07-04 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1125#discussion_r125427623
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
 ---
@@ -99,10 +98,6 @@
   private Map tablePartitionMap;
 
   /**
-   * statistic information of partition table
-   */
-  private PartitionStatistic partitionStatistic;
-  /**
--- End diff --

Should keep this line.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1125: [CarbonData-1250] Change default partition id...

2017-07-04 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1125#discussion_r125427240
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
 ---
@@ -219,6 +219,10 @@
 externalPartitionInfo.setList_info(wrapperPartitionInfo.getListInfo());
 
externalPartitionInfo.setRange_info(wrapperPartitionInfo.getRangeInfo());
 
externalPartitionInfo.setNum_partitions(wrapperPartitionInfo.getNumPartitions());
+
externalPartitionInfo.setNumOfPartitions(wrapperPartitionInfo.getNumberOfPartitions());
--- End diff --

I think it may be better to use Hash_numPartition; otherwise users may be
confused about these two partition counts.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
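
To make the naming concern concrete, here is a hedged wrapper-side sketch (field
names are illustrative only, not CarbonData's actual classes) of keeping the
hash-partition count separate from the second count introduced by this diff:

    // Illustrative only: hypothetical names, not the real CarbonData schema objects.
    // Two self-describing fields avoid ambiguity between the count declared for a
    // hash-partitioned table and the additional total-partition count this PR adds.
    case class PartitionInfoSketch(
        columnNames: Seq[String],
        hashNumPartitions: Int,    // declared for a hash partition table
        currentNumPartitions: Int  // the separate overall count added in this PR
    )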


[GitHub] carbondata issue #1132: [CARBONDATA-1260] Show Partition for Range partition...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1132
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/307/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1132: [CARBONDATA-1260] Show Partition for Range partition...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1132
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2893/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #1133: [CARBONDATA-1261] Load data sql add 'header' ...

2017-07-04 Thread QiangCai
GitHub user QiangCai opened a pull request:

https://github.com/apache/carbondata/pull/1133

[CARBONDATA-1261] Load data sql add 'header' option

When loading CSV files that have no header row and whose column order matches
the table schema, the user can add 'header'='false' to the load data SQL and no
longer needs to provide the file header explicitly.

maillist:

http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Discussion-Add-HEADER-option-to-load-data-sql-td17080.html
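
A hedged usage sketch follows (the option name 'header' is taken from the
description above; the exact final syntax may differ once the PR is merged). It
assumes 'spark' is a Carbon-enabled SparkSession, and reuses the table and path
from the issue reports above purely as placeholders:

    import org.apache.spark.sql.SparkSession

    // Sketch only: with 'header'='false' the CSV is treated as header-less and
    // the columns are taken in table-schema order, so FILEHEADER need not be
    // supplied in the OPTIONS list.
    def loadWithoutHeader(spark: SparkSession): Unit = {
      spark.sql(
        """LOAD DATA INPATH 'hdfs://localhost:54310/Test_Data1.csv'
          |INTO TABLE Test_Boundary
          |OPTIONS('DELIMITER'=',', 'header'='false')""".stripMargin)
    }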

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/QiangCai/carbondata addheaderoption

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1133.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1133


commit a065166776d1c9c63c2cd2080265553c61c49846
Author: QiangCai 
Date:   2017-07-04T04:11:33Z

add header option




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1116: [CARBONDATA-1249] Wrong order of columns in redirect...

2017-07-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1116
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/306/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

