[jira] [Created] (CARBONDATA-1729) Recover to supporting Hadoop <= 2.6

2017-11-15 Thread Zhichao Zhang (JIRA)
Zhichao  Zhang created CARBONDATA-1729:
--

 Summary: Recover to supporting Hadoop <= 2.6
 Key: CARBONDATA-1729
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1729
 Project: CarbonData
  Issue Type: Bug
  Components: hadoop-integration
Affects Versions: 1.3.0
Reporter: Zhichao  Zhang
Assignee: Zhichao  Zhang
 Fix For: 1.3.0


On the master branch, compilation fails with Hadoop <= 2.6. The root cause is 
the use of the new API FileSystem.truncate, which was added in Hadoop 2.7. A 
method called 'truncate' needs to be implemented in 'FileFactory.java' to 
support Hadoop <= 2.6.
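One possible fallback, sketched below with local java.nio files rather than Hadoop's FileSystem API: copy the first newLength bytes to a temporary file and swap it in place of the original, which needs only create/rename support and therefore also works on stores without a native truncate. This is an illustrative sketch under those assumptions, not the actual FileFactory implementation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TruncateFallback {
    /**
     * Truncate a file to newLength bytes without FileSystem.truncate
     * (available only from Hadoop 2.7): copy the leading bytes to a
     * temp file, then atomically replace the original.
     */
    static void truncate(Path file, long newLength) throws IOException {
        Path tmp = file.resolveSibling(file.getFileName() + ".tmp");
        try (InputStream in = Files.newInputStream(file);
             OutputStream out = Files.newOutputStream(tmp)) {
            byte[] buf = new byte[8192];
            long remaining = newLength;
            int read;
            // Copy only the first newLength bytes.
            while (remaining > 0 &&
                   (read = in.read(buf, 0, (int) Math.min(buf.length, remaining))) != -1) {
                out.write(buf, 0, read);
                remaining -= read;
            }
        }
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("truncate-demo", ".dat");
        Files.write(f, "hello world".getBytes());
        truncate(f, 5);
        System.out.println(new String(Files.readAllBytes(f))); // prints "hello"
    }
}
```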



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1438: [CARBONDATA-1649]insert overwrite fix during job int...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1438
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1745/



---


[GitHub] carbondata issue #1438: [CARBONDATA-1649]insert overwrite fix during job int...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1438
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1167/



---


[GitHub] carbondata issue #1501: [CARBONDATA-1713] Fixed Aggregate query on main tabl...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1501
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1744/



---


[GitHub] carbondata issue #1494: [CARBONDATA-1706] Making index merge DDL insensitive...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1494
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1166/



---


[jira] [Created] (CARBONDATA-1728) (Carbon1.3.0- DB creation external path) - Delete data with select in where clause not successful for large data

2017-11-15 Thread Chetan Bhat (JIRA)
Chetan Bhat created CARBONDATA-1728:
---

 Summary: (Carbon1.3.0- DB creation external path) - Delete data 
with select in where clause not successful for large data
 Key: CARBONDATA-1728
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1728
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.3.0
 Environment: 3 node ant cluster
Reporter: Chetan Bhat


Steps :
0: jdbc:hive2://10.18.98.34:23040> create database test_db1 location 
'hdfs://hacluster/user/test1';
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (0.032 seconds)
0: jdbc:hive2://10.18.98.34:23040> use test_db1;
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (0.01 seconds)
0: jdbc:hive2://10.18.98.34:23040> create table if not exists 
ORDERS(O_ORDERDATE string,O_ORDERPRIORITY string,O_ORDERSTATUS 
string,O_ORDERKEY string,O_CUSTKEY string,O_TOTALPRICE double,O_CLERK 
string,O_SHIPPRIORITY int,O_COMMENT string) STORED BY 
'org.apache.carbondata.format' TBLPROPERTIES ('table_blocksize'='128');
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (0.174 seconds)
0: jdbc:hive2://10.18.98.34:23040> load data inpath 
"hdfs://hacluster/chetan/orders.tbl.1" into table ORDERS 
options('DELIMITER'='|','FILEHEADER'='O_ORDERKEY,O_CUSTKEY,O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,O_COMMENT','batch_sort_size_inmb'='32');
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (27.421 seconds)
0: jdbc:hive2://10.18.98.34:23040> create table h_orders as select * from 
orders;
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (9.779 seconds)
0: jdbc:hive2://10.18.98.34:23040> Delete from test_db1.orders a where exists 
(select 1 from test_db1.h_orders b where b.o_ORDERKEY=a.O_ORDERKEY);
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (48.998 seconds)
select count(*) from test_db1.orders;

Actual Issue : The select count shows all records present, which means the 
records are not deleted.
0: jdbc:hive2://10.18.98.34:23040> select count(*) from test_db1.orders;
+---+--+
| count(1)  |
+---+--+
| 750   |
+---+--+
1 row selected (7.967 seconds)
This indicates that delete data with a select in the where clause is not 
successful for large data.

Expected : Delete data with a select in the where clause should be successful 
for large data. The select count should return 0 records, indicating that the 
records were deleted successfully.






[GitHub] carbondata issue #1438: [CARBONDATA-1649]insert overwrite fix during job int...

2017-11-15 Thread akashrn5
Github user akashrn5 commented on the issue:

https://github.com/apache/carbondata/pull/1438
  
@jackylk handled your comment, please review


---


[GitHub] carbondata pull request #1438: [CARBONDATA-1649]insert overwrite fix during ...

2017-11-15 Thread akashrn5
Github user akashrn5 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1438#discussion_r151332421
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/LoadTableCommand.scala
 ---
@@ -186,6 +186,12 @@ case class LoadTableCommand(
   LOGGER.error(ex, s"Dataload failure for $dbName.$tableName")
   throw new RuntimeException(s"Dataload failure for 
$dbName.$tableName, ${ex.getMessage}")
 case ex: Exception =>
+  if (ex.isInstanceOf[InterruptedException] &&
+  ex.getMessage.contains("update fail status")) {
--- End diff --

ok, i will add a new Exception for this, and throw that exception


---


[GitHub] carbondata pull request #1438: [CARBONDATA-1649]insert overwrite fix during ...

2017-11-15 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1438#discussion_r151331644
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/LoadTableCommand.scala
 ---
@@ -186,6 +186,12 @@ case class LoadTableCommand(
   LOGGER.error(ex, s"Dataload failure for $dbName.$tableName")
   throw new RuntimeException(s"Dataload failure for 
$dbName.$tableName, ${ex.getMessage}")
 case ex: Exception =>
+  if (ex.isInstanceOf[InterruptedException] &&
+  ex.getMessage.contains("update fail status")) {
--- End diff --

It is not good to rely on the message inside the exception. Can you create one 
special exception for this case?
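The suggestion above, dispatching on a dedicated exception type instead of matching on the message string, can be sketched as follows. The names here (TableStatusUpdateException, handle) are hypothetical stand-ins, not CarbonData APIs:

```java
public class ExceptionSketch {
    // Hypothetical dedicated exception, as suggested in the review:
    // catching this type replaces fragile getMessage().contains(...) checks.
    static class TableStatusUpdateException extends RuntimeException {
        TableStatusUpdateException(String msg) { super(msg); }
    }

    // Dispatch on the exception's type, not on its message content.
    static String handle(Exception ex) {
        if (ex instanceof TableStatusUpdateException) {
            return "overwrite-in-progress";
        }
        return "generic-failure";
    }

    public static void main(String[] args) {
        System.out.println(handle(new TableStatusUpdateException("update fail status")));
        System.out.println(handle(new IllegalStateException("other")));
    }
}
```

The type check stays correct even if the message text is later reworded or localized, which is the weakness of the original `getMessage.contains("update fail status")` approach.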


---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1165/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1743/



---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
retest this please


---


[GitHub] carbondata issue #1494: [CARBONDATA-1706] Making index merge DDL insensitive...

2017-11-15 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/1494
  
retest this please



---


[jira] [Updated] (CARBONDATA-1671) Support set/unset table comment for ALTER table

2017-11-15 Thread Pawan Malwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pawan Malwal updated CARBONDATA-1671:
-
Description: Table comment set/unset using **ALTER TABLE  SET/UNSET 
TBLPROPERTIES** 

> Support set/unset table comment for ALTER table
> ---
>
> Key: CARBONDATA-1671
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1671
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: Pawan Malwal
>Assignee: Pawan Malwal
>
> Table comment set/unset using **ALTER TABLE  SET/UNSET TBLPROPERTIES** 





[jira] [Commented] (CARBONDATA-1671) Support set/unset table comment for ALTER table

2017-11-15 Thread Pawan Malwal (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16254813#comment-16254813
 ] 

Pawan Malwal commented on CARBONDATA-1671:
--

Features proposed in this jira:

Table comment set/unset using the **ALTER TABLE  SET/UNSET TBLPROPERTIES** query 
is not supported.

eg: ALTER TABLE table_with_comment SET TBLPROPERTIES("comment"="modified 
comment")
If the user alters the table properties and adds/updates the table comment, 
then in order to handle this requirement, update the comment field value with 
the newly added/modified comment when the user executes the **ALTER TABLE  SET 
TBLPROPERTIES** query.

This will also take care of unsetting the table comment when the user executes 
the **ALTER TABLE  UNSET TBLPROPERTIES** query in order to unset or remove the 
table comment.
eg: ALTER TABLE table_comment UNSET TBLPROPERTIES IF EXISTS ('comment')

> Support set/unset table comment for ALTER table
> ---
>
> Key: CARBONDATA-1671
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1671
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: Pawan Malwal
>Assignee: Pawan Malwal
>
> Table comment set/unset using **ALTER TABLE  SET/UNSET TBLPROPERTIES** 





[GitHub] carbondata issue #1494: [CARBONDATA-1706] Making index merge DDL insensitive...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1494
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1164/



---


[GitHub] carbondata issue #1501: [CARBONDATA-1713] Fixed Aggregate query on main tabl...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1501
  
retest sdv please


---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1163/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
retest sdv please


---


[GitHub] carbondata issue #1494: [CARBONDATA-1706] Making index merge DDL insensitive...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1494
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1741/



---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1738/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1162/



---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1161/



---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1737/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
retest this please


---


[GitHub] carbondata issue #1500: [CARBONDATA-1717]Remove spark broadcast for gettting...

2017-11-15 Thread QiangCai
Github user QiangCai commented on the issue:

https://github.com/apache/carbondata/pull/1500
  
please fix CI issue


---


[GitHub] carbondata issue #1499: [WIP][CARBONDATA-1235]Add Lucene Datamap

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1499
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1736/



---


[GitHub] carbondata issue #1499: [WIP][CARBONDATA-1235]Add Lucene Datamap

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1499
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1160/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1159/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
retest this please


---


[GitHub] carbondata issue #1498: [CARBONDATA-1614][Streaming] Show file format for se...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1498
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1158/



---


[GitHub] carbondata issue #1501: [CARBONDATA-1713] Fixed Aggregate query on main tabl...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1501
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1157/



---


[GitHub] carbondata issue #1498: [CARBONDATA-1614][Streaming] Show file format for se...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1498
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1735/



---


[GitHub] carbondata issue #1501: [CARBONDATA-1713] Fixed Aggregate query on main tabl...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1501
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1734/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1733/



---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1732/



---


[GitHub] carbondata issue #1498: [CARBONDATA-1614][Streaming] Show file format for se...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1498
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1731/



---


[jira] [Updated] (CARBONDATA-1727) Dataload is successful even in case if the table is droped from other client.

2017-11-15 Thread Mohammad Shahid Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Shahid Khan updated CARBONDATA-1727:
-
Summary: Dataload is successful even in case if the table is droped from 
other client.  (was: Dataload is successful even in case if some user drops the 
table from other client.)

> Dataload is successful even in case if the table is droped from other client.
> -
>
> Key: CARBONDATA-1727
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1727
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load, spark-integration
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Mohammad Shahid Khan
>Assignee: Mohammad Shahid Khan
>Priority: Minor
>
> Table drop has the highest priority, so even if a load operation is in 
> progress on a table, the table can be dropped.
> If the table is dropped before the load operation finishes, then the load 
> should fail.
> Steps:
> 1. Create table t1
> 2. Load data into t1 (big data that takes some time)
> 3. When the load is in progress, drop the table
> Actual Result : The load is successful
> Expected Result : Final load status should be fail.





[jira] [Created] (CARBONDATA-1727) Dataload is successful even in case if some user drops the table from other client.

2017-11-15 Thread Mohammad Shahid Khan (JIRA)
Mohammad Shahid Khan created CARBONDATA-1727:


 Summary: Dataload is successful even in case if some user drops 
the table from other client.
 Key: CARBONDATA-1727
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1727
 Project: CarbonData
  Issue Type: Bug
  Components: data-load, spark-integration
Affects Versions: 1.2.0, 1.3.0
Reporter: Mohammad Shahid Khan
Assignee: Mohammad Shahid Khan
Priority: Minor


Table drop has the highest priority, so even if a load operation is in 
progress on a table, the table can be dropped.
If the table is dropped before the load operation finishes, then the load 
should fail.

Steps:
1. Create table t1
2. Load data into t1 (big data that takes some time)
3. When the load is in progress, drop the table
Actual Result : The load is successful

Expected Result : Final load status should be fail.
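The expected behaviour can be modeled minimally: the load's final status commit re-checks the catalog instead of assuming the table still exists from when the load started. All names here (catalog, commitLoad) are illustrative stand-ins, not CarbonData APIs:

```java
import java.util.concurrent.ConcurrentHashMap;

public class LoadCommitSketch {
    // Minimal model of a table catalog shared between concurrent clients.
    static final ConcurrentHashMap<String, Boolean> catalog = new ConcurrentHashMap<>();

    // Final status update of a load: fail if the table was dropped
    // while the load was running, instead of reporting success.
    static String commitLoad(String table) {
        if (!catalog.containsKey(table)) {
            return "FAILURE";
        }
        return "SUCCESS";
    }

    public static void main(String[] args) {
        catalog.put("t1", true);
        // Another client drops the table while the load is in progress:
        catalog.remove("t1");
        System.out.println(commitLoad("t1")); // prints "FAILURE"
    }
}
```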






[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1156/



---


[GitHub] carbondata issue #1471: [CARBONDATA-1544][Datamap] Datamap FineGrain impleme...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1155/



---


[GitHub] carbondata issue #1502: [CARBONDATA-1720] Wrong data displayed for <= filter...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1502
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1153/



---


[GitHub] carbondata pull request #1494: [CARBONDATA-1706] Making index merge DDL inse...

2017-11-15 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1494#discussion_r151131189
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala
 ---
@@ -851,13 +851,21 @@ object CommonUtil {
   def mergeIndexFiles(sparkContext: SparkContext,
   segmentIds: Seq[String],
   tablePath: String,
-  carbonTable: CarbonTable): Unit = {
-if (CarbonProperties.getInstance().getProperty(
-  CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT,
-  
CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT).toBoolean) {
-  new CarbonMergeFilesRDD(sparkContext, 
AbsoluteTableIdentifier.from(tablePath,
-carbonTable.getDatabaseName, 
carbonTable.getFactTableName).getTablePath,
-segmentIds).collect()
+  carbonTable: CarbonTable,
+  mergeIndexProperty: Option[Boolean]): Unit = {
+mergeIndexProperty match {
+  case Some(true) =>
+new CarbonMergeFilesRDD(sparkContext, 
AbsoluteTableIdentifier.from(tablePath,
+  carbonTable.getDatabaseName, 
carbonTable.getFactTableName).getTablePath,
+  segmentIds).collect()
+  case _ =>
+if (CarbonProperties.getInstance().getProperty(
+  CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT,
+  
CarbonCommonConstants.CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT).toBoolean) {
--- End diff --

Add validation for the boolean so that if the user passes a wrong boolean 
parameter, the default is taken.
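A minimal sketch of the suggested validation: fall back to the default when the supplied property value is not a valid boolean (Scala's `.toBoolean` would throw on such input). The helper name is hypothetical; CarbonData's actual property handling may differ:

```java
public class BooleanPropSketch {
    // Parse a user-supplied boolean property leniently: return the
    // default for null or any value other than "true"/"false".
    static boolean parseBooleanOrDefault(String value, boolean defaultValue) {
        if (value == null) {
            return defaultValue;
        }
        String v = value.trim().toLowerCase();
        if (v.equals("true")) {
            return true;
        }
        if (v.equals("false")) {
            return false;
        }
        // Wrong boolean parameter -> take the default, per the review comment.
        return defaultValue;
    }

    public static void main(String[] args) {
        System.out.println(parseBooleanOrDefault("TRUE", false)); // prints true
        System.out.println(parseBooleanOrDefault("yes", true));   // prints true (default)
        System.out.println(parseBooleanOrDefault(null, false));   // prints false
    }
}
```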


---


[GitHub] carbondata issue #1471: [WIP][CARBONDATA-1544][Datamap] Datamap FineGrain im...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1471
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1152/



---


[jira] [Updated] (CARBONDATA-1715) Carbon 1.3.0- Datamap BAD_RECORD_ACTION is not working as per the Document link.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1715:
-
Summary: Carbon 1.3.0- Datamap BAD_RECORD_ACTION is not working as per the 
Document link.  (was: Carbon 1.3.0- Bad Records BAD_RECORD_ACTION is not 
working as per the Document link.)

> Carbon 1.3.0- Datamap BAD_RECORD_ACTION is not working as per the Document 
> link.
> 
>
> Key: CARBONDATA-1715
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1715
> Project: CarbonData
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
>  Labels: Document
> Attachments: Bad_Records.PNG
>
>
> By default, BAD_RECORDS_ACTION = FORCE should be documented in the 
> "http://carbondata.apache.org/dml-operation-on-carbondata.html" document link, 
> but it is written as BAD_RECORDS_ACTION = FAIL.
> Expected result: BAD_RECORDS_ACTION = FORCE should be mentioned in the BAD 
> RECORDS HANDLING section of the document.
> Actual issue: BAD_RECORDS_ACTION = FAIL is present in the document link.





[jira] [Closed] (CARBONDATA-1716) Carbon 1.3.0-Table Comment- When unset the header is not removed in describe formatted.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi closed CARBONDATA-1716.

Resolution: Invalid

UNSET restores the default behavior, where the table comment property is 
provided by default in the desc formatted output.

>  Carbon 1.3.0-Table Comment- When unset the header is not removed in describe 
> formatted.
> 
>
> Key: CARBONDATA-1716
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1716
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
>  Labels: Functional
> Attachments: UNSET is not working as expected 1.PNG, UNSET is not 
> working as expected.PNG
>
>
> When UNSET, the header is not removed in the Describe Formatted.
> Create a table with comment.
> SET the comment
> UNSET the comment
> Describe formatted
> Expected Result: When UNSET the header should be removed in the Describe 
> Formatted.
> Actual result: When UNSET the header is not removed in the describe Formatted.





[jira] [Updated] (CARBONDATA-1726) Carbon1.3.0-Streaming - Select query from spark-shell does not execute successfully for streaming table load

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1726:

Summary: Carbon1.3.0-Streaming - Select query from spark-shell does not 
execute successfully for streaming table load  (was: Carbon1.3.0-Streaming - 
Select query from spark-sql does not execute successfully for streaming table 
load)

> Carbon1.3.0-Streaming - Select query from spark-shell does not execute 
> successfully for streaming table load
> 
>
> Key: CARBONDATA-1726
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1726
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster SUSE 11 SP4
>Reporter: Chetan Bhat
>  Labels: Functional
>
> Steps :
> // prepare csv file for batch loading
> cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin
> // generate streamSample.csv
> 10001,batch_1,city_1,0.1,school_1:school_11$20
> 10002,batch_2,city_2,0.2,school_2:school_22$30
> 10003,batch_3,city_3,0.3,school_3:school_33$40
> 10004,batch_4,city_4,0.4,school_4:school_44$50
> 10005,batch_5,city_5,0.5,school_5:school_55$60
> // put to hdfs /tmp/streamSample.csv
> ./hadoop fs -put streamSample.csv /tmp
> // spark-beeline
> cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
> bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 
> 5 --driver-memory 5G --num-executors 3 --class 
> org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
> /srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
>  "hdfs://hacluster/user/sparkhive/warehouse"
> bin/beeline -u jdbc:hive2://10.18.98.34:23040
> CREATE TABLE stream_table(
> id INT,
> name STRING,
> city STRING,
> salary FLOAT
> )
> STORED BY 'carbondata'
> TBLPROPERTIES('streaming'='true', 'sort_columns'='name');
> LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE 
> stream_table OPTIONS('HEADER'='false');
> // spark-shell 
> cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
> bin/spark-shell --master yarn-client
> import java.io.{File, PrintWriter}
> import java.net.ServerSocket
> import org.apache.spark.sql.{CarbonEnv, SparkSession}
> import org.apache.spark.sql.hive.CarbonRelation
> import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
> import org.apache.carbondata.core.constants.CarbonCommonConstants
> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
>  "/MM/dd")
> import org.apache.spark.sql.CarbonSession._
> val carbonSession = SparkSession.
>   builder().
>   appName("StreamExample").
>   config("spark.sql.warehouse.dir", 
> "hdfs://hacluster/user/sparkhive/warehouse").
>   config("javax.jdo.option.ConnectionURL", 
> "jdbc:mysql://10.18.98.34:3306/sparksql?characterEncoding=UTF-8").
>   config("javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver").
>   config("javax.jdo.option.ConnectionPassword", "huawei").
>   config("javax.jdo.option.ConnectionUserName", "sparksql").
>   getOrCreateCarbonSession()
>
> carbonSession.sparkContext.setLogLevel("ERROR")
> carbonSession.sql("select * from stream_table").show
> Issue : Select query from spark-sql does not execute successfully for 
> streaming table load.
> Expected : Select query from spark-sql should execute successfully for 
> streaming table load.





[jira] [Updated] (CARBONDATA-1726) Carbon1.3.0-Streaming - Select query from spark-shell does not execute successfully for streaming table load

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1726:

Description: 
Steps :
// prepare csv file for batch loading
cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin

// generate streamSample.csv

10001,batch_1,city_1,0.1,school_1:school_11$20
10002,batch_2,city_2,0.2,school_2:school_22$30
10003,batch_3,city_3,0.3,school_3:school_33$40
10004,batch_4,city_4,0.4,school_4:school_44$50
10005,batch_5,city_5,0.5,school_5:school_55$60

// put to hdfs /tmp/streamSample.csv
./hadoop fs -put streamSample.csv /tmp

// spark-beeline
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 
--driver-memory 5G --num-executors 3 --class 
org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
/srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
 "hdfs://hacluster/user/sparkhive/warehouse"

bin/beeline -u jdbc:hive2://10.18.98.34:23040

CREATE TABLE stream_table(
id INT,
name STRING,
city STRING,
salary FLOAT
)
STORED BY 'carbondata'
TBLPROPERTIES('streaming'='true', 'sort_columns'='name');

LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE 
stream_table OPTIONS('HEADER'='false');

// spark-shell 
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-shell --master yarn-client

import java.io.{File, PrintWriter}
import java.net.ServerSocket

import org.apache.spark.sql.{CarbonEnv, SparkSession}
import org.apache.spark.sql.hive.CarbonRelation
import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}

import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}

CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
 "/MM/dd")

import org.apache.spark.sql.CarbonSession._

val carbonSession = SparkSession.
  builder().
  appName("StreamExample").
  config("spark.sql.warehouse.dir", 
"hdfs://hacluster/user/sparkhive/warehouse").
  config("javax.jdo.option.ConnectionURL", 
"jdbc:mysql://10.18.98.34:3306/sparksql?characterEncoding=UTF-8").
  config("javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver").
  config("javax.jdo.option.ConnectionPassword", "huawei").
  config("javax.jdo.option.ConnectionUserName", "sparksql").
  getOrCreateCarbonSession()
   
carbonSession.sparkContext.setLogLevel("ERROR")

carbonSession.sql("select * from stream_table").show

Issue : Select query from spark-shell does not execute successfully for 
streaming table load.


Expected : Select query from spark-shell should execute successfully for 
streaming table load.

  was:
Steps :
// prepare csv file for batch loading
cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin

// generate streamSample.csv

10001,batch_1,city_1,0.1,school_1:school_11$20
10002,batch_2,city_2,0.2,school_2:school_22$30
10003,batch_3,city_3,0.3,school_3:school_33$40
10004,batch_4,city_4,0.4,school_4:school_44$50
10005,batch_5,city_5,0.5,school_5:school_55$60

// put to hdfs /tmp/streamSample.csv
./hadoop fs -put streamSample.csv /tmp

// spark-beeline
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 
--driver-memory 5G --num-executors 3 --class 
org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
/srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
 "hdfs://hacluster/user/sparkhive/warehouse"

bin/beeline -u jdbc:hive2://10.18.98.34:23040

CREATE TABLE stream_table(
id INT,
name STRING,
city STRING,
salary FLOAT
)
STORED BY 'carbondata'
TBLPROPERTIES('streaming'='true', 'sort_columns'='name');

LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE 
stream_table OPTIONS('HEADER'='false');

// spark-shell 
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-shell --master yarn-client

import java.io.{File, PrintWriter}
import java.net.ServerSocket

import org.apache.spark.sql.{CarbonEnv, SparkSession}
import org.apache.spark.sql.hive.CarbonRelation
import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}

import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}

CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
 "/MM/dd")

import org.apache.spark.sql.CarbonSession._

val carbonSession = SparkSession.
  builder().
  appName("StreamExample").
  config("spark.sql.warehouse.dir", 
"hdfs://hacluster/user/sparkhive/warehouse").
  

[jira] [Updated] (CARBONDATA-1726) Carbon1.3.0-Streaming - Select query from spark-sql does not execute successfully for streaming table load

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1726:

Description: 
Steps :
// prepare csv file for batch loading
cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin

// generate streamSample.csv

10001,batch_1,city_1,0.1,school_1:school_11$20
10002,batch_2,city_2,0.2,school_2:school_22$30
10003,batch_3,city_3,0.3,school_3:school_33$40
10004,batch_4,city_4,0.4,school_4:school_44$50
10005,batch_5,city_5,0.5,school_5:school_55$60

// put to hdfs /tmp/streamSample.csv
./hadoop fs -put streamSample.csv /tmp

// spark-beeline
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 
--driver-memory 5G --num-executors 3 --class 
org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
/srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
 "hdfs://hacluster/user/sparkhive/warehouse"

bin/beeline -u jdbc:hive2://10.18.98.34:23040

CREATE TABLE stream_table(
id INT,
name STRING,
city STRING,
salary FLOAT
)
STORED BY 'carbondata'
TBLPROPERTIES('streaming'='true', 'sort_columns'='name');

LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE 
stream_table OPTIONS('HEADER'='false');

// spark-shell 
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-shell --master yarn-client

import java.io.{File, PrintWriter}
import java.net.ServerSocket

import org.apache.spark.sql.{CarbonEnv, SparkSession}
import org.apache.spark.sql.hive.CarbonRelation
import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}

import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}

CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
 "/MM/dd")

import org.apache.spark.sql.CarbonSession._

val carbonSession = SparkSession.
  builder().
  appName("StreamExample").
  config("spark.sql.warehouse.dir", 
"hdfs://hacluster/user/sparkhive/warehouse").
  config("javax.jdo.option.ConnectionURL", 
"jdbc:mysql://10.18.98.34:3306/sparksql?characterEncoding=UTF-8").
  config("javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver").
  config("javax.jdo.option.ConnectionPassword", "huawei").
  config("javax.jdo.option.ConnectionUserName", "sparksql").
  getOrCreateCarbonSession()
   
carbonSession.sparkContext.setLogLevel("ERROR")

carbonSession.sql("select * from stream_table").show

Issue : Select query from spark-sql does not execute successfully for streaming 
table load.


Expected : Select query from spark-sql should execute successfully for 
streaming table load.

  was:
Steps :
// prepare csv file for batch loading
cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin

// generate streamSample.csv
id,name,city,salary,file
10001,batch_1,city_1,0.1,school_1:school_11$20
10002,batch_2,city_2,0.2,school_2:school_22$30
10003,batch_3,city_3,0.3,school_3:school_33$40
10004,batch_4,city_4,0.4,school_4:school_44$50
10005,batch_5,city_5,0.5,school_5:school_55$60

// put to hdfs /tmp/streamSample.csv
./hadoop fs -put streamSample.csv /tmp

// spark-beeline
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 
--driver-memory 5G --num-executors 3 --class 
org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
/srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
 "hdfs://hacluster/user/sparkhive/warehouse"

bin/beeline -u jdbc:hive2://10.18.98.34:23040

CREATE TABLE stream_table(
id INT,
name STRING,
city STRING,
salary FLOAT
)
STORED BY 'carbondata'
TBLPROPERTIES('streaming'='true', 'sort_columns'='name');

LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE 
stream_table OPTIONS('HEADER'='true');

// spark-shell 
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-shell --master yarn-client

import java.io.{File, PrintWriter}
import java.net.ServerSocket

import org.apache.spark.sql.{CarbonEnv, SparkSession}
import org.apache.spark.sql.hive.CarbonRelation
import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}

import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}

CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
 "/MM/dd")

import org.apache.spark.sql.CarbonSession._

val carbonSession = SparkSession.
  builder().
  appName("StreamExample").
  config("spark.sql.warehouse.dir", 
"hdfs://hacluster/user/sparkhive/warehouse").
  

[GitHub] carbondata pull request #1445: [WIP][CARBONDATA-1551] Add test cases for the...

2017-11-15 Thread xubo245
Github user xubo245 closed the pull request at:

https://github.com/apache/carbondata/pull/1445


---


[GitHub] carbondata pull request #1502: [CARBONDATA-1720] Wrong data displayed for <=...

2017-11-15 Thread dhatchayani
GitHub user dhatchayani opened a pull request:

https://github.com/apache/carbondata/pull/1502

[CARBONDATA-1720] Wrong data displayed for <= filter for timestamp 
column(dictionary column)

Issue:
The <= filter gives wrong results for a timestamp dictionary column.
Solution:
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.
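To make the surrogate-key reasoning concrete, here is a minimal, hypothetical Java sketch (not the actual CarbonData filter code; the class and method names are invented for illustration). The assumption it encodes is the one stated in the fix: surrogate 1 is reserved for the null/default member of a dictionary column, so an inclusive `<=` range over surrogate keys must start at 2 rather than treating 2 as the default.

```java
import java.util.stream.IntStream;

// Hypothetical illustration of the fix: surrogate 1 encodes the null/default
// member of a dictionary column, so real dictionary values start at surrogate 2.
public class LessThanEqualFilterSketch {
    static final int NULL_SURROGATE = 1; // reserved for the default (null) member

    // Inclusive surrogate-key range satisfying `column <= literal`, where
    // literalSurrogate is the dictionary surrogate assigned to the filter literal.
    static int[] surrogateRange(int literalSurrogate) {
        int firstRealValue = NULL_SURROGATE + 1; // the bug treated 2 as the default
        if (literalSurrogate < firstRealValue) {
            return new int[0]; // literal sorts before every real value
        }
        return IntStream.rangeClosed(firstRealValue, literalSurrogate).toArray();
    }

    public static void main(String[] args) {
        // '1970-01-01 05:30:00' was the first distinct value loaded, so it
        // received surrogate 2; the range must include it so all 300 rows match.
        int[] range = surrogateRange(2);
        System.out.println(range.length); // 1
        System.out.println(range[0]);     // 2
    }
}
```

With the off-by-one present (range starting at surrogate 3), the range for a literal at surrogate 2 would be empty apart from the default member, which is consistent with the reported count of 1 instead of 300.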

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [X] Testing done
UT Added
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhatchayani/incubator-carbondata 
lessthan_issue

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1502.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1502


commit c8a2ba00eb2bff5a452c0417dd2a1ed768ed6023
Author: dhatchayani 
Date:   2017-11-15T13:11:00Z

[CARBONDATA-1720] Wrong data displayed for <= filter for timestamp 
column(dictionary column)




---


[GitHub] carbondata pull request #1407: [CARBONDATA-1549] CarbonProperties should be ...

2017-11-15 Thread xubo245
Github user xubo245 closed the pull request at:

https://github.com/apache/carbondata/pull/1407


---


[GitHub] carbondata pull request #1406: [WIP][CARBONDATA-1506] SDV tests error in CI

2017-11-15 Thread xubo245
Github user xubo245 closed the pull request at:

https://github.com/apache/carbondata/pull/1406


---


[GitHub] carbondata issue #1259: [Review][CARBONDATA-1381] Add test cases for missing...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1259
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1730/



---


[jira] [Closed] (CARBONDATA-1724) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani closed CARBONDATA-1724.
---
Resolution: Duplicate

> Wrong data displayed for <= filter for timestamp column(dictionary column)
> --
>
> Key: CARBONDATA-1724
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1724
> Project: CarbonData
>  Issue Type: Bug
>Reporter: dhatchayani
>Assignee: dhatchayani
>
> *Issue:*
> <= filter is giving wrong results for a timestamp dictionary column
> *Steps to reproduce:*
> (1) Create a table with a timestamp dictionary column
> create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
> ('DICTIONARY_INCLUDE'='dob')
> (2) Load data
> 1970-01-01 05:30:00 (same value as 300 records)
> (3) Apply filter on table
> select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |300      |
> +---------+
> select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |1        |
> +---------+
> Both the queries should give us the same results.
> Solution:
> In the less-than-or-equal filter, surrogate 2 was being treated as the 
> default value, but surrogate 1 is actually reserved for the default value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (CARBONDATA-1725) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani closed CARBONDATA-1725.
---
Resolution: Duplicate

> Wrong data displayed for <= filter for timestamp column(dictionary column)
> --
>
> Key: CARBONDATA-1725
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1725
> Project: CarbonData
>  Issue Type: Bug
>Reporter: dhatchayani
>Assignee: dhatchayani
>
> *Issue:*
> <= filter is giving wrong results for a timestamp dictionary column
> *Steps to reproduce:*
> (1) Create a table with a timestamp dictionary column
> create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
> ('DICTIONARY_INCLUDE'='dob')
> (2) Load data
> 1970-01-01 05:30:00 (same value as 300 records)
> (3) Apply filter on table
> select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |300      |
> +---------+
> select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |1        |
> +---------+
> Both the queries should give us the same results.
> *+Solution:+*
> In the less-than-or-equal filter, surrogate 2 was being treated as the 
> default value, but surrogate 1 is actually reserved for the default value.





[jira] [Closed] (CARBONDATA-1722) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani closed CARBONDATA-1722.
---
Resolution: Duplicate

> Wrong data displayed for <= filter for timestamp column(dictionary column)
> --
>
> Key: CARBONDATA-1722
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1722
> Project: CarbonData
>  Issue Type: Bug
>Reporter: dhatchayani
>Assignee: dhatchayani
>
> *Issue:*
> <= filter is giving wrong results for a timestamp dictionary column
> *Steps to reproduce:*
> (1) Create a table with a timestamp dictionary column
> create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
> ('DICTIONARY_INCLUDE'='dob')
> (2) Load data
> 1970-01-01 05:30:00 (same value as 300 records)
> (3) Apply filter on table
> select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |300      |
> +---------+
> select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |1        |
> +---------+
> Both the queries should give us the same results.
> Solution:
> In the less-than-or-equal filter, surrogate 2 was being treated as the 
> default value, but surrogate 1 is actually reserved for the default value.





[jira] [Closed] (CARBONDATA-1721) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani closed CARBONDATA-1721.
---
Resolution: Duplicate

> Wrong data displayed for <= filter for timestamp column(dictionary column)
> --
>
> Key: CARBONDATA-1721
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1721
> Project: CarbonData
>  Issue Type: Bug
>Reporter: dhatchayani
>Assignee: dhatchayani
>
> *Issue:*
> <= filter is giving wrong results for a timestamp dictionary column
> *Steps to reproduce:*
> (1) Create a table with a timestamp dictionary column
> create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
> ('DICTIONARY_INCLUDE'='dob')
> (2) Load data
> 1970-01-01 05:30:00 (same value as 300 records)
> (3) Apply filter on table
> select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |300      |
> +---------+
> select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |1        |
> +---------+
> Both the queries should give us the same results.
> Solution:
> In the less-than-or-equal filter, surrogate 2 was being treated as the 
> default value, but surrogate 1 is actually reserved for the default value.





[jira] [Created] (CARBONDATA-1726) Carbon1.3.0-Streaming - Select query from spark-sql does not execute successfully for streaming table load

2017-11-15 Thread Chetan Bhat (JIRA)
Chetan Bhat created CARBONDATA-1726:
---

 Summary: Carbon1.3.0-Streaming - Select query from spark-sql does 
not execute successfully for streaming table load
 Key: CARBONDATA-1726
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1726
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 1.3.0
 Environment: 3 node ant cluster SUSE 11 SP4
Reporter: Chetan Bhat


Steps :
// prepare csv file for batch loading
cd /srv/spark2.2Bigdata/install/hadoop/datanode/bin

// generate streamSample.csv
id,name,city,salary,file
10001,batch_1,city_1,0.1,school_1:school_11$20
10002,batch_2,city_2,0.2,school_2:school_22$30
10003,batch_3,city_3,0.3,school_3:school_33$40
10004,batch_4,city_4,0.4,school_4:school_44$50
10005,batch_5,city_5,0.5,school_5:school_55$60

// put to hdfs /tmp/streamSample.csv
./hadoop fs -put streamSample.csv /tmp

// spark-beeline
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-submit --master yarn-client --executor-memory 10G --executor-cores 5 
--driver-memory 5G --num-executors 3 --class 
org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
/srv/spark2.2Bigdata/install/spark/sparkJdbc/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
 "hdfs://hacluster/user/sparkhive/warehouse"

bin/beeline -u jdbc:hive2://10.18.98.34:23040

CREATE TABLE stream_table(
id INT,
name STRING,
city STRING,
salary FLOAT
)
STORED BY 'carbondata'
TBLPROPERTIES('streaming'='true', 'sort_columns'='name');

LOAD DATA LOCAL INPATH 'hdfs://hacluster/chetan/streamSample.csv' INTO TABLE 
stream_table OPTIONS('HEADER'='true');

// spark-shell 
cd /srv/spark2.2Bigdata/install/spark/sparkJdbc
bin/spark-shell --master yarn-client

import java.io.{File, PrintWriter}
import java.net.ServerSocket

import org.apache.spark.sql.{CarbonEnv, SparkSession}
import org.apache.spark.sql.hive.CarbonRelation
import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}

import org.apache.carbondata.core.constants.CarbonCommonConstants
import org.apache.carbondata.core.util.CarbonProperties
import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}

CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
 "/MM/dd")

import org.apache.spark.sql.CarbonSession._

val carbonSession = SparkSession.
  builder().
  appName("StreamExample").
  config("spark.sql.warehouse.dir", 
"hdfs://hacluster/user/sparkhive/warehouse").
  config("javax.jdo.option.ConnectionURL", 
"jdbc:mysql://10.18.98.34:3306/sparksql?characterEncoding=UTF-8").
  config("javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver").
  config("javax.jdo.option.ConnectionPassword", "huawei").
  config("javax.jdo.option.ConnectionUserName", "sparksql").
  getOrCreateCarbonSession()
   
carbonSession.sparkContext.setLogLevel("ERROR")

carbonSession.sql("select * from stream_table").show

Issue : Select query from spark-sql does not execute successfully for streaming 
table load.


Expected : Select query from spark-sql should execute successfully for 
streaming table load.





[jira] [Created] (CARBONDATA-1725) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1725:
---

 Summary: Wrong data displayed for <= filter for timestamp 
column(dictionary column)
 Key: CARBONDATA-1725
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1725
 Project: CarbonData
  Issue Type: Bug
Reporter: dhatchayani
Assignee: dhatchayani


*Issue:*
<= filter is giving wrong results for a timestamp dictionary column

*Steps to reproduce:*
(1) Create a table with a timestamp dictionary column
create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
('DICTIONARY_INCLUDE'='dob')
(2) Load data
1970-01-01 05:30:00 (same value as 300 records)
(3) Apply filter on table
select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|300      |
+---------+
select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|1        |
+---------+

Both the queries should give us the same results.

Solution:
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.





[jira] [Updated] (CARBONDATA-1725) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhatchayani updated CARBONDATA-1725:

Description: 
*Issue:*
<= filter is giving wrong results for a timestamp dictionary column

*Steps to reproduce:*
(1) Create a table with a timestamp dictionary column
create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
('DICTIONARY_INCLUDE'='dob')
(2) Load data
1970-01-01 05:30:00 (same value as 300 records)
(3) Apply filter on table
select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|300      |
+---------+
select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|1        |
+---------+

Both the queries should give us the same results.

*+Solution:+*
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.

  was:
*Issue:*
<= filter is giving wrong results for a timestamp dictionary column

*Steps to reproduce:*
(1) Create a table with a timestamp dictionary column
create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
('DICTIONARY_INCLUDE'='dob')
(2) Load data
1970-01-01 05:30:00 (same value as 300 records)
(3) Apply filter on table
select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|300      |
+---------+
select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|1        |
+---------+

Both the queries should give us the same results.

Solution:
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.


> Wrong data displayed for <= filter for timestamp column(dictionary column)
> --
>
> Key: CARBONDATA-1725
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1725
> Project: CarbonData
>  Issue Type: Bug
>Reporter: dhatchayani
>Assignee: dhatchayani
>
> *Issue:*
> <= filter is giving wrong results for a timestamp dictionary column
> *Steps to reproduce:*
> (1) Create a table with a timestamp dictionary column
> create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
> ('DICTIONARY_INCLUDE'='dob')
> (2) Load data
> 1970-01-01 05:30:00 (same value as 300 records)
> (3) Apply filter on table
> select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |300      |
> +---------+
> select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
> +---------+
> |count(1) |
> +---------+
> |1        |
> +---------+
> Both the queries should give us the same results.
> *+Solution:+*
> In the less-than-or-equal filter, surrogate 2 was being treated as the 
> default value, but surrogate 1 is actually reserved for the default value.





[jira] [Created] (CARBONDATA-1720) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1720:
---

 Summary: Wrong data displayed for <= filter for timestamp 
column(dictionary column)
 Key: CARBONDATA-1720
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1720
 Project: CarbonData
  Issue Type: Bug
Reporter: dhatchayani
Assignee: dhatchayani


*Issue:*
<= filter is giving wrong results for a timestamp dictionary column

*Steps to reproduce:*
(1) Create a table with a timestamp dictionary column
create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
('DICTIONARY_INCLUDE'='dob')
(2) Load data
1970-01-01 05:30:00 (same value as 300 records)
(3) Apply filter on table
select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|300      |
+---------+
select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|1        |
+---------+

Both the queries should give us the same results.

Solution:
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.





[jira] [Created] (CARBONDATA-1724) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1724:
---

 Summary: Wrong data displayed for <= filter for timestamp 
column(dictionary column)
 Key: CARBONDATA-1724
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1724
 Project: CarbonData
  Issue Type: Bug
Reporter: dhatchayani
Assignee: dhatchayani


*Issue:*
<= filter is giving wrong results for a timestamp dictionary column

*Steps to reproduce:*
(1) Create a table with a timestamp dictionary column
create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
('DICTIONARY_INCLUDE'='dob')
(2) Load data
1970-01-01 05:30:00 (same value as 300 records)
(3) Apply filter on table
select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|300      |
+---------+
select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|1        |
+---------+

Both the queries should give us the same results.

Solution:
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.





[jira] [Created] (CARBONDATA-1723) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1723:
---

 Summary: Wrong data displayed for <= filter for timestamp 
column(dictionary column)
 Key: CARBONDATA-1723
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1723
 Project: CarbonData
  Issue Type: Bug
Reporter: dhatchayani
Assignee: dhatchayani








[jira] [Created] (CARBONDATA-1721) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1721:
---

 Summary: Wrong data displayed for <= filter for timestamp 
column(dictionary column)
 Key: CARBONDATA-1721
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1721
 Project: CarbonData
  Issue Type: Bug
Reporter: dhatchayani
Assignee: dhatchayani


*Issue:*
<= filter is giving wrong results for a timestamp dictionary column

*Steps to reproduce:*
(1) Create a table with a timestamp dictionary column
create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
('DICTIONARY_INCLUDE'='dob')
(2) Load data
1970-01-01 05:30:00 (same value as 300 records)
(3) Apply filter on table
select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|300      |
+---------+
select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|1        |
+---------+

Both the queries should give us the same results.

Solution:
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.





[jira] [Created] (CARBONDATA-1722) Wrong data displayed for <= filter for timestamp column(dictionary column)

2017-11-15 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1722:
---

 Summary: Wrong data displayed for <= filter for timestamp 
column(dictionary column)
 Key: CARBONDATA-1722
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1722
 Project: CarbonData
  Issue Type: Bug
Reporter: dhatchayani
Assignee: dhatchayani


*Issue:*
<= filter is giving wrong results for a timestamp dictionary column

*Steps to reproduce:*
(1) Create a table with a timestamp dictionary column
create table t1(dob timestamp) stored by 'carbondata' TBLPROPERTIES 
('DICTIONARY_INCLUDE'='dob')
(2) Load data
1970-01-01 05:30:00 (same value as 300 records)
(3) Apply filter on table
select count(*) from t1 where dob=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|300      |
+---------+
select count(*) from t1 where dob<=cast('1970-01-01 05:30:00' as timestamp);
+---------+
|count(1) |
+---------+
|1        |
+---------+

Both the queries should give us the same results.

Solution:
In the less-than-or-equal filter, surrogate 2 was being treated as the default 
value, but surrogate 1 is actually reserved for the default value.





[GitHub] carbondata issue #1494: [CARBONDATA-1706] Making index merge DDL insensitive...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1494
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1151/



---


[GitHub] carbondata issue #1494: [CARBONDATA-1706] Making index merge DDL insensitive...

2017-11-15 Thread dhatchayani
Github user dhatchayani commented on the issue:

https://github.com/apache/carbondata/pull/1494
  
retest this please


---


[GitHub] carbondata issue #1500: [CARBONDATA-1717]Remove spark broadcast for gettting...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1500
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1150/



---


[GitHub] carbondata issue #1494: [CARBONDATA-1706] Making index merge DDL insensitive...

2017-11-15 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1494
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/1729/



---


[jira] [Updated] (CARBONDATA-1708) Carbon1.3.0 Dictionary creation: By default dictionary is not created for string column

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1708:

Labels: Functional  (was: )

> Carbon1.3.0 Dictionary creation: By default dictionary is not created for 
> string column
> ---
>
> Key: CARBONDATA-1708
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1708
> Project: CarbonData
>  Issue Type: Bug
>  Components: other
>Affects Versions: 1.3.0
> Environment: 3 Node ant cluster
>Reporter: Ajeet Rai
>Priority: Minor
>  Labels: Functional
>
> By default, a dictionary is not created for string columns.
> Steps:
> 1: Create a table with one column of string data type:
> create table check_dict(id int, name string)
> 2: insert into check_dict select 1,'abc'
> 3: Describe the table to check the dictionary column:
> desc formatted check_dict;
> 4: Observe that the name column is not a DICTIONARY column.
> Issue: This does not match the documentation.
> Link: https://carbondata.apache.org/ddl-operation-on-carbondata.html
> Expected: Dictionary encoding is enabled by default for all String columns, 
> and disabled for non-String columns.





[jira] [Updated] (CARBONDATA-1713) Carbon1.3.0-Pre-AggregateTable - Aggregate query on main table fails after creating pre-aggregate table

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1713:

Labels: Functional sanity  (was: sanity)

> Carbon1.3.0-Pre-AggregateTable - Aggregate query on main table fails after 
> creating pre-aggregate table
> ---
>
> Key: CARBONDATA-1713
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1713
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.3.0
> Environment: ANT Test cluster - 3 node
>Reporter: Ramakrishna S
>Assignee: kumar vishal
>Priority: Minor
>  Labels: Functional, sanity
> Fix For: 1.3.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> 0: jdbc:hive2://10.18.98.34:23040> load data inpath 
> "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> Error: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or 
> view 'lineitem' not found in database 'default'; (state=,code=0)
> 0: jdbc:hive2://10.18.98.34:23040> create table if not exists lineitem(
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPMODE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPINSTRUCT string,
> 0: jdbc:hive2://10.18.98.34:23040> L_RETURNFLAG string,
> 0: jdbc:hive2://10.18.98.34:23040> L_RECEIPTDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_ORDERKEY string,
> 0: jdbc:hive2://10.18.98.34:23040> L_PARTKEY string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SUPPKEY   string,
> 0: jdbc:hive2://10.18.98.34:23040> L_LINENUMBER int,
> 0: jdbc:hive2://10.18.98.34:23040> L_QUANTITY double,
> 0: jdbc:hive2://10.18.98.34:23040> L_EXTENDEDPRICE double,
> 0: jdbc:hive2://10.18.98.34:23040> L_DISCOUNT double,
> 0: jdbc:hive2://10.18.98.34:23040> L_TAX double,
> 0: jdbc:hive2://10.18.98.34:23040> L_LINESTATUS string,
> 0: jdbc:hive2://10.18.98.34:23040> L_COMMITDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_COMMENT  string
> 0: jdbc:hive2://10.18.98.34:23040> ) STORED BY 'org.apache.carbondata.format'
> 0: jdbc:hive2://10.18.98.34:23040> TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.338 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> load data inpath 
> "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (48.634 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> create datamap agr_lineitem ON TABLE 
> lineitem USING "org.apache.carbondata.datamap.AggregateDataMapHandler" as 
> select L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from 
> lineitem group by  L_RETURNFLAG, L_LINESTATUS;
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (16.552 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem 
> group by  L_RETURNFLAG, L_LINESTATUS;
> Error: org.apache.spark.sql.AnalysisException: Column doesnot exists in Pre 
> Aggregate table; (state=,code=0)
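[Editorial note, not from the original report: since the failing aggregate query is exactly the select used to define the datamap, one hypothetical triage step is to drop the datamap and re-run the query against the main table alone. The DROP DATAMAP syntax below is an assumption about what this 1.3.0 build supports; verify against the build in use.]

```sql
-- Sketch: remove the pre-aggregate datamap, then re-run the failing query
-- so it can only be answered from the main table. Syntax assumed, not
-- confirmed by this thread.
DROP DATAMAP IF EXISTS agr_lineitem ON TABLE lineitem;

SELECT L_RETURNFLAG, L_LINESTATUS, sum(L_QUANTITY), sum(L_EXTENDEDPRICE)
FROM lineitem
GROUP BY L_RETURNFLAG, L_LINESTATUS;
```

If the query succeeds after the drop, the failure is in the pre-aggregate query rewrite rather than in the main table data.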





[jira] [Updated] (CARBONDATA-1715) Carbon 1.3.0- Bad Records BAD_RECORD_ACTION is not working as per the Document link.

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1715:

Labels: Document  (was: )

> Carbon 1.3.0- Bad Records BAD_RECORD_ACTION is not working as per the 
> Document link.
> 
>
> Key: CARBONDATA-1715
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1715
> Project: CarbonData
>  Issue Type: Bug
>  Components: docs
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
>  Labels: Document
> Attachments: Bad_Records.PNG
>
>
> By default, BAD_RECORDS_ACTION = FORCE should be documented in the 
> "http://carbondata.apache.org/dml-operation-on-carbondata.html" document link, 
> but it is written as BAD_RECORDS_ACTION = FAIL.
> Expected result: BAD_RECORDS_ACTION = FORCE should be mentioned in the BAD 
> RECORDS HANDLING section of the document.
> Actual issue: BAD_RECORDS_ACTION = FAIL is present in the document link.
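[Editorial note: regardless of which default the documentation states, the action can be set explicitly per load via the OPTIONS clause already shown elsewhere in this thread, so behavior does not depend on the documented default. Table name and path below are taken from another report in this digest and are illustrative only.]

```sql
-- Setting the bad-records action explicitly per load; the BAD_RECORDS_ACTION
-- option name appears verbatim in other loads in this thread.
LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' INTO TABLE uniqdata
OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE');
```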





[jira] [Updated] (CARBONDATA-1711) Carbon1.3.0-Pre-AggregateTable - Show datamap on table does not work

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1711:

Labels: Functional  (was: )

> Carbon1.3.0-Pre-AggregateTable - Show datamap  on table  does not 
> work
> -
>
> Key: CARBONDATA-1711
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1711
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.3.0
> Environment: Test
>Reporter: Ramakrishna S
>Priority: Minor
>  Labels: Functional
> Fix For: 1.3.0
>
>
> 0: jdbc:hive2://10.18.98.34:23040> create datamap agr_lineitem ON TABLE 
> lineitem USING "org.apache.carbondata.datamap.AggregateDataMapHandler" as 
> select L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from 
> lineitem group by  L_RETURNFLAG, L_LINESTATUS;
> Error: java.lang.RuntimeException: Table [lineitem_agr_lineitem] already 
> exists under database [default] (state=,code=0)
> 0: jdbc:hive2://10.18.98.34:23040> show tables;
> +-----------+-----------------------------------+--------------+--+
> | database  | tableName                         | isTemporary  |
> +-----------+-----------------------------------+--------------+--+
> | default   | flow_carbon_test4                 | false        |
> | default   | jl_r3                             | false        |
> | default   | lineitem                          | false        |
> | default   | lineitem_agr_lineitem             | false        |
> | default   | sensor_reading_blockblank_false   | false        |
> | default   | sensor_reading_blockblank_false1  | false        |
> | default   | sensor_reading_blockblank_false2  | false        |
> | default   | sensor_reading_false              | false        |
> | default   | sensor_reading_true               | false        |
> | default   | t1                                | false        |
> | default   | t1_agg_t1                         | false        |
> | default   | tc4                               | false        |
> | default   | uniqdata                          | false        |
> +-----------+-----------------------------------+--------------+--+
> 13 rows selected (0.04 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> show datamap on table lineitem;
> Error: java.lang.RuntimeException:
> BaseSqlParser
> missing 'FUNCTIONS' at 'on'(line 1, pos 13)
> == SQL ==
> show datamap on table lineitem
> -^^^
> CarbonSqlParser [1.6] failure: identifier matching regex (?i)SEGMENTS 
> expected
> show datamap on table lineitem





[jira] [Updated] (CARBONDATA-1716) Carbon 1.3.0-Table Comment- When unset the header is not removed in describe formatted.

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1716:

Labels: Functional  (was: )

>  Carbon 1.3.0-Table Comment- When unset the header is not removed in describe 
> formatted.
> 
>
> Key: CARBONDATA-1716
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1716
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
>  Labels: Functional
> Attachments: UNSET is not working as expected 1.PNG, UNSET is not 
> working as expected.PNG
>
>
> When UNSET, the comment header is not removed from the Describe Formatted output.
> Steps:
> 1. Create a table with a comment.
> 2. SET the comment.
> 3. UNSET the comment.
> 4. Run Describe Formatted.
> Expected Result: After UNSET, the header should be removed from the Describe 
> Formatted output.
> Actual result: After UNSET, the header is not removed from the Describe 
> Formatted output.





[GitHub] carbondata issue #1500: [CARBONDATA-1717]Remove spark broadcast for gettting...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1500
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1149/



---


[jira] [Closed] (CARBONDATA-1718) carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi closed CARBONDATA-1718.

Resolution: Invalid

The configuration set in the property file was not correct.

> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> --
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: action redirect.PNG, action redirect1.PNG
>
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When carbon.options.bad.records.action=REDIRECT is set in carbon.properties, a 
> CSV of redirected bad records should be created, but no CSV is created.
> Expected Result: With carbon.options.bad.records.action=REDIRECT set, the CSV 
> should be created.
> Actual Result: With carbon.options.bad.records.action=REDIRECT set, the CSV is 
> not created.





[jira] [Updated] (CARBONDATA-1718) carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Summary: carbon.options.bad.records.action=REDIRECT configured in 
carbon.properties is not working as expected.  (was: Carbon 1.3.0 Bad Record 
-carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
not working as expected.)

> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> --
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: action redirect.PNG, action redirect1.PNG
>
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When carbon.options.bad.records.action=REDIRECT is set in carbon.properties, a 
> CSV of redirected bad records should be created, but no CSV is created.
> Expected Result: With carbon.options.bad.records.action=REDIRECT set, the CSV 
> should be created.
> Actual Result: With carbon.options.bad.records.action=REDIRECT set, the CSV is 
> not created.





[jira] [Updated] (CARBONDATA-1714) Carbon1.3.0-Alter Table - Select columns with is null and limit throws ArrayIndexOutOfBoundsException after multiple alter

2017-11-15 Thread Chetan Bhat (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-1714:

Labels: DFX  (was: )

> Carbon1.3.0-Alter Table - Select columns with is null and limit throws 
> ArrayIndexOutOfBoundsException after multiple alter
> --
>
> Key: CARBONDATA-1714
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1714
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster- SUSE 11 SP4
>Reporter: Chetan Bhat
>  Labels: DFX
>
> Steps -
> Execute the below queries in sequence.
> create database test;
> use test;
> CREATE TABLE uniqdata111785 (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_INCLUDE'='INTEGER_COLUMN1,CUST_ID');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata111785 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> alter table test.uniqdata111785 RENAME TO  uniqdata1117856;
> select * from test.uniqdata1117856 limit 100;
> ALTER TABLE test.uniqdata1117856 ADD COLUMNS (cust_name1 int);
> select * from test.uniqdata1117856 where cust_name1 is null limit 100;
> ALTER TABLE test.uniqdata1117856 DROP COLUMNS (cust_name1);
> select * from test.uniqdata1117856 where cust_name1 is null limit 100;
> ALTER TABLE test.uniqdata1117856 CHANGE CUST_ID CUST_ID BIGINT;
> select * from test.uniqdata1117856 where CUST_ID in (10013,10011,1,10019) 
> limit 10;
> ALTER  TABLE test.uniqdata1117856 ADD COLUMNS (a1 INT, b1 STRING) 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='b1');
> select a1,b1 from test.uniqdata1117856  where a1 is null and b1 is null limit 
> 100;
> Actual Issue : Select columns with is null and limit throws 
> ArrayIndexOutOfBoundsException after multiple alter operations.
> 0: jdbc:hive2://10.18.98.34:23040> select a1,b1 from test.uniqdata1117856  
> where a1 is null and b1 is null limit 100;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task 0 in stage 9.0 failed 4 times, most recent failure: Lost task 0.3 in 
> stage 9.0 (TID 14, BLR114269, executor 2): 
> java.lang.ArrayIndexOutOfBoundsException: 7
> at 
> org.apache.carbondata.core.scan.model.QueryModel.setDimAndMsrColumnNode(QueryModel.java:223)
> at 
> org.apache.carbondata.core.scan.model.QueryModel.processFilterExpression(QueryModel.java:172)
> at 
> org.apache.carbondata.core.scan.model.QueryModel.processFilterExpression(QueryModel.java:181)
> at 
> org.apache.carbondata.hadoop.util.CarbonInputFormatUtil.processFilterExpression(CarbonInputFormatUtil.java:118)
> at 
> org.apache.carbondata.hadoop.api.CarbonTableInputFormat.getQueryModel(CarbonTableInputFormat.java:791)
> at 
> org.apache.carbondata.spark.rdd.CarbonScanRDD.internalCompute(CarbonScanRDD.scala:250)
> at 
> org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> at org.apache.spark.scheduler.Task.run(Task.scala:99)
> at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace: (state=,code=0)
> Expected : The select query should be successful after multiple alter 
> operations.





[jira] [Updated] (CARBONDATA-1719) Carbon1.3.0-Pre-AggregateTable - Empty segment is created when pre-aggr table created in parallel with table load, aggregate query returns no data

2017-11-15 Thread Ramakrishna S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna S updated CARBONDATA-1719:
--
Summary: Carbon1.3.0-Pre-AggregateTable - Empty segment is created when 
pre-aggr table created in parallel with table load, aggregate query returns no 
data  (was: Carbon1.3.0-Pre-AggregateTable - Empty segment is created if 
pre-aggr table created in parallel with table load, aggregate query returns no 
data)

> Carbon1.3.0-Pre-AggregateTable - Empty segment is created when pre-aggr table 
> created in parallel with table load, aggregate query returns no data
> --
>
> Key: CARBONDATA-1719
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1719
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: Test - 3 node ant cluster
>Reporter: Ramakrishna S
>  Labels: DFX
> Fix For: 1.3.0
>
>
> 1. Create a table
> create table if not exists lineitem3(L_SHIPDATE string,L_SHIPMODE 
> string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
> string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
> int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
> double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
> 'org.apache.carbondata.format' TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> 2. Run load queries and create pre-agg table queries in diff console:
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
> lineitem3 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> create datamap agr_lineitem3 ON TABLE lineitem3 USING 
> "org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem3 
> group by  L_RETURNFLAG, L_LINESTATUS;
> 3.  Check table content using aggregate query:
> select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
> lineitem3 group by l_returnflag, l_linestatus;
> 0: jdbc:hive2://10.18.98.34:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem3 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+-----------------------+--+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
> +---------------+---------------+------------------+-----------------------+--+
> +---------------+---------------+------------------+-----------------------+--+
> No rows selected (1.258 seconds)
> HDFS data:
> BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs 
> -ls /carbonstore/default/lineitem3_agr_lineitem3/Fact/Part0/Segment_0
> BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs 
> -ls /carbonstore/default/lineitem3/Fact/Part0/Segment_0
> Found 27 items
> -rw-r--r--   2 root users  22148 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/1510740293106.carbonindexmerge
> -rw-r--r--   2 root users   58353052 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58351680 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno1-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58364823 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-1_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58356303 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-2_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58342246 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58353186 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno1-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58352964 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-1_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58357183 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-2_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58345739 2017-11-15 18:05 
> 

[jira] [Updated] (CARBONDATA-1719) Carbon1.3.0-Pre-AggregateTable - Empty segment is created if pre-aggr table created in parallel with table load, aggregate query returns no data

2017-11-15 Thread Ramakrishna S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna S updated CARBONDATA-1719:
--
Request participants: Kanaka Kumar Avvaru  (was: )

> Carbon1.3.0-Pre-AggregateTable - Empty segment is created if pre-aggr table 
> created in parallel with table load, aggregate query returns no data
> 
>
> Key: CARBONDATA-1719
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1719
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: Test - 3 node ant cluster
>Reporter: Ramakrishna S
>  Labels: DFX
> Fix For: 1.3.0
>
>
> 1. Create a table
> create table if not exists lineitem3(L_SHIPDATE string,L_SHIPMODE 
> string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
> string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
> int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
> double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
> 'org.apache.carbondata.format' TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> 2. Run load queries and create pre-agg table queries in diff console:
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
> lineitem3 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> create datamap agr_lineitem3 ON TABLE lineitem3 USING 
> "org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem3 
> group by  L_RETURNFLAG, L_LINESTATUS;
> 3.  Check table content using aggregate query:
> select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
> lineitem3 group by l_returnflag, l_linestatus;
> 0: jdbc:hive2://10.18.98.34:23040> select 
> l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem3 
> group by l_returnflag, l_linestatus;
> +---------------+---------------+------------------+-----------------------+--+
> | l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
> +---------------+---------------+------------------+-----------------------+--+
> +---------------+---------------+------------------+-----------------------+--+
> No rows selected (1.258 seconds)
> HDFS data:
> BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs 
> -ls /carbonstore/default/lineitem3_agr_lineitem3/Fact/Part0/Segment_0
> BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs 
> -ls /carbonstore/default/lineitem3/Fact/Part0/Segment_0
> Found 27 items
> -rw-r--r--   2 root users  22148 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/1510740293106.carbonindexmerge
> -rw-r--r--   2 root users   58353052 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58351680 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno1-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58364823 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-1_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58356303 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-2_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58342246 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58353186 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno1-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58352964 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-1_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58357183 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-2_batchno0-0-1510740300247.carbondata
> -rw-r--r--   2 root users   58345739 2017-11-15 18:05 
> /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-2-0_batchno0-0-1510740300247.carbondata
> Yarn job stages:
> 29
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
> lineitem3 
> 

[jira] [Updated] (CARBONDATA-1719) Carbon1.3.0-Pre-AggregateTable - Empty segment is created if pre-aggr table created in parallel with table load, aggregate query returns no data

2017-11-15 Thread Ramakrishna S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna S updated CARBONDATA-1719:
--
Description: 
1. Create a table
create table if not exists lineitem3(L_SHIPDATE string,L_SHIPMODE 
string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
'org.apache.carbondata.format' TBLPROPERTIES 
('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
2. Run load queries and create pre-agg table queries in diff console:
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 
options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');

create datamap agr_lineitem3 ON TABLE lineitem3 USING 
"org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem3 
group by  L_RETURNFLAG, L_LINESTATUS;

3.  Check table content using aggregate query:
select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
lineitem3 group by l_returnflag, l_linestatus;

0: jdbc:hive2://10.18.98.34:23040> select 
l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem3 
group by l_returnflag, l_linestatus;
+---------------+---------------+------------------+-----------------------+--+
| l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
+---------------+---------------+------------------+-----------------------+--+
+---------------+---------------+------------------+-----------------------+--+
No rows selected (1.258 seconds)


HDFS data:
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3_agr_lineitem3/Fact/Part0/Segment_0
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0
Found 27 items
-rw-r--r--   2 root users  22148 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/1510740293106.carbonindexmerge
-rw-r--r--   2 root users   58353052 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58351680 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58364823 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58356303 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58342246 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58353186 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58352964 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58357183 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58345739 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-2-0_batchno0-0-1510740300247.carbondata

Yarn job stages:
29  
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 
options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT')
collect at CommonUtil.scala:858  2017/11/15 18:10:51  0.1 s  1/1
28  
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 
options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT')
collect at CarbonDataRDDFactory.scala:918  2017/11/15 18:10:50  1 s  3/3  10.8 KB
27  
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 

[jira] [Updated] (CARBONDATA-1719) Carbon1.3.0-Pre-AggregateTable - Empty segment is created if pre-aggr table created in parallel with table load, aggregate query returns no data

2017-11-15 Thread Ramakrishna S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna S updated CARBONDATA-1719:
--
Description: 
1. Create a table
create table if not exists lineitem3(L_SHIPDATE string,L_SHIPMODE 
string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
'org.apache.carbondata.format' TBLPROPERTIES 
('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
2. Run load queries and create pre-agg table queries in diff console:
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 
options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');

create datamap agr_lineitem3 ON TABLE lineitem3 USING 
"org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem3 
group by  L_RETURNFLAG, L_LINESTATUS;

3.  Check table content using aggregate query:
select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
lineitem3 group by l_returnflag, l_linestatus;

0: jdbc:hive2://10.18.98.34:23040> select 
l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from lineitem3 
group by l_returnflag, l_linestatus;
+---------------+---------------+------------------+-----------------------+--+
| l_returnflag  | l_linestatus  | sum(l_quantity)  | sum(l_extendedprice)  |
+---------------+---------------+------------------+-----------------------+--+
+---------------+---------------+------------------+-----------------------+--+
No rows selected (1.258 seconds)


HDFS data:
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3_agr_lineitem3/Fact/Part0/Segment_0
17/11/15 18:15:18 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0
17/11/15 18:15:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 27 items
-rw-r--r--   2 root users  22148 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/1510740293106.carbonindexmerge
-rw-r--r--   2 root users   58353052 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58351680 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58364823 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58356303 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58342246 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58353186 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58352964 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58357183 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58345739 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-2-0_batchno0-0-1510740300247.carbondata

Yarn job stages:
Stage 29: load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem3 options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT')
          collect at CommonUtil.scala:858 (+details) - submitted 2017/11/15 18:10:51, duration 0.1 s, tasks 1/1
Stage 28: load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem3 options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT')

[jira] [Created] (CARBONDATA-1719) Carbon1.3.0-Pre-AggregateTable - Empty segment is created if pre-aggr table created in parallel with table load, aggregate query returns no data

2017-11-15 Thread Ramakrishna S (JIRA)
Ramakrishna S created CARBONDATA-1719:
-

 Summary: Carbon1.3.0-Pre-AggregateTable - Empty segment is created 
if pre-aggr table created in parallel with table load, aggregate query returns 
no data
 Key: CARBONDATA-1719
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1719
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 1.3.0
 Environment: Test - 3 node ant cluster
Reporter: Ramakrishna S
 Fix For: 1.3.0


1. Create a table
create table if not exists lineitem3(L_SHIPDATE string,L_SHIPMODE 
string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
'org.apache.carbondata.format' TBLPROPERTIES 
('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
2. Run the load query and the create pre-aggregate table query in different consoles:
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 
options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');

create datamap agr_lineitem3 ON TABLE lineitem3 USING 
"org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem3 
group by  L_RETURNFLAG, L_LINESTATUS;

3.  Check table content using aggregate query:
select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
lineitem3 group by l_returnflag, l_linestatus;

0: jdbc:hive2://10.18.98.34:23040> show segments for table lineitem2;
+--------------------+----------+--------------------------+--------------------------+------------+--+
| SegmentSequenceId  |  Status  | Load Start Time          | Load End Time            | Merged To  |
+--------------------+----------+--------------------------+--------------------------+------------+--+
| 0                  | Success  | 2017-11-15 17:56:54.554  | 2017-11-15 17:57:56.605  |            |
+--------------------+----------+--------------------------+--------------------------+------------+--+
1 row selected (0.179 seconds)
0: jdbc:hive2://10.18.98.34:23040> show segments for table lineitem1;

HDFS data:
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3_agr_lineitem3/Fact/Part0/Segment_0
17/11/15 18:15:18 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0
17/11/15 18:15:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 27 items
-rw-r--r--   2 root users  22148 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/1510740293106.carbonindexmerge
-rw-r--r--   2 root users   58353052 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58351680 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58364823 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58356303 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58342246 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58353186 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58352964 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58357183 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58345739 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-2-0_batchno0-0-1510740300247.carbondata
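
The listings above show the symptom: the pre-aggregate table's segment directory contains no data files while the main table's segment does. As an illustration only (a hypothetical helper, not CarbonData code), the empty segment can be detected by scanning `hadoop fs -ls` output for `.carbondata` files:

```python
# Illustrative sketch: flag an empty segment from `hadoop fs -ls` text output.
# Assumes the standard HDFS ls format, where file lines start with a
# permission string such as -rw-r--r-- and end with the file path.

def carbondata_files(ls_output: str):
    """Return the .carbondata file paths found in `hadoop fs -ls` output."""
    files = []
    for line in ls_output.splitlines():
        parts = line.split()
        if parts and parts[0].startswith("-") and parts[-1].endswith(".carbondata"):
            files.append(parts[-1])
    return files

# Main-table segment: contains data files (one shown here).
main_ls = """Found 27 items
-rw-r--r--   2 root users      22148 2017-11-15 18:05 /carbonstore/default/lineitem3/Fact/Part0/Segment_0/1510740293106.carbonindexmerge
-rw-r--r--   2 root users   58353052 2017-11-15 18:05 /carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno0-0-1510740300247.carbondata"""

# Pre-aggregate segment: ls printed nothing but the native-library warning.
agg_ls = "17/11/15 18:15:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library"

print(len(carbondata_files(main_ls)))  # 1
print(len(carbondata_files(agg_ls)))   # 0 -> empty segment
```

Note that the index-merge file is deliberately excluded; only `.carbondata` data files indicate that the load actually wrote rows into the segment.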

Yarn job stages:




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1719) Carbon1.3.0-Pre-AggregateTable - Empty segment is created if pre-aggr table created in parallel with table load, aggregate query returns no data

2017-11-15 Thread Ramakrishna S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna S updated CARBONDATA-1719:
--
Description: 
1. Create a table
create table if not exists lineitem3(L_SHIPDATE string,L_SHIPMODE 
string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY   string,L_LINENUMBER 
int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT  string) STORED BY 
'org.apache.carbondata.format' TBLPROPERTIES 
('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
2. Run the load query and the create pre-aggregate table query in different consoles:
load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 
options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');

create datamap agr_lineitem3 ON TABLE lineitem3 USING 
"org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem3 
group by  L_RETURNFLAG, L_LINESTATUS;

3.  Check table content using aggregate query:
select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
lineitem3 group by l_returnflag, l_linestatus;

0: jdbc:hive2://10.18.98.34:23040> show segments for table lineitem2;
+--------------------+----------+--------------------------+--------------------------+------------+--+
| SegmentSequenceId  |  Status  | Load Start Time          | Load End Time            | Merged To  |
+--------------------+----------+--------------------------+--------------------------+------------+--+
| 0                  | Success  | 2017-11-15 17:56:54.554  | 2017-11-15 17:57:56.605  |            |
+--------------------+----------+--------------------------+--------------------------+------------+--+
1 row selected (0.179 seconds)
0: jdbc:hive2://10.18.98.34:23040> show segments for table lineitem1;

HDFS data:
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3_agr_lineitem3/Fact/Part0/Segment_0
17/11/15 18:15:18 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
BLR114307:/srv/spark2.2Bigdata/install/hadoop/datanode # bin/hadoop fs -ls 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0
17/11/15 18:15:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 27 items
-rw-r--r--   2 root users  22148 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/1510740293106.carbonindexmerge
-rw-r--r--   2 root users   58353052 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58351680 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58364823 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58356303 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-0-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58342246 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58353186 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-0_batchno1-0-1510740300247.carbondata
-rw-r--r--   2 root users   58352964 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-1_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58357183 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-1-2_batchno0-0-1510740300247.carbondata
-rw-r--r--   2 root users   58345739 2017-11-15 18:05 
/carbonstore/default/lineitem3/Fact/Part0/Segment_0/part-2-0_batchno0-0-1510740300247.carbondata

Yarn job stages:
Stage 29: load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem3 options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT')
          collect at CommonUtil.scala:858 (+details) - submitted 2017/11/15 18:10:51, duration 0.1 s, tasks 1/1
Stage 28: load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
lineitem3 

[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Attachment: (was: action redirect.PNG)

> Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT 
> configured in carbon.properties is not working as expected.
> ---
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: action redirect.PNG, action redirect1.PNG
>
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When we set carbon properties as carbon.options.bad.records.action=REDIRECT , 
> csv should be created. But Csv is not created.
> Expected Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv should be created.
> Actual Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv is not  created.
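
For readers unfamiliar with the option: REDIRECT means rows that fail validation during a load are written out to a separate bad-records CSV instead of being loaded. A minimal sketch of that expected behaviour (an illustration only, not CarbonData's actual loader):

```python
# Illustrative sketch of REDIRECT semantics: rows whose numeric field fails
# to parse are redirected to a bad-records CSV rather than loaded.
import csv
import io

def load_with_redirect(rows, bad_records_csv):
    """Load rows whose second field parses as float; redirect the rest."""
    loaded = []
    writer = csv.writer(bad_records_csv)
    for row in rows:
        try:
            loaded.append((row[0], float(row[1])))
        except ValueError:
            writer.writerow(row)  # REDIRECT: preserve the raw bad row
    return loaded

bad_file = io.StringIO()
good = load_with_redirect([("a", "1.5"), ("b", "oops"), ("c", "2.0")], bad_file)
print(good)                 # [('a', 1.5), ('c', 2.0)]
print(bad_file.getvalue())  # "b,oops\r\n"
```

The reported bug is that, with the property set in carbon.properties, the redirect file corresponding to `bad_file` above is never produced.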



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Attachment: action redirect.PNG

> Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT 
> configured in carbon.properties is not working as expected.
> ---
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: action redirect.PNG, action redirect1.PNG
>
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When we set carbon properties as carbon.options.bad.records.action=REDIRECT , 
> csv should be created. But Csv is not created.
> Expected Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv should be created.
> Actual Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Attachment: (was: Bad_Record_REDIRECT.PNG)

> Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT 
> configured in carbon.properties is not working as expected.
> ---
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When we set carbon properties as carbon.options.bad.records.action=REDIRECT , 
> csv should be created. But Csv is not created.
> Expected Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv should be created.
> Actual Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1501: [CARBONDATA-1713] Fixed Aggregate query on main tabl...

2017-11-15 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1501
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/1148/



---


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Attachment: (was: Bad_Record_REDIRECT1.PNG)

> Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT 
> configured in carbon.properties is not working as expected.
> ---
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: Bad_Record_REDIRECT.PNG
>
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When we set carbon properties as carbon.options.bad.records.action=REDIRECT , 
> csv should be created. But Csv is not created.
> Expected Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv should be created.
> Actual Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Summary: Carbon 1.3.0 Bad Record 
-carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
not working as expected.  (was: Carbon 1.3.0 Bad Record 
carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
not working as expected.)

> Carbon 1.3.0 Bad Record -carbon.options.bad.records.action=REDIRECT 
> configured in carbon.properties is not working as expected.
> ---
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: Bad_Record_REDIRECT.PNG, Bad_Record_REDIRECT1.PNG
>
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When we set carbon properties as carbon.options.bad.records.action=REDIRECT , 
> csv should be created. But Csv is not created.
> Expected Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv should be created.
> Actual Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Description: 
carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
not working as expected.

When carbon.options.bad.records.action=REDIRECT is set in carbon.properties, 
the bad records should be redirected to a CSV file, but no CSV file is created.

Expected Result: with carbon.options.bad.records.action=REDIRECT set in 
carbon.properties, a CSV file containing the bad records should be created.

Actual Result: with carbon.options.bad.records.action=REDIRECT set in 
carbon.properties, no CSV file is created.





  was:
carbon.options.bad.records.action=REDIRECT is not working as expected.

When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, ideally csv 
should be created. But Csv is not created.

Expected Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
Csv should be created.

Actual Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
Csv is not  created.






> Carbon 1.3.0 Bad Record carbon.options.bad.records.action=REDIRECT configured 
> in carbon.properties is not working as expected.
> --
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: Bad_Record_REDIRECT.PNG, Bad_Record_REDIRECT1.PNG
>
>
> carbon.options.bad.records.action=REDIRECT configured in carbon.properties is 
> not working as expected.
> When we set carbon properties as carbon.options.bad.records.action=REDIRECT , 
> csv should be created. But Csv is not created.
> Expected Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv should be created.
> Actual Result: When we set carbon properties as 
> carbon.options.bad.records.action=REDIRECT , Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Description: 
carbon.options.bad.records.action=REDIRECT is not working as expected.

When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, ideally csv 
should be created. But Csv is not created.

Expected Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
Csv should be created.

Actual Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
Csv is not  created.





  was:
BAD_RECORD_ACTION = REDIRECT is not working as expected.

When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, ideally csv 
should be created. But Csv is not created.

Expected Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
Csv should be created.

Actual Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
Csv is not  created.






> Carbon 1.3.0 Bad Record carbon.options.bad.records.action=REDIRECT configured 
> in carbon.properties is not working as expected.
> --
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: Bad_Record_REDIRECT.PNG, Bad_Record_REDIRECT1.PNG
>
>
> carbon.options.bad.records.action=REDIRECT is not working as expected.
> When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, ideally csv 
> should be created. But Csv is not created.
> Expected Result: When we set carbon properties as BAD_RECORD_ACTION = 
> REDIRECT, Csv should be created.
> Actual Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
> Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0 Bad Record carbon.options.bad.records.action=REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Summary: Carbon 1.3.0 Bad Record carbon.options.bad.records.action=REDIRECT 
configured in carbon.properties is not working as expected.  (was: Carbon 
1.3.0- BAD_RECORD_ACTION = REDIRECT configured in carbon.properties is not 
working as expected.)

> Carbon 1.3.0 Bad Record carbon.options.bad.records.action=REDIRECT configured 
> in carbon.properties is not working as expected.
> --
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: Bad_Record_REDIRECT.PNG, Bad_Record_REDIRECT1.PNG
>
>
> BAD_RECORD_ACTION = REDIRECT is not working as expected.
> When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, ideally csv 
> should be created. But Csv is not created.
> Expected Result: When we set carbon properties as BAD_RECORD_ACTION = 
> REDIRECT, Csv should be created.
> Actual Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
> Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (CARBONDATA-1713) Carbon1.3.0-Pre-AggregateTable - Aggregate query on main table fails after creating pre-aggregate table

2017-11-15 Thread Ramakrishna S (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16253213#comment-16253213
 ] 

Ramakrishna S commented on CARBONDATA-1713:
---

Changing severity based on the clarification given.

> Carbon1.3.0-Pre-AggregateTable - Aggregate query on main table fails after 
> creating pre-aggregate table
> ---
>
> Key: CARBONDATA-1713
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1713
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.3.0
> Environment: ANT Test cluster - 3 node
>Reporter: Ramakrishna S
>Assignee: kumar vishal
>Priority: Minor
>  Labels: sanity
> Fix For: 1.3.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 0: jdbc:hive2://10.18.98.34:23040> load data inpath 
> "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> Error: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or 
> view 'lineitem' not found in database 'default'; (state=,code=0)
> 0: jdbc:hive2://10.18.98.34:23040> create table if not exists lineitem(
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPMODE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPINSTRUCT string,
> 0: jdbc:hive2://10.18.98.34:23040> L_RETURNFLAG string,
> 0: jdbc:hive2://10.18.98.34:23040> L_RECEIPTDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_ORDERKEY string,
> 0: jdbc:hive2://10.18.98.34:23040> L_PARTKEY string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SUPPKEY   string,
> 0: jdbc:hive2://10.18.98.34:23040> L_LINENUMBER int,
> 0: jdbc:hive2://10.18.98.34:23040> L_QUANTITY double,
> 0: jdbc:hive2://10.18.98.34:23040> L_EXTENDEDPRICE double,
> 0: jdbc:hive2://10.18.98.34:23040> L_DISCOUNT double,
> 0: jdbc:hive2://10.18.98.34:23040> L_TAX double,
> 0: jdbc:hive2://10.18.98.34:23040> L_LINESTATUS string,
> 0: jdbc:hive2://10.18.98.34:23040> L_COMMITDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_COMMENT  string
> 0: jdbc:hive2://10.18.98.34:23040> ) STORED BY 'org.apache.carbondata.format'
> 0: jdbc:hive2://10.18.98.34:23040> TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.338 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> load data inpath 
> "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (48.634 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> create datamap agr_lineitem ON TABLE 
> lineitem USING "org.apache.carbondata.datamap.AggregateDataMapHandler" as 
> select L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from 
> lineitem group by  L_RETURNFLAG, L_LINESTATUS;
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (16.552 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem 
> group by  L_RETURNFLAG, L_LINESTATUS;
> Error: org.apache.spark.sql.AnalysisException: Column doesnot exists in Pre 
> Aggregate table; (state=,code=0)
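
One plausible reading of the failure above (an assumption on my part, not a confirmed root cause from this report) is a case-sensitivity problem: the query uses upper-case column names such as L_RETURNFLAG, while the pre-aggregate table's column map would store lower-case names, so a case-sensitive lookup fails. A hypothetical sketch of the two lookup behaviours (all names below are invented for illustration):

```python
# Hypothetical illustration: a case-sensitive column lookup rejects
# L_RETURNFLAG even though l_returnflag exists in the pre-aggregate
# table's column map, whereas SQL identifiers should match case-insensitively.
preagg_columns = {
    "l_returnflag": "lineitem_l_returnflag",
    "l_linestatus": "lineitem_l_linestatus",
}

def resolve(col, columns, case_sensitive):
    """Map a query column to its pre-aggregate column, or None if unmatched."""
    if case_sensitive:
        return columns.get(col)        # misses "L_RETURNFLAG"
    return columns.get(col.lower())    # SQL-style, case-insensitive

print(resolve("L_RETURNFLAG", preagg_columns, case_sensitive=True))   # None
print(resolve("L_RETURNFLAG", preagg_columns, case_sensitive=False))  # lineitem_l_returnflag
```

Under that assumption, resolving query columns case-insensitively before matching against the pre-aggregate schema would make the upper-case query succeed.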



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1718) Carbon 1.3.0- BAD_RECORD_ACTION = REDIRECT configured in carbon.properties is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

pakanati revathi updated CARBONDATA-1718:
-
Summary: Carbon 1.3.0- BAD_RECORD_ACTION = REDIRECT configured in 
carbon.properties is not working as expected.  (was: Carbon 1.3.0- 
BAD_RECORD_ACTION = REDIRECT is not working as expected.)

> Carbon 1.3.0- BAD_RECORD_ACTION = REDIRECT configured in carbon.properties is 
> not working as expected.
> --
>
> Key: CARBONDATA-1718
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.3.0
> Environment: 3 node ant cluster
>Reporter: pakanati revathi
>Priority: Minor
> Attachments: Bad_Record_REDIRECT.PNG, Bad_Record_REDIRECT1.PNG
>
>
> BAD_RECORD_ACTION = REDIRECT is not working as expected.
> When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, ideally csv 
> should be created. But Csv is not created.
> Expected Result: When we set carbon properties as BAD_RECORD_ACTION = 
> REDIRECT, Csv should be created.
> Actual Result: When we set carbon properties as BAD_RECORD_ACTION = REDIRECT, 
> Csv is not  created.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1713) Carbon1.3.0-Pre-AggregateTable - Aggregate query on main table fails after creating pre-aggregate table

2017-11-15 Thread Ramakrishna S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna S updated CARBONDATA-1713:
--
Priority: Minor  (was: Major)

> Carbon1.3.0-Pre-AggregateTable - Aggregate query on main table fails after 
> creating pre-aggregate table
> ---
>
> Key: CARBONDATA-1713
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1713
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 1.3.0
> Environment: ANT Test cluster - 3 node
>Reporter: Ramakrishna S
>Assignee: kumar vishal
>Priority: Minor
>  Labels: sanity
> Fix For: 1.3.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 0: jdbc:hive2://10.18.98.34:23040> load data inpath 
> "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> Error: org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or 
> view 'lineitem' not found in database 'default'; (state=,code=0)
> 0: jdbc:hive2://10.18.98.34:23040> create table if not exists lineitem(
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPMODE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SHIPINSTRUCT string,
> 0: jdbc:hive2://10.18.98.34:23040> L_RETURNFLAG string,
> 0: jdbc:hive2://10.18.98.34:23040> L_RECEIPTDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_ORDERKEY string,
> 0: jdbc:hive2://10.18.98.34:23040> L_PARTKEY string,
> 0: jdbc:hive2://10.18.98.34:23040> L_SUPPKEY   string,
> 0: jdbc:hive2://10.18.98.34:23040> L_LINENUMBER int,
> 0: jdbc:hive2://10.18.98.34:23040> L_QUANTITY double,
> 0: jdbc:hive2://10.18.98.34:23040> L_EXTENDEDPRICE double,
> 0: jdbc:hive2://10.18.98.34:23040> L_DISCOUNT double,
> 0: jdbc:hive2://10.18.98.34:23040> L_TAX double,
> 0: jdbc:hive2://10.18.98.34:23040> L_LINESTATUS string,
> 0: jdbc:hive2://10.18.98.34:23040> L_COMMITDATE string,
> 0: jdbc:hive2://10.18.98.34:23040> L_COMMENT  string
> 0: jdbc:hive2://10.18.98.34:23040> ) STORED BY 'org.apache.carbondata.format'
> 0: jdbc:hive2://10.18.98.34:23040> TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.338 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> load data inpath 
> "hdfs://hacluster/user/test/lineitem.tbl.1" into table lineitem 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (48.634 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> create datamap agr_lineitem ON TABLE 
> lineitem USING "org.apache.carbondata.datamap.AggregateDataMapHandler" as 
> select L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from 
> lineitem group by  L_RETURNFLAG, L_LINESTATUS;
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (16.552 seconds)
> 0: jdbc:hive2://10.18.98.34:23040> select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem 
> group by  L_RETURNFLAG, L_LINESTATUS;
> Error: org.apache.spark.sql.AnalysisException: Column doesnot exists in Pre 
> Aggregate table; (state=,code=0)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1713) Carbon1.3.0-Pre-AggregateTable - Aggregate query on main table fails after creating pre-aggregate table

2017-11-15 Thread Ramakrishna S (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramakrishna S updated CARBONDATA-1713:
--
Priority: Major  (was: Blocker)






[GitHub] carbondata pull request #1501: [CARBONDATA-1713] Fixed Aggregate query on ma...

2017-11-15 Thread kumarvishal09
GitHub user kumarvishal09 opened a pull request:

https://github.com/apache/carbondata/pull/1501

[CARBONDATA-1713] Fixed Aggregate query on main table fails after creating 
pre-aggregate table

**Problem:** When the columns in a select query are written in upper case, 
pre-aggregate table selection fails.
**Solution:** Convert the queried column names to lower case, since the table's 
columns are stored in lower case.
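The fix described above boils down to normalizing column names before the lookup. A minimal sketch of that idea in Java (the class and method names here are hypothetical illustrations, not CarbonData's actual code):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Hypothetical sketch: a pre-aggregate table stores its column names in
// lower case, so lookups must lower-case the queried name before matching.
public class ColumnMatcher {
    private final Map<String, String> columns = new HashMap<>();

    public void addColumn(String name) {
        // Column names are stored lower-cased, as in the fix above.
        columns.put(name.toLowerCase(Locale.ROOT), name);
    }

    public boolean exists(String queriedName) {
        // Without this lower-casing, a query written as "L_RETURNFLAG"
        // would not match the stored "l_returnflag" and selection fails.
        return columns.containsKey(queriedName.toLowerCase(Locale.ROOT));
    }

    public static void main(String[] args) {
        ColumnMatcher matcher = new ColumnMatcher();
        matcher.addColumn("l_returnflag");
        System.out.println(matcher.exists("L_RETURNFLAG")); // true
        System.out.println(matcher.exists("l_quantity"));   // false
    }
}
```

Lower-casing at lookup time (rather than expecting callers to normalize) keeps the matching case-insensitive regardless of how the query was written.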
Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kumarvishal09/incubator-carbondata master_14-NOV

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1501.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1501


commit 541113a8c77172e60eef8d1621ef9286ca7e4eb9
Author: kumarvishal 
Date:   2017-11-15T09:49:05Z

Fixed CARBONDATA-1713




---


[jira] [Created] (CARBONDATA-1718) Carbon 1.3.0- BAD_RECORD_ACTION = REDIRECT is not working as expected.

2017-11-15 Thread pakanati revathi (JIRA)
pakanati revathi created CARBONDATA-1718:


 Summary: Carbon 1.3.0- BAD_RECORD_ACTION = REDIRECT is not working 
as expected.
 Key: CARBONDATA-1718
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1718
 Project: CarbonData
  Issue Type: Bug
  Components: sql
Affects Versions: 1.3.0
 Environment: 3 node ant cluster
Reporter: pakanati revathi
Priority: Minor
 Attachments: Bad_Record_REDIRECT.PNG, Bad_Record_REDIRECT1.PNG

BAD_RECORD_ACTION = REDIRECT is not working as expected.

When the carbon property BAD_RECORD_ACTION is set to REDIRECT, a CSV file 
containing the redirected bad records should be created, but it is not.

Expected Result: with BAD_RECORD_ACTION = REDIRECT, a bad-records CSV file is 
created.

Actual Result: with BAD_RECORD_ACTION = REDIRECT, no CSV file is created.
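The REDIRECT action described above means rows that fail validation should be diverted to a separate bad-records file instead of being loaded or silently dropped. A minimal sketch of that contract (hypothetical names, not CarbonData's implementation, with a length check standing in for real record validation):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of BAD_RECORD_ACTION = REDIRECT semantics:
// valid rows are loaded into the table, invalid rows are diverted to a
// bad-records list that would normally be written out as a CSV file.
public class BadRecordRedirect {
    final List<String[]> loaded = new ArrayList<>();
    final List<String[]> redirected = new ArrayList<>();

    void load(String[] row, int expectedColumns) {
        if (row.length == expectedColumns) {
            loaded.add(row);      // row is valid: goes into the table
        } else {
            redirected.add(row);  // row is bad: redirected, not dropped
        }
    }

    public static void main(String[] args) {
        BadRecordRedirect loader = new BadRecordRedirect();
        loader.load(new String[]{"a", "b"}, 2); // valid row
        loader.load(new String[]{"a"}, 2);      // bad row -> redirected
        System.out.println(loader.loaded.size());     // 1
        System.out.println(loader.redirected.size()); // 1
    }
}
```

The bug report above is that the second list is never materialized as a CSV file, even though the load completes.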









  1   2   >