[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3011/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1781/



---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
@jackylk can merge this one


---


[GitHub] carbondata issue #1806: modify default config: change the default of tempCSV...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1806
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3009/



---


[GitHub] carbondata issue #1806: modify default config: change the default of tempCSV...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1806
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1779/



---


[GitHub] carbondata issue #1806: modify default config: change the default of tempCSV...

2018-01-19 Thread qiuchenjian
Github user qiuchenjian commented on the issue:

https://github.com/apache/carbondata/pull/1806
  
retest this please


---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3013/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1778/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3008/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3012/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1777/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3007/



---


[GitHub] carbondata issue #1680: [WIP] fixing text parsing exception

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1680
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3011/



---


[jira] [Resolved] (CARBONDATA-2045) Query from segment set is not effective when pre-aggregate table is present

2018-01-19 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2045.
-
Resolution: Fixed

> Query from segment set is not effective when pre-aggregate table is present
> ---
>
> Key: CARBONDATA-2045
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2045
> Project: CarbonData
>  Issue Type: Bug
>Reporter: kumar vishal
>Assignee: kumar vishal
>Priority: Major
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> 1. Create a table
> create table if not exists lineitem1(L_SHIPDATE string,L_SHIPMODE 
> string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE 
> string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY string,L_LINENUMBER 
> int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX 
> double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT string) STORED BY 
> 'org.apache.carbondata.format' TBLPROPERTIES 
> ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> 2. Run load :
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
> lineitem1 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> 3. create pre-agg table 
> create datamap agr_lineitem3 ON TABLE lineitem3 USING 
> "org.apache.carbondata.datamap.AggregateDataMapHandler" as select 
> L_RETURNFLAG,L_LINESTATUS,sum(L_QUANTITY),sum(L_EXTENDEDPRICE) from lineitem3 
> group by L_RETURNFLAG, L_LINESTATUS;
> 3. Check table content using aggregate query:
> select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
> lineitem3 group by l_returnflag, l_linestatus;
> +------------+------------+---------------+---------------------+
> |l_returnflag|l_linestatus|sum(l_quantity)|sum(l_extendedprice) |
> +------------+------------+---------------+---------------------+
> |N           |F           |4913382.0      |7.369901176949993E9  |
> |A           |F           |1.88818373E8   |2.8310705145736383E11|
> |N           |O           |3.82400594E8   |5.734650756707479E11 |
> |R           |F           |1.88960009E8   |2.833523780876951E11 |
> +------------+------------+---------------+---------------------+
> 4 rows selected (1.568 seconds)
> 4. Load one more time:
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.1" into table 
> lineitem1 
> options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> 5. Check table content using aggregate query:
> select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
> lineitem3 group by l_returnflag, l_linestatus;
> +------------+------------+---------------+----------------------+
> |l_returnflag|l_linestatus|sum(l_quantity)|sum(l_extendedprice)  |
> +------------+------------+---------------+----------------------+
> |N           |F           |9826764.0      |1.4739802353899986E10 |
> |A           |F           |3.77636746E8   |5.662141029147278E11  |
> |N           |O           |7.64801188E8   |1.1469301513414958E12 |
> |R           |F           |3.77920018E8   |5.667047561753901E11  |
> +------------+------------+---------------+----------------------+
> 6. Set query from segment 1:
> 0: jdbc:hive2://10.18.98.48:23040> set 
> carbon.input.segments.test_db1.lilneitem1=1;
> +------------------------------------------+------+
> |key                                       |value |
> +------------------------------------------+------+
> |carbon.input.segments.test_db1.lilneitem1 |1     |
> +------------------------------------------+------+
> 7. Check table content using aggregate query:
> select l_returnflag,l_linestatus,sum(l_quantity),sum(l_extendedprice) from 
> lineitem3 group by l_returnflag, l_linestatus;
> *Expected*: It should return the values from segment 1 alone.
> *Actual*: It returns values from both segments.
> +------------+------------+---------------+----------------------+
> |l_returnflag|l_linestatus|sum(l_quantity)|sum(l_extendedprice)  |
> +------------+------------+---------------+----------------------+
> |N           |F           |9826764.0      |1.4739802353899986E10 |
> |A           |F           |3.77636746E8   |5.662141029147278E11  |
> |N           |O           |7.64801188E8   |1.1469301513414958E12 |
> |R           |F           |3.77920018E8   |5.667047561753901E11  |
> +------------+------------+---------------+----------------------+
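The routing rule implied by this report can be sketched as follows. This is an illustrative sketch only; the method and flag names are hypothetical and are not CarbonData's actual query-planning code. The idea: when input segments are pinned for the main table, the query must not be redirected to the pre-aggregate table, whose segments do not correspond to the parent's pinned segments.

```java
// Illustrative sketch only; names are hypothetical, not CarbonData's
// actual query-planning code.
public class SegmentSetSketch {

    // When the user has pinned input segments (carbon.input.segments.*),
    // the query should read the main table directly: routing it to the
    // pre-aggregate table would ignore the segment filter, which is the
    // bug described in this report.
    static boolean shouldUsePreAggregate(boolean preAggExists, boolean inputSegmentsSet) {
        return preAggExists && !inputSegmentsSet;
    }

    public static void main(String[] args) {
        System.out.println(shouldUsePreAggregate(true, true));  // prints false
        System.out.println(shouldUsePreAggregate(true, false)); // prints true
    }
}
```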



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggr...

2018-01-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1823


---


[GitHub] carbondata issue #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggregate f...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1823
  
LGTM


---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3005/



---


[GitHub] carbondata issue #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggregate f...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1823
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1776/



---


[GitHub] carbondata issue #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggregate f...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1823
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3006/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3010/



---


[jira] [Resolved] (CARBONDATA-2036) Insert overwrite on static partition cannot work properly

2018-01-19 Thread kumar vishal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kumar vishal resolved CARBONDATA-2036.
--
   Resolution: Fixed
Fix Version/s: 1.3.0

> Insert overwrite on static partition cannot work properly
> -
>
> Key: CARBONDATA-2036
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2036
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Ravindra Pesala
>Priority: Minor
> Fix For: 1.3.0
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Insert overwrite on a static partition has an issue when the value for an
> int partition column starts with 0.
> Example:
> create table test(d1 string) partition by (c1 int, c2 int, c3 int)
> And use insert overwrite table partition(01, 02, 03) select "s1"
>
> The above case has a problem because 01 is not converted to an actual
> integer before being written to the partition map file.
>  
>  
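The conversion this report asks for can be sketched as follows. This is an illustrative, hypothetical helper, not CarbonData's actual partition-handling code: the static partition value must be parsed into the column's type before being written, so that "01" and "1" map to the same partition.

```java
// Hypothetical sketch; normalizePartitionValue is not CarbonData's real API.
public class PartitionValueSketch {

    // Normalize a static partition value for an int column: "01" -> "1".
    // Writing the raw string "01" into the partition map would create a
    // partition distinct from "1", which is the bug described above.
    static String normalizePartitionValue(String rawValue) {
        return String.valueOf(Integer.parseInt(rawValue.trim()));
    }

    public static void main(String[] args) {
        System.out.println(normalizePartitionValue("01")); // prints 1
        System.out.println(normalizePartitionValue("02")); // prints 2
    }
}
```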



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #1833: [CARBONDATA-2036] Fix the insert static parti...

2018-01-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1833


---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread kumarvishal09
Github user kumarvishal09 commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
LGTM


---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1775/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1772/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3002/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
retest sdv please


---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3009/



---


[GitHub] carbondata issue #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggregate f...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1823
  
retest this please


---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
retest this please


---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1774/



---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
retest this please


---


[GitHub] carbondata issue #1795: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1795
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3008/



---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1770/



---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
retest this please


---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3000/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1771/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1769/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3001/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2999/



---


[GitHub] carbondata issue #1838: [CARBONDATA-2060]fix insert overwrite on partition t...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1838
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3007/



---


[GitHub] carbondata issue #1838: [CARBONDATA-2060]fix insert overwrite on partition t...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1838
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1768/



---


[GitHub] carbondata issue #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggregate f...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1823
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2997/



---


[GitHub] carbondata issue #1838: [CARBONDATA-2060]fix insert overwrite on partition t...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1838
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2998/



---


[GitHub] carbondata issue #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggregate f...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1823
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1767/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3006/



---


[GitHub] carbondata issue #1795: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/carbondata/pull/1795
  
@jackylk I accidentally deleted the commits of this branch while squashing 
the commit. I raised a new pull request with the same changes; please review #1839


---


[GitHub] carbondata issue #1839: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/carbondata/pull/1839
  
@jackylk I accidentally deleted my old branch, so I created this new branch 
for the same issue. Please merge this one.


---


[GitHub] carbondata pull request #1839: [CARBONDATA-2016] Exception displays while ex...

2018-01-19 Thread anubhav100
GitHub user anubhav100 opened a pull request:

https://github.com/apache/carbondata/pull/1839

[CARBONDATA-2016] Exception displays while executing compaction with alter 
query

**Root Cause**
When the alter table command is applied to add a column with a default value, 
the value is stored as a Long object. This was wrongly written in 
RestructureUtil: the value should be returned in the same type as the data 
type of the column schema. In the master branch, if the data type was long, 
short, or int, RestructureUtil always returned a Long object, which is wrong. 
For the same reason, compaction failed when applied after an alter table add 
columns command with a default value, because in SortDataRows there was a 
mismatch between the data type and its corresponding value.
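The fix can be sketched as follows. This is an illustrative, hypothetical conversion helper (the class, enum, and method names are assumptions, not the actual RestructureUtil code): the parsed default value is boxed in the column's declared data type instead of always being returned as a Long.

```java
// Illustrative sketch only; names are hypothetical, not CarbonData's
// actual RestructureUtil API.
public class DefaultValueSketch {

    enum ColumnType { SHORT, INT, LONG }

    // Box the parsed default value in the column's declared type.
    // (The bug: a Long was returned for SHORT/INT columns as well, which
    // later mismatched the declared data type during sort/compaction.)
    static Object convertDefaultValue(ColumnType type, String defaultValue) {
        long parsed = Long.parseLong(defaultValue.trim());
        switch (type) {
            case SHORT: return (short) parsed; // boxed as Short
            case INT:   return (int) parsed;   // boxed as Integer
            default:    return parsed;         // boxed as Long
        }
    }

    public static void main(String[] args) {
        // prints Integer, Short, Long: the boxed type now follows the schema
        System.out.println(convertDefaultValue(ColumnType.INT, "42").getClass().getSimpleName());
        System.out.println(convertDefaultValue(ColumnType.SHORT, "7").getClass().getSimpleName());
        System.out.println(convertDefaultValue(ColumnType.LONG, "7").getClass().getSimpleName());
    }
}
```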

**Testing:**
1. mvn clean install passes.
2. Added a new test case for the same.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anubhav100/incubator-carbondata 
bugfix/CARBONDATA-2016

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1839.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1839


commit eee0a2325b410c9d1f9e18b40002446b067ed581
Author: anubhav100 
Date:   2018-01-19T16:01:14Z

When we apply the alter table command to add column with default value it 
is storing it as long object,it is wrongly written in restructure util we 
should get the value as the same type




---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1765/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2995/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2996/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1766/



---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread manishgupta88
Github user manishgupta88 commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
retest this please


---


[GitHub] carbondata issue #1823: [CARBONDATA-2045][PreAggregate]Fixed Pre Aggregate f...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1823
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3005/



---


[GitHub] carbondata pull request #1795: [CARBONDATA-2016] Exception displays while ex...

2018-01-19 Thread anubhav100
Github user anubhav100 closed the pull request at:

https://github.com/apache/carbondata/pull/1795


---


[GitHub] carbondata pull request #1838: [CARBONDATA-2060]fix insert overwrite on part...

2018-01-19 Thread akashrn5
GitHub user akashrn5 opened a pull request:

https://github.com/apache/carbondata/pull/1838

[CARBONDATA-2060]fix insert overwrite on partition table

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

Problem:
When insert overwrite is run on a partition table from a table with empty 
data, the partition table is not overwritten.

Solution:
When insert overwrite is fired on a partition table from an empty table, it 
should create a new empty segment and delete the old segments.
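The intended semantics can be sketched as follows. The names here are hypothetical, not CarbonData's real segment-management API: the point is that an overwrite must invalidate the old segments even when the incoming data is empty.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the overwrite rule described above; this is not
// CarbonData's actual segment API.
public class OverwriteSketch {
    final List<String> validSegments = new ArrayList<>();

    // A normal load adds a segment.
    void load(String segmentId) {
        validSegments.add(segmentId);
    }

    // Insert overwrite: old segments are marked deleted unconditionally,
    // so overwriting from an empty source leaves the table empty instead
    // of silently keeping the old data (the bug being fixed).
    void insertOverwrite(String newSegmentId, long rowCount) {
        validSegments.clear();                // delete old segments
        if (rowCount > 0) {
            validSegments.add(newSegmentId);  // register the new segment
        }
    }

    public static void main(String[] args) {
        OverwriteSketch table = new OverwriteSketch();
        table.load("0");
        table.load("1");
        table.insertOverwrite("2", 0);        // overwrite from an empty source
        System.out.println(table.validSegments.size()); // prints 0
    }
}
```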

 - [X] Any interfaces changed?
NA
 
 - [X] Any backward compatibility impacted?
NA
 
 - [X] Document update required?
NA

 - [X] Testing done
Unit tests cases are added to test the scenario
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [x] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 
NA



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/akashrn5/incubator-carbondata 
partition_overwrite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1838.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1838


commit 56e3054ade07bfd39e7ec1050fe2962166d4ad23
Author: akashrn5 
Date:   2018-01-19T14:57:05Z

fix insert overwrite on partition table




---


[GitHub] carbondata issue #1795: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/1795
  
please squash and push again


---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread manishgupta88
Github user manishgupta88 commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
retest sdv please


---


[GitHub] carbondata issue #1795: [CARBONDATA-2016] Exception displays while executing...

2018-01-19 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/1795
  
LGTM


---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3004/



---


[jira] [Created] (CARBONDATA-2060) Fix InsertOverwrite on partition table

2018-01-19 Thread Akash R Nilugal (JIRA)
Akash R Nilugal created CARBONDATA-2060:
---

 Summary: Fix InsertOverwrite on partition table
 Key: CARBONDATA-2060
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2060
 Project: CarbonData
  Issue Type: Bug
Reporter: Akash R Nilugal
Assignee: Akash R Nilugal


When a partition table is overwritten from an empty table, the partition 
table is not overwritten; and when insert overwrite is done on a dynamic 
partition table, the overwrite also does not happen.

 

sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
sql("insert into partitionLoadTable select 'abc',4,'def'")
sql("insert into partitionLoadTable select 'abd',5,'xyz'")
sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
sql("insert overwrite table partitionLoadTable select * from noLoadTable")

When we do select * after the insert overwrite operation, it should ideally 
return empty data, but it returns all the data.

 

sql("CREATE TABLE uniqdata_hive_static (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ','")
sql("CREATE TABLE uniqdata_string_static(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB')")
sql(s"LOAD DATA INPATH '$resourcesPath/partData.csv' into table uniqdata_string_static OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE')")
sql(s"LOAD DATA INPATH '$resourcesPath/partData.csv' into table uniqdata_string_static OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE')")

sql("insert overwrite table uniqdata_string_static select CUST_ID, CUST_NAME,DOB,doj, bigint_column1, bigint_column2, decimal_column1, decimal_column2,double_column1, double_column2,integer_column1,active_emui_version from uniqdata_hive_static limit 10")

 

After this, select * still returns rows; ideally it should return an empty 
result.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1763/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1764/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2993/



---


[GitHub] carbondata issue #1837: [WIP] Refactored code segregated process meta and pr...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1837
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2994/



---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3003/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-01-19 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
retest this please


---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3002/



---


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When delete succeeds but the update fails while writing the status file, a 
stale carbon data file is created, so that file must be removed on clean-up 
and must not be considered during query.

When an update operation is running and the user stops it abruptly, the 
carbon data file remains in the store, so extra data appears in results.

During the next update, the clean-up of these files needs to be handled, and 
in query the new data file should also be excluded.

 

  was:
When the delete succeeds but the update fails while writing the status file, a
stale carbon data file is created. That file should be removed during clean-up
and should not be considered during query.

When an update operation is running and the user aborts it abruptly, the
carbon data file remains in the store, so extra data appears in query results.

The next update needs to clean up these files, and queries should also exclude
the new data file.

 

test("overwrite whole partition table with empty data") {
  sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
  sql("insert into partitionLoadTable select 'abc',4,'def'")
  sql("insert into partitionLoadTable select 'abd',5,'xyz'")
  sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
  sql("insert overwrite table partitionLoadTable select * from noLoadTable")
  checkAnswer(sql("select * from partitionLoadTable"), sql("select * from noLoadTable"))
}


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created. That file should be removed during clean-up
> and should not be considered during query.
> When an update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update needs to clean up these files, and queries should also exclude
> the new data file.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a
stale carbon data file is created. That file should be removed during clean-up
and should not be considered during query.

When an update operation is running and the user aborts it abruptly, the
carbon data file remains in the store, so extra data appears in query results.

The next update needs to clean up these files, and queries should also exclude
the new data file.

 

test("overwrite whole partition table with empty data") {
  sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
  sql("insert into partitionLoadTable select 'abc',4,'def'")
  sql("insert into partitionLoadTable select 'abd',5,'xyz'")
  sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
  sql("insert overwrite table partitionLoadTable select * from noLoadTable")
  checkAnswer(sql("select * from partitionLoadTable"), sql("select * from noLoadTable"))
}

  was:
When the delete succeeds but the update fails while writing the status file, a
stale carbon data file is created. That file should be removed during clean-up
and should not be considered during query.

When an update operation is running and the user aborts it abruptly, the
carbon data file remains in the store, so extra data appears in query results.

The next update needs to clean up these files, and queries should also exclude
the new data file.


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created. That file should be removed during clean-up
> and should not be considered during query.
> When an update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update needs to clean up these files, and queries should also exclude
> the new data file.
>  
> test("overwrite whole partition table with empty data") {
>   sql("create table partitionLoadTable(name string, age int) PARTITIONED BY(address string) stored by 'carbondata'")
>   sql("insert into partitionLoadTable select 'abc',4,'def'")
>   sql("insert into partitionLoadTable select 'abd',5,'xyz'")
>   sql("create table noLoadTable (name string, age int, address string) stored by 'carbondata'")
>   sql("insert overwrite table partitionLoadTable select * from noLoadTable")
>   checkAnswer(sql("select * from partitionLoadTable"), sql("select * from noLoadTable"))
> }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #1837: [WIP] Refactored code segregated process meta...

2018-01-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1837#discussion_r162627861
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonInsertIntoCommand.scala
 ---
@@ -45,10 +47,10 @@ case class CarbonInsertIntoCommand(
   updateModel = None,
   tableInfoOp = None,
   internalOptions = Map.empty,
-  partition = partition).run(sparkSession)
-// updating relation metadata. This is in case of auto detect high 
cardinality
-relation.carbonRelation.metaData =
-  CarbonSparkUtil.createSparkMeta(relation.carbonRelation.carbonTable)
-load
+  partition = partition)
+loadCommand.processMetadata(sparkSession)
+  }
+  override def processData(sparkSession: SparkSession): Seq[Row] = {
+loadCommand.processData(sparkSession)
--- End diff --

add `if (loadCommand != null)` check
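The suggested guard could be sketched as follows. This is only a sketch of the review suggestion against the `processData` override shown in the diff above; it assumes the surrounding CarbonInsertIntoCommand class (with its `loadCommand` field) and the Spark `SparkSession`/`Row` types, so it is not compilable on its own:

```scala
// Sketch of the suggested null check (assumes the surrounding
// CarbonInsertIntoCommand class from the diff; not standalone code).
override def processData(sparkSession: SparkSession): Seq[Row] = {
  // processMetadata may fail before loadCommand is initialized,
  // so guard against a null loadCommand before delegating.
  if (loadCommand != null) {
    loadCommand.processData(sparkSession)
  } else {
    Seq.empty
  }
}
```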


---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1761/



---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2991/



---


[GitHub] carbondata issue #1751: [CARBONDATA-1971][Blocklet Prunning] Measure Null va...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1751
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2992/



---


[GitHub] carbondata issue #1751: [CARBONDATA-1971][Blocklet Prunning] Measure Null va...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1751
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1762/



---


[GitHub] carbondata pull request #1837: [WIP] Refactored code segregated process meta...

2018-01-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1837#discussion_r162626990
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/indexstore/BlockletDataMapIndexStore.java
 ---
@@ -136,7 +136,8 @@ public BlockletDataMap 
get(TableBlockIndexUniqueIdentifier identifier)
 partitionFileStore.readAllPartitionsOfSegment(carbonFiles, 
segmentPath);
 partitionFileStoreMap.put(identifier.getSegmentId(), 
partitionFileStore);
 for (CarbonFile file : carbonFiles) {
-  locationMap.put(file.getAbsolutePath(), file.getLocations());
+  locationMap
+  
.put(FileFactory.getUpdatedFilePath(file.getAbsolutePath()), 
file.getLocations());
--- End diff --

move .put to previous line


---


[GitHub] carbondata pull request #1837: [WIP] Refactored code segregated process meta...

2018-01-19 Thread kumarvishal09
GitHub user kumarvishal09 opened a pull request:

https://github.com/apache/carbondata/pull/1837

[WIP] Refactored code segregated process meta and process data in load 
command

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [ ] Any interfaces changed?
 
 - [ ] Any backward compatibility impacted?
 
 - [ ] Document update required?

 - [ ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kumarvishal09/incubator-carbondata 
CodeRefactor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1837.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1837


commit f8e6fc8b036a5fde537d2db135d8860fd2c50c95
Author: kumarvishal 
Date:   2018-01-19T11:52:28Z

Refactor code segregated load from metadata




---


[GitHub] carbondata pull request #1821: [HOTFIX] Listeners not getting registered to ...

2018-01-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1821


---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
LGTM


---


[jira] [Resolved] (CARBONDATA-2001) Unable to save a dataframe result as carbondata streaming table

2018-01-19 Thread Jacky Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacky Li resolved CARBONDATA-2001.
--
Resolution: Fixed

> Unable to save a dataframe result as carbondata streaming table
> ---
>
> Key: CARBONDATA-2001
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2001
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration
>Affects Versions: 1.3.0
> Environment: spark-2.1
>Reporter: anubhav tarar
>Assignee: anubhav tarar
>Priority: Trivial
> Fix For: 1.3.0
>
>  Time Spent: 12h 40m
>  Remaining Estimate: 0h
>
> 1. Create a CarbonSession:
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.CarbonSession._
> val carbon = SparkSession.builder().config(sc.getConf)
>   .getOrCreateCarbonSession("hdfs://localhost:54311/newCarbonStore", "/tmp")
> 2. Create a DataFrame with the CarbonSession:
> import carbon.sqlContext.implicits._
> carbon.sql("drop table if exists streamingtable")
> val df = carbon.sparkContext.parallelize(1 to 5).toDF("colId")
> 3. Register the DataFrame as a carbon streaming table:
> df.write.format("carbondata").option("tableName","streamingTable").option("streaming","true").mode(SaveMode.Overwrite).save
> 4. Run describe formatted on the table:
> carbon.sql("describe formatted streamingTable").show(100)
> ++++
> |col_name|   data_type| comment|
> ++++
> |colid...|int  ...|MEASURE,null ...|
> | ...| ...| ...|
> |##Detailed Table ...| ...| ...|
> |Database Name...|default  ...| ...|
> |Table Name   ...|streamingtable   ...| ...|
> |CARBON Store Path...|hdfs://localhost:...| ...|
> |Comment  ...| ...| ...|
> |Table Block Size ...|1024 MB  ...| ...|
> |Table Data Size  ...|316  ...| ...|
> |Table Index Size ...|283  ...| ...|
> |Last Update Time ...|1515393447642...| ...|
> |SORT_SCOPE   ...|LOCAL_SORT   ...|LOCAL_SORT   ...|
> |Streaming...|false...| ...|
> |SORT_SCOPE   ...|LOCAL_SORT   ...|LOCAL_SORT   ...|
> | ...| ...| ...|
> |##Detailed Column...| ...| ...|
> |ADAPTIVE ...| ...| ...|
> |SORT_COLUMNS ...| ...| ...|
> ++++
> Here the streaming property is false; it should be true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #1774: [CARBONDATA-2001] Unable to Save DataFrame As Carbon...

2018-01-19 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/1774
  
LGTM


---


[jira] [Resolved] (CARBONDATA-2046) agg Query failed when non supported aggregate is present in Query

2018-01-19 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2046.
-
   Resolution: Fixed
Fix Version/s: 1.3.0

> agg Query failed when non supported aggregate is present in Query
> -
>
> Key: CARBONDATA-2046
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2046
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Babulal
>Assignee: Babulal
>Priority: Major
> Fix For: 1.3.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Run the query below, where var_samp is an aggregate that is not supported by the pre-aggregate table:
> spark.sql(
>  s"""create datamap preagg_sum on table tbl_1 using 'preaggregate' as select 
> mac,avg(age) from tbl_1 group by mac"""
>  .stripMargin)
> spark.sql("select var_samp(mac) from tbl_1 where mac='Mikaa1' ").explain()
> Exception :-
> Exception in thread "main" org.apache.spark.sql.AnalysisException: resolved 
> attribute(s) mac#2 missing from 
> tbl_1_mac#56,tbl_1_age_sum#57L,tbl_1_age_count#58L in operator !Aggregate 
> [var_samp(cast(mac#2 as double)) AS var_samp(CAST(mac AS DOUBLE))#59];;
> !Aggregate [var_samp(cast(mac#2 as double)) AS var_samp(CAST(mac AS 
> DOUBLE))#59]
> +- Filter (tbl_1_mac#56 = Mikaa1)
>+- Relation[tbl_1_mac#56,tbl_1_age_sum#57L,tbl_1_age_count#58L] 
> CarbonDatasourceHadoopRelation [ Database name :default, Table name 
> :tbl_1_preagg_sum, Schema 
> :Some(StructType(StructField(tbl_1_mac,StringType,true), 
> StructField(tbl_1_age_sum,LongType,true), 
> StructField(tbl_1_age_count,LongType,true))) ]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #1824: [CARBONDATA-2046]agg Query failed when non su...

2018-01-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1824


---


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a
stale carbon data file is created. That file should be removed during clean-up
and should not be considered during query.

When an update operation is running and the user aborts it abruptly, the
carbon data file remains in the store, so extra data appears in query results.

The next update needs to clean up these files, and queries should also exclude
the new data file.

  was:
When the delete succeeds but the update fails while writing the status file, a
stale carbon data file is created. That file should be removed during clean-up
and should not be considered during query.

When an update operation is running and the user aborts it abruptly, the
carbon data file remains in the store, so extra data appears in query results.

The next update needs to clean up these files, and queries should also exclude
the new data file.

 

CREATE TABLE uniqdata_string(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ 
timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 
decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, 
Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION 
string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
('TABLE_BLOCKSIZE'= '256 MB');

 

 LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
uniqdata_string partition(active_emui_version='abc') 
OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');

 

CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

 

insert overwrite table uniqdata_string partition(active_emui_version='xxx') 
select CUST_ID, CUST_NAME,DOB,doj, bigint_column1, bigint_column2, 
decimal_column1, decimal_column2,double_column1, double_column2,integer_column1 
from uniqdata_hive limit 10;

 9000,CUST_NAME_0,ACTIVE_EMUI_VERSION_0,1970-01-01 01:00:03,1970-01-01 
02:00:03,123372036854,-223372036854,12345678901.123400,22345678901.123400,11234567489.797600,-11234567489.797600,1


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created. That file should be removed during clean-up
> and should not be considered during query.
> When an update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update needs to clean up these files, and queries should also exclude
> the new data file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2021) when delete is success and update is failed while writing status file then a stale carbon data file is created.

2018-01-19 Thread Akash R Nilugal (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal updated CARBONDATA-2021:

Description: 
When the delete succeeds but the update fails while writing the status file, a
stale carbon data file is created. That file should be removed during clean-up
and should not be considered during query.

When an update operation is running and the user aborts it abruptly, the
carbon data file remains in the store, so extra data appears in query results.

The next update needs to clean up these files, and queries should also exclude
the new data file.

 

CREATE TABLE uniqdata_string(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ 
timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 
decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, 
Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION 
string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
('TABLE_BLOCKSIZE'= '256 MB');

 

 LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
uniqdata_string partition(active_emui_version='abc') 
OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');

 

CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

 

insert overwrite table uniqdata_string partition(active_emui_version='xxx') 
select CUST_ID, CUST_NAME,DOB,doj, bigint_column1, bigint_column2, 
decimal_column1, decimal_column2,double_column1, double_column2,integer_column1 
from uniqdata_hive limit 10;

 9000,CUST_NAME_0,ACTIVE_EMUI_VERSION_0,1970-01-01 01:00:03,1970-01-01 
02:00:03,123372036854,-223372036854,12345678901.123400,22345678901.123400,11234567489.797600,-11234567489.797600,1

  was:
When the delete succeeds but the update fails while writing the status file, a
stale carbon data file is created. That file should be removed during clean-up
and should not be considered during query.

When an update operation is running and the user aborts it abruptly, the
carbon data file remains in the store, so extra data appears in query results.

The next update needs to clean up these files, and queries should also exclude
the new data file.

 


> when delete is success and update is failed while writing status file  then a 
> stale carbon data file is created.
> 
>
> Key: CARBONDATA-2021
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2021
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> When the delete succeeds but the update fails while writing the status file, a
> stale carbon data file is created. That file should be removed during clean-up
> and should not be considered during query.
> When an update operation is running and the user aborts it abruptly, the
> carbon data file remains in the store, so extra data appears in query results.
> The next update needs to clean up these files, and queries should also exclude
> the new data file.
>  
> CREATE TABLE uniqdata_string(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ 
> timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 
> decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, 
> Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION 
> string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ('TABLE_BLOCKSIZE'= '256 MB');
>  
>  LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_string partition(active_emui_version='abc') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
>  
> CREATE TABLE uniqdata_hive (CUST_ID int,CUST_NAME String,ACTIVE_EMUI_VERSION 
> string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
>  
> insert overwrite table uniqdata_string 

[GitHub] carbondata pull request #1774: [CARBONDATA-2001] Unable to Save DataFrame As...

2018-01-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1774


---


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3001/



---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3000/



---


[jira] [Resolved] (CARBONDATA-2058) Streaming throw NullPointerException after batch loading

2018-01-19 Thread Jacky Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacky Li resolved CARBONDATA-2058.
--
   Resolution: Fixed
Fix Version/s: 1.3.0

> Streaming throw NullPointerException after batch loading
> 
>
> Key: CARBONDATA-2058
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2058
> Project: CarbonData
>  Issue Type: Bug
>Reporter: QiangCai
>Priority: Critical
> Fix For: 1.3.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Driver stacktrace:
> at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1478)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1466)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1465)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1465)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:813)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:813)
> at scala.Option.foreach(Option.scala:257)
> at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:813)
> at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1693)
> at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1648)
> at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1637)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:639)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1949)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1962)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1982)
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1.apply$mcV$sp(CarbonAppendableStreamSink.scala:197)
> ... 20 more
> Caused by: org.apache.carbondata.streaming.CarbonStreamException: Task failed 
> while writing rows
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileTask(CarbonAppendableStreamSink.scala:295)
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:199)
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1$$anonfun$apply$mcV$sp$1.apply(CarbonAppendableStreamSink.scala:198)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> at org.apache.spark.scheduler.Task.run(Task.scala:99)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.carbondata.hadoop.streaming.CarbonStreamRecordWriter.appendBlockletToDataFile(CarbonStreamRecordWriter.java:287)
> at 
> org.apache.carbondata.hadoop.streaming.CarbonStreamRecordWriter.close(CarbonStreamRecordWriter.java:300)
> at 
> org.apache.carbondata.streaming.segment.StreamSegment.appendBatchData(StreamSegment.java:276)
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply$mcV$sp(CarbonAppendableStreamSink.scala:286)
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply(CarbonAppendableStreamSink.scala:276)
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply(CarbonAppendableStreamSink.scala:276)
> at 
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1388)
> at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileTask(CarbonAppendableStreamSink.scala:288)
> ... 8 more



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #1836: [CARBONDATA-2058] Block append data to streaming seg...

2018-01-19 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/1836
  
LGTM


---


[GitHub] carbondata pull request #1836: [CARBONDATA-2058] Block append data to stream...

2018-01-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1836


---


[jira] [Assigned] (CARBONDATA-2059) Compaction support for complex type

2018-01-19 Thread Ashwini K (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwini K reassigned CARBONDATA-2059:
-

Assignee: Ashwini K

> Compaction support for complex type 
> 
>
> Key: CARBONDATA-2059
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2059
> Project: CarbonData
>  Issue Type: Sub-task
>Reporter: Ashwini K
>Assignee: Ashwini K
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1759/



---


[jira] [Created] (CARBONDATA-2059) Compaction support for complex type

2018-01-19 Thread Ashwini K (JIRA)
Ashwini K created CARBONDATA-2059:
-

 Summary: Compaction support for complex type 
 Key: CARBONDATA-2059
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2059
 Project: CarbonData
  Issue Type: Sub-task
Reporter: Ashwini K






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #1833: [CARBONDATA-2036] Fix the insert static partition wi...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1833
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2989/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1760/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2990/



---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2988/



---


[GitHub] carbondata issue #1821: [HOTFIX] Listeners not getting registered to the bus...

2018-01-19 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1821
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/2999/



---

