Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Materialized View Core #2155

2019-12-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 1.13 MB...]
2019-12-29 08:22:23 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:23 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:23 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:23 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:23 AUDIT audit:72 - {"time":"December 29, 2019 12:22:23 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"7732293119096939","opStatus":"START"}
2019-12-29 08:22:26 AUDIT audit:93 - {"time":"December 29, 2019 12:22:26 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"7732293119096939","opStatus":"SUCCESS","opTime":"2564 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 08:22:26 AUDIT audit:93 - {"time":"December 29, 2019 12:22:26 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"7732291744880784","opStatus":"SUCCESS","opTime":"3947 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 08:22:26 AUDIT audit:72 - {"time":"December 29, 2019 12:22:26 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"7732295695819218","opStatus":"START"}
2019-12-29 08:22:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:27 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:27 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:27 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:27 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:27 AUDIT audit:72 - {"time":"December 29, 2019 12:22:27 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"7732296694985180","opStatus":"START"}
2019-12-29 08:22:29 AUDIT audit:93 - {"time":"December 29, 2019 12:22:29 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"7732296694985180","opStatus":"SUCCESS","opTime":"1991 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 08:22:29 AUDIT audit:93 - {"time":"December 29, 2019 12:22:29 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"7732295695819218","opStatus":"SUCCESS","opTime":"2994 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 08:22:29 AUDIT audit:72 - {"time":"December 29, 2019 12:22:29 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"7732298696493752","opStatus":"START"}
2019-12-29 08:22:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:30 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:30 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:30 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:30 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:30 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:22:30 AUDIT audit:72 - {"time":"December 29, 2019 12:22:30 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"7732299894833986","opStatus":"START"}
2019-12-29 08:22:33 AUDIT audit:93 - {"time":"December 29, 2019 12:22:33 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"7732299894833986","opStatus":"SUCCESS","opTime":"2625 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 08:22:33 AUDIT audit:93 - {"time":"D

Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 #2155

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #2155

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.2 #2155

2019-12-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 2.89 MB...]
2019-12-29 08:23:31 AUDIT audit:93 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"7732360032308736","opStatus":"SUCCESS","opTime":"356 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p3"}}
2019-12-29 08:23:31 AUDIT audit:72 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"7732360392017045","opStatus":"START"}
2019-12-29 08:23:31 AUDIT audit:72 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732360418430245","opStatus":"START"}
2019-12-29 08:23:31 AUDIT audit:93 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732360418430245","opStatus":"SUCCESS","opTime":"57 
ms","table":"partition_mv.p4_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p4"}}
2019-12-29 08:23:31 AUDIT audit:93 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"7732360392017045","opStatus":"SUCCESS","opTime":"271 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p4"}}
2019-12-29 08:23:31 AUDIT audit:72 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"7732360666480473","opStatus":"START"}
2019-12-29 08:23:31 AUDIT audit:72 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732360694812150","opStatus":"START"}
2019-12-29 08:23:31 AUDIT audit:93 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732360694812150","opStatus":"SUCCESS","opTime":"80 
ms","table":"partition_mv.p5_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p5"}}
2019-12-29 08:23:31 AUDIT audit:93 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"7732360666480473","opStatus":"SUCCESS","opTime":"307 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p5"}}
2019-12-29 08:23:31 AUDIT audit:72 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"7732360976924706","opStatus":"START"}
2019-12-29 08:23:31 AUDIT audit:72 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732361005994538","opStatus":"START"}
2019-12-29 08:23:31 AUDIT audit:93 - {"time":"December 29, 2019 12:23:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732361005994538","opStatus":"SUCCESS","opTime":"145 
ms","table":"partition_mv.p6_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p6"}}
2019-12-29 08:23:32 AUDIT audit:93 - {"time":"December 29, 2019 12:23:32 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"7732360976924706","opStatus":"SUCCESS","opTime":"400 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p6"}}
2019-12-29 08:23:32 AUDIT audit:72 - {"time":"December 29, 2019 12:23:32 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"7732361385729409","opStatus":"START"}
2019-12-29 08:23:33 AUDIT audit:93 - {"time":"December 29, 2019 12:23:33 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"7732361385729409","opStatus":"SUCCESS","opTime":"1265 
ms","table":"partition_mv.partitionone","extraInfo":{}}
- check partitioning for child tables with various combinations
2019-12-29 08:23:33 AUDIT audit:72 - {"time":"December 29, 2019 12:23:33 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732362661390827","opStatus":"START"}
2019-12-29 08:23:33 AUDIT audit:93 - {"time":"December 29, 2019 12:23:33 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"7732362661390827","opStatus":"SUCCESS","opTime":"56 
ms","table":"partition_mv.partitionone","extraInfo":{"bad_record_path":"","local_dictionary_enable":"true","external":"false","sort_columns":"","comment":""}}
2019

Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #2155

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Materialized View Core #3934

2019-12-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 1.14 MB...]
2019-12-29 08:36:26 AUDIT audit:72 - {"time":"December 29, 2019 12:36:26 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10795700027393628","opStatus":"START"}
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typeprotocol
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typeprotocol
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typenetwork
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typeconfigManagement
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typesecurity
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typesecurity
2019-12-29 08:36:28 ERROR DataTypeUtil:385 - Problem while converting data 
typenetwork
2019-12-29 08:36:28 AUDIT audit:93 - {"time":"December 29, 2019 12:36:28 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10795700027393628","opStatus":"SUCCESS","opTime":"1996 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 08:36:28 AUDIT audit:93 - {"time":"December 29, 2019 12:36:28 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10795699057020181","opStatus":"SUCCESS","opTime":"2973 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 08:36:28 AUDIT audit:72 - {"time":"December 29, 2019 12:36:28 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10795702037385172","opStatus":"START"}
2019-12-29 08:36:28 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:29 AUDIT audit:72 - {"time":"December 29, 2019 12:36:29 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10795703159068857","opStatus":"START"}
2019-12-29 08:36:31 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 08:36:31 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 08:36:31 AUDIT audit:93 - {"time":"December 29, 2019 12:36:31 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10795703159068857","opStatus":"SUCCESS","opTime":"1891 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 08:36:31 AUDIT audit:93 - {"time":"December 29, 2019 12:36:31 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10795702037385172","opStatus":"SUCCESS","opTime":"3018 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 08:36:31 AUDIT audit:72 - {"time":"December 29, 2019 12:36:31 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10795705059132210","opStatus":"START"}
2019-12-29 08:36:31 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:31 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 08:36:32 AUDIT audit:72 - 

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3934

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3934

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3934

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.1 #3934

2019-12-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 2.81 MB...]
2019-12-29 08:37:10 AUDIT audit:93 - {"time":"December 29, 2019 12:37:10 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10795744348927964","opStatus":"SUCCESS","opTime":"44 
ms","table":"partition_mv.p3_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p3"}}
2019-12-29 08:37:11 AUDIT audit:93 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10795744326305570","opStatus":"SUCCESS","opTime":"225 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p3"}}
2019-12-29 08:37:11 AUDIT audit:72 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10795744555644559","opStatus":"START"}
2019-12-29 08:37:11 AUDIT audit:72 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10795744578723886","opStatus":"START"}
2019-12-29 08:37:11 AUDIT audit:93 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10795744578723886","opStatus":"SUCCESS","opTime":"42 
ms","table":"partition_mv.p4_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p4"}}
2019-12-29 08:37:11 AUDIT audit:93 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10795744555644559","opStatus":"SUCCESS","opTime":"229 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p4"}}
2019-12-29 08:37:11 AUDIT audit:72 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10795744789083645","opStatus":"START"}
2019-12-29 08:37:11 AUDIT audit:72 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10795744811645594","opStatus":"START"}
2019-12-29 08:37:11 AUDIT audit:93 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10795744811645594","opStatus":"SUCCESS","opTime":"42 
ms","table":"partition_mv.p5_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p5"}}
2019-12-29 08:37:11 AUDIT audit:93 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10795744789083645","opStatus":"SUCCESS","opTime":"223 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p5"}}
2019-12-29 08:37:11 AUDIT audit:72 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10795745061061713","opStatus":"START"}
2019-12-29 08:37:11 AUDIT audit:72 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10795745092384939","opStatus":"START"}
2019-12-29 08:37:11 AUDIT audit:93 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10795745092384939","opStatus":"SUCCESS","opTime":"43 
ms","table":"partition_mv.p6_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p6"}}
2019-12-29 08:37:11 AUDIT audit:93 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10795745061061713","opStatus":"SUCCESS","opTime":"239 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p6"}}
2019-12-29 08:37:11 AUDIT audit:72 - {"time":"December 29, 2019 12:37:11 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"10795745307292282","opStatus":"START"}
2019-12-29 08:37:12 AUDIT audit:93 - {"time":"December 29, 2019 12:37:12 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"10795745307292282","opStatus":"SUCCESS","opTime":"490 
ms","table":"partition_mv.partitionone","extraInfo":{}}
- check partitioning for chi

[carbondata] branch master updated: [CARBONDATA-3629] Fix Select query failure on aggregation of same column on MV

2019-12-29 Thread jackylk
This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 2b222d4  [CARBONDATA-3629] Fix Select query failure on aggregation of 
same column on MV
2b222d4 is described below

commit 2b222d4d279e7f0c53c062b0740d4aed3904a960
Author: Indhumathi27 
AuthorDate: Tue Dec 24 13:15:02 2019 +0530

[CARBONDATA-3629] Fix Select query failure on aggregation of same column on 
MV

Problem:
If an MV datamap is created with SELECT a, sum(a) FROM maintable GROUP BY a and
the main table is later queried with SELECT sum(a) FROM maintable, the query fails
because the rewritten plan's output list does not match the datamap table.

Solution:
Check whether an aggregation exists in the GroupBy node and, if so, copy the
Select node along with its aliasMap
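
For illustration, a minimal sketch of the failing scenario in the style of the MV
test suites (the table, column, and datamap names here are hypothetical, not taken
from this patch):

  // hypothetical repro of the failure described above
  sql("CREATE TABLE maintable(a INT, b STRING) STORED AS carbondata")
  sql("CREATE DATAMAP dm USING 'mv' AS SELECT a, sum(a) FROM maintable GROUP BY a")
  sql("INSERT INTO maintable SELECT 1, 'x'")
  // before this fix, rewriting the query below against the MV could fail because
  // the rewritten plan's output list did not match the datamap table
  sql("SELECT sum(a) FROM maintable").show()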

This closes #3530
---
 .../org/apache/carbondata/mv/datamap/MVUtil.scala  | 39 ++---
 .../carbondata/mv/rewrite/DefaultMatchMaker.scala  | 51 --
 .../mv/rewrite/TestAllOperationsOnMV.scala | 29 
 .../TestMVTimeSeriesCreateDataMapCommand.scala | 13 +-
 docs/datamap/mv-datamap-guide.md   | 11 -
 .../command/timeseries/TimeSeriesUtil.scala|  4 +-
 6 files changed, 122 insertions(+), 25 deletions(-)

diff --git 
a/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/datamap/MVUtil.scala 
b/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/datamap/MVUtil.scala
index fe76cc3..f3e8091 100644
--- 
a/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/datamap/MVUtil.scala
+++ 
b/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/datamap/MVUtil.scala
@@ -49,23 +49,37 @@ class MVUtil {
   case select: Select =>
 select.children.map {
   case groupBy: GroupBy =>
-getFieldsFromProject(groupBy.outputList, groupBy.predicateList, 
logicalRelation)
+getFieldsFromProject(groupBy.outputList, groupBy.predicateList,
+  logicalRelation, groupBy.flagSpec)
   case _: ModularRelation =>
-getFieldsFromProject(select.outputList, select.predicateList, 
logicalRelation)
+getFieldsFromProject(select.outputList, select.predicateList,
+  logicalRelation, select.flagSpec)
 }.head
   case groupBy: GroupBy =>
 groupBy.child match {
   case select: Select =>
-getFieldsFromProject(groupBy.outputList, select.predicateList, 
logicalRelation)
+getFieldsFromProject(groupBy.outputList, select.predicateList,
+  logicalRelation, select.flagSpec)
   case _: ModularRelation =>
-getFieldsFromProject(groupBy.outputList, groupBy.predicateList, 
logicalRelation)
+getFieldsFromProject(groupBy.outputList, groupBy.predicateList,
+  logicalRelation, groupBy.flagSpec)
 }
 }
   }
 
+  /**
+   * Create's main table to datamap table field relation map by using modular 
plan generated from
+   * user query
+   * @param outputList of the modular plan
+   * @param predicateList of the modular plan
+   * @param logicalRelation list of main table from query
+   * @param flagSpec to get SortOrder attribute if exists
+   * @return fieldRelationMap
+   */
   def getFieldsFromProject(outputList: Seq[NamedExpression],
   predicateList: Seq[Expression],
-  logicalRelation: Seq[LogicalRelation]): mutable.LinkedHashMap[Field, 
DataMapField] = {
+  logicalRelation: Seq[LogicalRelation],
+  flagSpec: Seq[Seq[Any]]): mutable.LinkedHashMap[Field, DataMapField] = {
 var fieldToDataMapFieldMap = 
scala.collection.mutable.LinkedHashMap.empty[Field, DataMapField]
fieldToDataMapFieldMap ++= getFieldsFromProject(outputList, logicalRelation)
 var finalPredicateList: Seq[NamedExpression] = Seq.empty
@@ -75,6 +89,21 @@ class MVUtil {
   finalPredicateList = finalPredicateList.:+(attr)
   }
 }
+// collect sort by columns
+if (flagSpec.nonEmpty) {
+  flagSpec.map { f =>
+f.map {
+  case list: ArrayBuffer[_] =>
+list.map {
+  case s: SortOrder =>
+s.collect {
+  case attr: AttributeReference =>
+finalPredicateList = finalPredicateList.:+(attr)
+}
+}
+}
+  }
+}
fieldToDataMapFieldMap ++= getFieldsFromProject(finalPredicateList.distinct, logicalRelation)
 fieldToDataMapFieldMap
   }
diff --git 
a/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/rewrite/DefaultMatchMaker.scala
 
b/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/rewrite/DefaultMatchMaker.scala
index 616d0bd..7e8eb96 100644
--- 
a/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/rewrite/DefaultMatchMaker.scala
+++ 
b/datamap/mv/core/src/main/scala/org/apache/carbondata/mv/rewrite/DefaultMatchMaker.sc

[carbondata] branch master updated: [CARBONDATA-3628] Support alter hive table add array and map type column

2019-12-29 Thread jackylk
This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new bd54ce8  [CARBONDATA-3628] Support alter hive table add array and map 
type column
bd54ce8 is described below

commit bd54ce83d819d8350b1a594c3007ac747d5485dc
Author: IceMimosa 
AuthorDate: Tue Dec 24 11:07:30 2019 +0800

[CARBONDATA-3628] Support alter hive table add array and map type column

Support adding array and map data type columns to Hive tables via ALTER TABLE

This closes #3529
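
For illustration, a minimal usage sketch of what this enables on Spark 2.2 and
above (the table and column names here are hypothetical, not taken from this patch):

  // hypothetical Hive table; ALTER TABLE ADD COLUMNS now also accepts map and array types
  sql("CREATE TABLE hive_demo(name STRING) STORED AS parquet")
  sql("ALTER TABLE hive_demo ADD COLUMNS (props map<string, string>)")
  sql("ALTER TABLE hive_demo ADD COLUMNS (tags array<string>)")
  sql("INSERT INTO hive_demo SELECT 'abc', map('k', 'v'), array('a', 'b')")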
---
 .../cluster/sdv/generated/AlterTableTestCase.scala | 26 +-
 .../cluster/sdv/generated/SDKwriterTestCase.scala  |  3 +--
 .../apache/spark/sql/common/util/QueryTest.scala   | 10 -
 .../carbondata/spark/rdd/CarbonMergerRDD.scala | 11 -
 .../spark/sql/execution/strategy/DDLStrategy.scala |  7 +++---
 5 files changed, 39 insertions(+), 18 deletions(-)

diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
index 297ff04..cc34df5 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
@@ -1022,7 +1022,31 @@ class AlterTableTestCase extends QueryTest with 
BeforeAndAfterAll {
   assert(exception.getMessage.contains("Unsupported alter operation on 
hive table"))
 } else if (SparkUtil.isSparkVersionXandAbove("2.2")) {
   sql("alter table alter_hive add columns(add string)")
-  sql("insert into alter_hive select 'abc','banglore'")
+  sql("alter table alter_hive add columns (var map)")
+  sql("insert into alter_hive select 
'abc','banglore',map('age','10','birth','2020')")
+  checkAnswer(
+sql("select * from alter_hive"),
+Seq(Row("abc", "banglore", Map("age" -> "10", "birth" -> "2020")))
+  )
+}
+  }
+
+  test("Alter table add column for hive partitioned table for spark version 
above 2.1") {
+sql("drop table if exists alter_hive")
+sql("create table alter_hive(name string) stored as rcfile partitioned by 
(dt string)")
+if (SparkUtil.isSparkVersionXandAbove("2.2")) {
+  sql("alter table alter_hive add columns(add string)")
+  sql("alter table alter_hive add columns (var map)")
+  sql("alter table alter_hive add columns (loves array)")
+  sql(
+s"""
+   |insert into alter_hive partition(dt='par')
+   |select 'abc', 'banglore', map('age', '10', 'birth', '2020'), 
array('a', 'b', 'c')
+ """.stripMargin)
+  checkAnswer(
+sql("select * from alter_hive where dt='par'"),
+Seq(Row("abc", "banglore", Map("age" -> "10", "birth" -> "2020"), 
Seq("a", "b", "c"), "par"))
+  )
 }
   }
 
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SDKwriterTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SDKwriterTestCase.scala
index d6a9413..82541b2 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SDKwriterTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SDKwriterTestCase.scala
@@ -146,8 +146,7 @@ class SDKwriterTestCase extends QueryTest with 
BeforeAndAfterEach {
   }
 
   def deleteFile(path: String, extension: String): Unit = {
-val file: CarbonFile = FileFactory
-  .getCarbonFile(path, FileFactory.getFileType(path))
+val file: CarbonFile = FileFactory.getCarbonFile(path)
 
 for (eachDir <- file.listFiles) {
   if (!eachDir.isDirectory) {
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
index 9d4fe79..eca20ed 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
@@ -88,14 +88,13 @@ class QueryTest extends PlanTest with Suite {
 
   protected def checkAnswer(carbon: String, hive: String, uniqueIdentifier: 
String): Unit = {
 val path = TestQueryExecutor.hiveresultpath + "/" + uniqueIdentifier
-if (FileFactory.isFileExist(path, FileFactory.getFileType(path))) {
-  val objinp = new ObjectInputStream(FileFactory
-.getDataInputStream(path, FileFactory.getFileType(path)))
+if 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Materialized View Core #2156

2019-12-29 Thread Apache Jenkins Server
See 


Changes:

[jacky.likun] [CARBONDATA-3629] Fix Select query failure on aggregation of same 
column


--
[...truncated 1.15 MB...]
2019-12-29 12:14:17 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:17 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:17 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:17 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:18 AUDIT audit:72 - {"time":"December 29, 2019 4:14:18 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5307707169463614","opStatus":"START"}
2019-12-29 12:14:21 AUDIT audit:93 - {"time":"December 29, 2019 4:14:21 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5307707169463614","opStatus":"SUCCESS","opTime":"3173 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 12:14:21 AUDIT audit:93 - {"time":"December 29, 2019 4:14:21 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5307705820329484","opStatus":"SUCCESS","opTime":"4531 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 12:14:21 AUDIT audit:72 - {"time":"December 29, 2019 4:14:21 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5307710354375071","opStatus":"START"}
2019-12-29 12:14:21 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:21 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:21 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:21 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:22 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:22 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:22 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:22 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:22 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:22 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:22 AUDIT audit:72 - {"time":"December 29, 2019 4:14:22 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5307711608779808","opStatus":"START"}
2019-12-29 12:14:25 AUDIT audit:93 - {"time":"December 29, 2019 4:14:25 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5307711608779808","opStatus":"SUCCESS","opTime":"3044 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 12:14:25 AUDIT audit:93 - {"time":"December 29, 2019 4:14:25 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5307710354375071","opStatus":"SUCCESS","opTime":"4309 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 12:14:25 AUDIT audit:72 - {"time":"December 29, 2019 4:14:25 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5307714670750868","opStatus":"START"}
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:26 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 12:14:27 AUDIT audit:72 - {"time":"December 29, 2019 4:14:27 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5307716023748268","opStatus":"START"}
2019-12-29 12:14:30 AUDIT audit:93 - {"time":"December 29, 2019 4:14:30 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5307716023748268","opStatus":"SUCCESS","opTime":"2946 
ms","table":"

Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 #2156

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #2156

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #2156

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.2 #2156

2019-12-29 Thread Apache Jenkins Server
See 


Changes:

[jacky.likun] [CARBONDATA-3629] Fix Select query failure on aggregation of same 
column


--
[...truncated 2.94 MB...]
2019-12-29 12:15:30 AUDIT audit:93 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5307778879917823","opStatus":"SUCCESS","opTime":"425 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p3"}}
2019-12-29 12:15:30 AUDIT audit:72 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5307779308790709","opStatus":"START"}
2019-12-29 12:15:30 AUDIT audit:72 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307779343310498","opStatus":"START"}
2019-12-29 12:15:30 AUDIT audit:93 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307779343310498","opStatus":"SUCCESS","opTime":"53 
ms","table":"partition_mv.p4_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p4"}}
2019-12-29 12:15:30 AUDIT audit:93 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5307779308790709","opStatus":"SUCCESS","opTime":"382 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p4"}}
2019-12-29 12:15:30 AUDIT audit:72 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5307779696107908","opStatus":"START"}
2019-12-29 12:15:30 AUDIT audit:72 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307779739383845","opStatus":"START"}
2019-12-29 12:15:30 AUDIT audit:93 - {"time":"December 29, 2019 4:15:30 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307779739383845","opStatus":"SUCCESS","opTime":"58 
ms","table":"partition_mv.p5_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p5"}}
2019-12-29 12:15:31 AUDIT audit:93 - {"time":"December 29, 2019 4:15:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5307779696107908","opStatus":"SUCCESS","opTime":"394 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p5"}}
2019-12-29 12:15:31 AUDIT audit:72 - {"time":"December 29, 2019 4:15:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5307780096496104","opStatus":"START"}
2019-12-29 12:15:31 AUDIT audit:72 - {"time":"December 29, 2019 4:15:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307780144873087","opStatus":"START"}
2019-12-29 12:15:31 AUDIT audit:93 - {"time":"December 29, 2019 4:15:31 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307780144873087","opStatus":"SUCCESS","opTime":"60 
ms","table":"partition_mv.p6_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p6"}}
2019-12-29 12:15:31 AUDIT audit:93 - {"time":"December 29, 2019 4:15:31 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5307780096496104","opStatus":"SUCCESS","opTime":"404 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p6"}}
2019-12-29 12:15:31 AUDIT audit:72 - {"time":"December 29, 2019 4:15:31 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"5307780512329955","opStatus":"START"}
2019-12-29 12:15:32 AUDIT audit:93 - {"time":"December 29, 2019 4:15:32 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"5307780512329955","opStatus":"SUCCESS","opTime":"1163 
ms","table":"partition_mv.partitionone","extraInfo":{}}
- check partitioning for child tables with various combinations
2019-12-29 12:15:32 AUDIT audit:72 - {"time":"December 29, 2019 4:15:32 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307781685569749","opStatus":"START"}
2019-12-29 12:15:32 AUDIT audit:93 - {"time":"December 29, 2019 4:15:32 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5307781685569749","opStatus":"SUCCESS","opTime":"53 
ms","table":"partition_mv.partitionone","extraInfo":{"bad_record_path":"","loca

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Materialized View Core #3935

2019-12-29 Thread Apache Jenkins Server
See 


Changes:

[jacky.likun] [CARBONDATA-3629] Fix Select query failure on aggregation of same 
column


--
[...truncated 1.15 MB...]
2019-12-29 13:00:30 AUDIT audit:72 - {"time":"December 29, 2019 5:00:30 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5310438009425728","opStatus":"START"}
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typeprotocol
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typeprotocol
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typenetwork
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typeconfigManagement
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typesecurity
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typesecurity
2019-12-29 13:00:33 ERROR DataTypeUtil:385 - Problem while converting data 
typenetwork
2019-12-29 13:00:33 AUDIT audit:93 - {"time":"December 29, 2019 5:00:33 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5310438009425728","opStatus":"SUCCESS","opTime":"2864 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 13:00:33 AUDIT audit:93 - {"time":"December 29, 2019 5:00:33 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5310436754880509","opStatus":"SUCCESS","opTime":"4130 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 13:00:33 AUDIT audit:72 - {"time":"December 29, 2019 5:00:33 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5310440893462200","opStatus":"START"}
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:34 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:35 AUDIT audit:72 - {"time":"December 29, 2019 5:00:35 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5310442197526932","opStatus":"START"}
2019-12-29 13:00:37 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:00:37 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:00:38 AUDIT audit:93 - {"time":"December 29, 2019 5:00:38 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5310442197526932","opStatus":"SUCCESS","opTime":"3045 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 13:00:38 AUDIT audit:93 - {"time":"December 29, 2019 5:00:38 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5310440893462200","opStatus":"SUCCESS","opTime":"4360 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 13:00:38 AUDIT audit:72 - {"time":"December 29, 2019 5:00:38 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5310445258704174","opStatus":"START"}
2019-12-29 13:00:38 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:38 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:38 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:38 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:38 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:38 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:38 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:39 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:00:39 ERROR GlobalSortHelper$:38 - Data Load is par

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3935

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3935

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.1 #3935

2019-12-29 Thread Apache Jenkins Server
See 


Changes:

[jacky.likun] [CARBONDATA-3629] Fix Select query failure on aggregation of same 
column


--
[...truncated 2.81 MB...]
2019-12-29 13:01:41 AUDIT audit:72 - {"time":"December 29, 2019 5:01:41 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5310509030310244","opStatus":"START"}
2019-12-29 13:01:42 AUDIT audit:72 - {"time":"December 29, 2019 5:01:42 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5310509074549774","opStatus":"START"}
2019-12-29 13:01:42 AUDIT audit:93 - {"time":"December 29, 2019 5:01:42 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5310509074549774","opStatus":"SUCCESS","opTime":"63 
ms","table":"partition_mv.p6_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p6"}}
2019-12-29 13:01:42 AUDIT audit:93 - {"time":"December 29, 2019 5:01:42 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5310509030310244","opStatus":"SUCCESS","opTime":"375 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p6"}}
2019-12-29 13:01:42 AUDIT audit:72 - {"time":"December 29, 2019 5:01:42 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"5310509417219407","opStatus":"START"}
2019-12-29 13:01:43 AUDIT audit:93 - {"time":"December 29, 2019 5:01:43 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"5310509417219407","opStatus":"SUCCESS","opTime":"1081 
ms","table":"partition_mv.partitionone","extraInfo":{}}
- check partitioning for child tables with various combinations
2019-12-29 13:01:43 AUDIT audit:72 - {"time":"December 29, 2019 5:01:43 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5310510508218635","opStatus":"START"}
2019-12-29 13:01:43 AUDIT audit:93 - {"time":"December 29, 2019 5:01:43 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5310510508218635","opStatus":"SUCCESS","opTime":"71 
ms","table":"partition_mv.partitionone","extraInfo":{"bad_record_path":"","local_dictionary_enable":"true","external":"false","sort_columns":"","comment":""}}
2019-12-29 13:01:43 AUDIT audit:72 - {"time":"December 29, 2019 5:01:43 AM 
PST","username":"jenkins","opName":"INSERT 
INTO","opId":"5310510711533883","opStatus":"START"}
2019-12-29 13:01:44 AUDIT audit:93 - {"time":"December 29, 2019 5:01:44 AM 
PST","username":"jenkins","opName":"INSERT 
INTO","opId":"5310510711533883","opStatus":"SUCCESS","opTime":"592 
ms","table":"partition_mv.partitionone","extraInfo":{}}
2019-12-29 13:01:44 AUDIT audit:72 - {"time":"December 29, 2019 5:01:44 AM 
PST","username":"jenkins","opName":"DROP 
DATAMAP","opId":"5310511306635870","opStatus":"START"}
2019-12-29 13:01:44 AUDIT audit:93 - {"time":"December 29, 2019 5:01:44 AM 
PST","username":"jenkins","opName":"DROP 
DATAMAP","opId":"5310511306635870","opStatus":"SUCCESS","opTime":"5 
ms","table":"NA","extraInfo":{"dmName":"dm1"}}
2019-12-29 13:01:44 AUDIT audit:72 - {"time":"December 29, 2019 5:01:44 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5310511315936840","opStatus":"START"}
2019-12-29 13:01:44 AUDIT audit:72 - {"time":"December 29, 2019 5:01:44 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5310511346001155","opStatus":"START"}
2019-12-29 13:01:44 AUDIT audit:93 - {"time":"December 29, 2019 5:01:44 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5310511346001155","opStatus":"SUCCESS","opTime":"67 
ms","table":"partition_mv.dm1_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"dm1"}}
2019-12-29 13:01:44 AUDIT audit:72 - {"time":"December 29, 2019 5:01:44 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5310511663173728","opStatus":"START"}
2019-12-29 13:01:47 AUDIT audit:93 - {"time":"December 29, 2019 5:01:47 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5310511663173728","opStatus":"SUCCESS","opTime":"2479 
ms","table":"partition_mv.dm1_table","extraInfo":{}}
2019-12-29 13:01:47 AUDIT audit:93 - {"time":"December 29, 2019 5:01:47 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5310511315936840","opStatus":"SUCCESS","opTime":"2840 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"dm1"}}
2019-12-29 13:01:47 AUDIT audit:72 - {"time":"December 29, 2019 5:01:47 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"5

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3935

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Materialized View Core #3936

2019-12-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 1.16 MB...]
2019-12-29 13:39:49 AUDIT audit:72 - {"time":"December 29, 2019 5:39:49 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10813902830593354","opStatus":"START"}
2019-12-29 13:39:50 ERROR DataTypeUtil:385 - Problem while converting data 
typeprotocol
2019-12-29 13:39:50 ERROR DataTypeUtil:385 - Problem while converting data 
typeprotocol
2019-12-29 13:39:50 ERROR DataTypeUtil:385 - Problem while converting data 
typenetwork
2019-12-29 13:39:50 ERROR DataTypeUtil:385 - Problem while converting data 
typeconfigManagement
2019-12-29 13:39:51 ERROR DataTypeUtil:385 - Problem while converting data 
typesecurity
2019-12-29 13:39:51 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:39:51 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:39:51 ERROR DataTypeUtil:385 - Problem while converting data 
typesecurity
2019-12-29 13:39:51 ERROR DataTypeUtil:385 - Problem while converting data 
typenetwork
2019-12-29 13:39:51 AUDIT audit:93 - {"time":"December 29, 2019 5:39:51 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10813902830593354","opStatus":"SUCCESS","opTime":"1993 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 13:39:51 AUDIT audit:93 - {"time":"December 29, 2019 5:39:51 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10813901747453526","opStatus":"SUCCESS","opTime":"3082 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 13:39:51 AUDIT audit:72 - {"time":"December 29, 2019 5:39:51 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10813904836224374","opStatus":"START"}
2019-12-29 13:39:51 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:51 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:51 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:51 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:52 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:52 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:52 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:52 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:52 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:52 AUDIT audit:72 - {"time":"December 29, 2019 5:39:52 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10813905895976285","opStatus":"START"}
2019-12-29 13:39:54 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:39:54 ERROR DataTypeUtil:385 - Problem while converting data 
typeLearning
2019-12-29 13:39:54 AUDIT audit:93 - {"time":"December 29, 2019 5:39:54 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"10813905895976285","opStatus":"SUCCESS","opTime":"1992 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 13:39:54 AUDIT audit:93 - {"time":"December 29, 2019 5:39:54 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10813904836224374","opStatus":"SUCCESS","opTime":"3059 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 13:39:54 AUDIT audit:72 - {"time":"December 29, 2019 5:39:54 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"10813907898371538","opStatus":"START"}
2019-12-29 13:39:54 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:54 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:54 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:54 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:55 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:55 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:55 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:55 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:55 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:39:55 AUDIT audit:72 - {"time":

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3936

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3936

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3936

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.1 #3936

2019-12-29 Thread Apache Jenkins Server
See 


Changes:

[jacky.likun] [CARBONDATA-3628] Support alter hive table add array and map type 
column


--
[...truncated 2.82 MB...]
2019-12-29 13:40:33 AUDIT audit:93 - {"time":"December 29, 2019 5:40:33 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10813947065464098","opStatus":"SUCCESS","opTime":"80 
ms","table":"partition_mv.p3_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p3"}}
2019-12-29 13:40:33 AUDIT audit:93 - {"time":"December 29, 2019 5:40:33 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10813947045903560","opStatus":"SUCCESS","opTime":"233 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p3"}}
2019-12-29 13:40:33 AUDIT audit:72 - {"time":"December 29, 2019 5:40:33 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10813947282692425","opStatus":"START"}
2019-12-29 13:40:33 AUDIT audit:72 - {"time":"December 29, 2019 5:40:33 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10813947302425424","opStatus":"START"}
2019-12-29 13:40:33 AUDIT audit:93 - {"time":"December 29, 2019 5:40:33 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10813947302425424","opStatus":"SUCCESS","opTime":"35 
ms","table":"partition_mv.p4_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p4"}}
2019-12-29 13:40:34 AUDIT audit:93 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10813947282692425","opStatus":"SUCCESS","opTime":"190 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p4"}}
2019-12-29 13:40:34 AUDIT audit:72 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10813947475134267","opStatus":"START"}
2019-12-29 13:40:34 AUDIT audit:72 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10813947505905257","opStatus":"START"}
2019-12-29 13:40:34 AUDIT audit:93 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10813947505905257","opStatus":"SUCCESS","opTime":"40 
ms","table":"partition_mv.p5_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p5"}}
2019-12-29 13:40:34 AUDIT audit:93 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10813947475134267","opStatus":"SUCCESS","opTime":"213 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p5"}}
2019-12-29 13:40:34 AUDIT audit:72 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10813947692506219","opStatus":"START"}
2019-12-29 13:40:34 AUDIT audit:72 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10813947715913309","opStatus":"START"}
2019-12-29 13:40:34 AUDIT audit:93 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"10813947715913309","opStatus":"SUCCESS","opTime":"42 
ms","table":"partition_mv.p6_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p6"}}
2019-12-29 13:40:34 AUDIT audit:93 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"10813947692506219","opStatus":"SUCCESS","opTime":"231 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p6"}}
2019-12-29 13:40:34 AUDIT audit:72 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"10813947930811094","opStatus":"START"}
2019-12-29 13:40:34 AUDIT audit:93 - {"time":"December 29, 2019 5:40:34 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"10813947930811094","opStatus":"SUCCESS","opTime":"484 
ms"

Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 #2157

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build became unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Processing #2157

2019-12-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Materialized View Core #2157

2019-12-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 1.15 MB...]
2019-12-29 13:52:07 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:07 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:07 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:07 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:08 AUDIT audit:72 - {"time":"December 29, 2019 5:52:08 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5313577216219898","opStatus":"START"}
2019-12-29 13:52:11 AUDIT audit:93 - {"time":"December 29, 2019 5:52:11 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5313577216219898","opStatus":"SUCCESS","opTime":"3297 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 13:52:11 AUDIT audit:93 - {"time":"December 29, 2019 5:52:11 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5313575715770732","opStatus":"SUCCESS","opTime":"4805 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 13:52:11 AUDIT audit:72 - {"time":"December 29, 2019 5:52:11 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5313580524689566","opStatus":"START"}
2019-12-29 13:52:11 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:12 AUDIT audit:72 - {"time":"December 29, 2019 5:52:12 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5313581688957219","opStatus":"START"}
2019-12-29 13:52:15 AUDIT audit:93 - {"time":"December 29, 2019 5:52:15 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5313581688957219","opStatus":"SUCCESS","opTime":"3229 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 13:52:15 AUDIT audit:93 - {"time":"December 29, 2019 5:52:15 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5313580524689566","opStatus":"SUCCESS","opTime":"4402 
ms","table":"partition_mv.partitionallcompaction","extraInfo":{}}
2019-12-29 13:52:15 AUDIT audit:72 - {"time":"December 29, 2019 5:52:15 AM 
PST","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"5313584935072957","opStatus":"START"}
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:16 ERROR GlobalSortHelper$:38 - Data Load is partially success 
for table partitionallcompaction
2019-12-29 13:52:17 AUDIT audit:72 - {"time":"December 29, 2019 5:52:17 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5313586265622799","opStatus":"START"}
2019-12-29 13:52:20 AUDIT audit:93 - {"time":"December 29, 2019 5:52:20 AM 
PST","username":"jenkins","opName":"LOAD 
DATA","opId":"5313586265622799","opStatus":"SUCCESS","opTime":"2992 
ms","table":"partition_mv.sensor_1_table","extraInfo":{}}
2019-12-29 13:52:20 AUDIT audit:93 - {"time":"December 29

Build failed in Jenkins: carbondata-master-spark-2.2 #2157

2019-12-29 Thread Apache Jenkins Server
See 


Changes:

[jacky.likun] [CARBONDATA-3628] Support alter hive table add array and map type 
column


--
[...truncated 2.94 MB...]
2019-12-29 13:53:24 AUDIT audit:93 - {"time":"December 29, 2019 5:53:24 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5313653211731765","opStatus":"SUCCESS","opTime":"433 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p3"}}
2019-12-29 13:53:24 AUDIT audit:72 - {"time":"December 29, 2019 5:53:24 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5313653650647209","opStatus":"START"}
2019-12-29 13:53:24 AUDIT audit:72 - {"time":"December 29, 2019 5:53:24 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313653692209153","opStatus":"START"}
2019-12-29 13:53:24 AUDIT audit:93 - {"time":"December 29, 2019 5:53:24 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313653692209153","opStatus":"SUCCESS","opTime":"58 
ms","table":"partition_mv.p4_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p4"}}
2019-12-29 13:53:25 AUDIT audit:93 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5313653650647209","opStatus":"SUCCESS","opTime":"417 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p4"}}
2019-12-29 13:53:25 AUDIT audit:72 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5313654074085135","opStatus":"START"}
2019-12-29 13:53:25 AUDIT audit:72 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313654120832501","opStatus":"START"}
2019-12-29 13:53:25 AUDIT audit:93 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313654120832501","opStatus":"SUCCESS","opTime":"61 
ms","table":"partition_mv.p5_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p5"}}
2019-12-29 13:53:25 AUDIT audit:93 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5313654074085135","opStatus":"SUCCESS","opTime":"429 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p5"}}
2019-12-29 13:53:25 AUDIT audit:72 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5313654508575298","opStatus":"START"}
2019-12-29 13:53:25 AUDIT audit:72 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313654558386193","opStatus":"START"}
2019-12-29 13:53:25 AUDIT audit:93 - {"time":"December 29, 2019 5:53:25 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313654558386193","opStatus":"SUCCESS","opTime":"67 
ms","table":"partition_mv.p6_table","extraInfo":{"local_dictionary_threshold":"1","bad_record_path":"","table_blocksize":"1024","local_dictionary_enable":"true","flat_folder":"false","external":"false","parent_tables":"partitionone","sort_columns":"","comment":"","_internal.deferred.rebuild":"false","carbon.column.compressor":"snappy","datamap_name":"p6"}}
2019-12-29 13:53:26 AUDIT audit:93 - {"time":"December 29, 2019 5:53:26 AM 
PST","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"5313654508575298","opStatus":"SUCCESS","opTime":"965 
ms","table":"partition_mv.partitionone","extraInfo":{"provider":"mv","dmName":"p6"}}
2019-12-29 13:53:26 AUDIT audit:72 - {"time":"December 29, 2019 5:53:26 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"5313655484971049","opStatus":"START"}
2019-12-29 13:53:27 AUDIT audit:93 - {"time":"December 29, 2019 5:53:27 AM 
PST","username":"jenkins","opName":"DROP 
TABLE","opId":"5313655484971049","opStatus":"SUCCESS","opTime":"1113 
ms","table":"partition_mv.partitionone","extraInfo":{}}
- check partitioning for child tables with various combinations
2019-12-29 13:53:27 AUDIT audit:72 - {"time":"December 29, 2019 5:53:27 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313656611809813","opStatus":"START"}
2019-12-29 13:53:27 AUDIT audit:93 - {"time":"December 29, 2019 5:53:27 AM 
PST","username":"jenkins","opName":"CREATE 
TABLE","opId":"5313656611809813","opStatus":"SUCCESS","opTime":"45 
ms","table":"partition_mv.partitionone","extraInfo":{"bad_record_path":"","loca

Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #2157

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #2157

2019-12-29 Thread Apache Jenkins Server
See 




[carbondata] branch master updated: Revert "wip"

2019-12-29 Thread jackylk
This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 3b85e9f  Revert "wip"
3b85e9f is described below

commit 3b85e9f1c6c4d13da80b8fb6094461d9a1f404eb
Author: Jacky Li 
AuthorDate: Mon Dec 30 09:36:00 2019 +0800

Revert "wip"

This reverts commit 32bd37fe082daa413ae7a80c9bcde7e859a5df67.
---
 .../carbondata/mv/rewrite/TestAllOperationsOnMV.scala |  2 +-
 .../complexType/TestCreateTableWithDouble.scala   |  3 ++-
 .../createTable/TestCreateTableAsSelect.scala | 19 +++
 .../spark/sql/parser/CarbonSparkSqlParserUtil.scala   |  6 --
 4 files changed, 22 insertions(+), 8 deletions(-)

diff --git 
a/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/TestAllOperationsOnMV.scala
 
b/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/TestAllOperationsOnMV.scala
index 1750ce7..19170c5 100644
--- 
a/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/TestAllOperationsOnMV.scala
+++ 
b/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/TestAllOperationsOnMV.scala
@@ -392,7 +392,7 @@ class TestAllOperationsOnMV extends QueryTest with 
BeforeAndAfterEach {
 sql("insert into table maintable select 'abc',21,2000")
 sql("drop datamap if exists dm ")
 intercept[MalformedCarbonCommandException] {
-  sql("create datamap dm using 'mv' dmproperties('sort_columns'='name') as 
select name from maintable")
+  sql("create datamap dm using 'mv' 
dmproperties('dictionary_include'='name', 'sort_columns'='name') as select name 
from maintable")
 }.getMessage.contains("DMProperties dictionary_include,sort_columns are 
not allowed for this datamap")
   }
 
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/complexType/TestCreateTableWithDouble.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/complexType/TestCreateTableWithDouble.scala
index e46e3ba..f08aa20 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/complexType/TestCreateTableWithDouble.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/complexType/TestCreateTableWithDouble.scala
@@ -64,7 +64,8 @@ class TestCreateTableWithDouble extends QueryTest with 
BeforeAndAfterAll {
 try {
   sql("CREATE TABLE doubleComplex2 (Id int, number double, name string, " +
 "gamePoint array, mac struct) " +
-"STORED BY 'org.apache.carbondata.format' ")
+"STORED BY 'org.apache.carbondata.format' " +
+"TBLPROPERTIES('DICTIONARY_INCLUDE'='number,gamePoint,mac')")
   sql(s"LOAD DATA LOCAL INPATH '$dataPath' INTO TABLE doubleComplex2")
   countNum = sql(s"SELECT COUNT(*) FROM doubleComplex2").collect
   doubleField = sql(s"SELECT number FROM doubleComplex2 SORT BY 
Id").collect
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateTableAsSelect.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateTableAsSelect.scala
index 7591cd0..8e4d8fa 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateTableAsSelect.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateTableAsSelect.scala
@@ -145,6 +145,25 @@ class TestCreateTableAsSelect extends QueryTest with 
BeforeAndAfterAll {
 checkAnswer(sql("select * from ctas_select_direct_data"), Seq(Row(300, 
"carbondata")))
   }
 
+  test("test create table as select with TBLPROPERTIES") {
+sql("DROP TABLE IF EXISTS ctas_tblproperties_testt")
+sql(
+  "create table ctas_tblproperties_testt stored by 'carbondata' 
TBLPROPERTIES" +
+"('DICTIONARY_INCLUDE'='key', 'sort_scope'='global_sort') as select * 
from carbon_ctas_test")
+checkAnswer(sql("select * from ctas_tblproperties_testt"), sql("select * 
from carbon_ctas_test"))
+val carbonTable = 
CarbonEnv.getInstance(Spark2TestQueryExecutor.spark).carbonMetaStore
+  .lookupRelation(Option("default"), 
"ctas_tblproperties_testt")(Spark2TestQueryExecutor.spark)
+  .asInstanceOf[CarbonRelation].carbonTable
+val metadataFolderPath: CarbonFile = 
FileFactory.getCarbonFile(carbonTable.getMetadataPath)
+assert(metadataFolderPath.exists())
+val dictFiles: Array[CarbonFile] = metadataFolderPath.listFiles(new 
CarbonFileFilter {
+  override def accept(file: CarbonFile): Boolean = {
+file.getName.contains(".dict") || file.getName.contains(".sortindex")
+  }
+})
+assert(dictFiles.length == 3)
+  }
+
   test("test crea

Jenkins build is back to stable : carbondata-master-spark-2.2 » Apache CarbonData :: Processing #2158

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 #2158

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #2158

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #2158

2019-12-29 Thread Apache Jenkins Server
See 




[carbondata] branch master updated: [CARBONDATA-3640][CARBONDATA-3557] Support flink ingest carbon partition table

2019-12-29 Thread jackylk
This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new b0bdab2   [CARBONDATA-3640][CARBONDATA-3557] Support flink ingest 
carbon partition table
b0bdab2 is described below

commit b0bdab2597dd658eceaea0b87672c76e06eaf340
Author: liuzhi <371684...@qq.com>
AuthorDate: Mon Dec 30 10:05:52 2019 +0800

 [CARBONDATA-3640][CARBONDATA-3557] Support flink ingest carbon partition 
table

 Add support for the Flink carbon sink to write partitioned carbondata files
 as stage files.
 Add support for the INSERT STAGE command to load stage files into the
 CarbonData table.

 This closes #3542
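
 For context, a minimal usage sketch of the loading side (an assumption-based
 illustration, not code from this commit): it presumes a Spark session that
 already has the CarbonData extensions enabled, uses an illustrative table name
 and schema, and omits the OPTIONS clause of the INSERT STAGE command.

    import org.apache.spark.sql.SparkSession

    // Sketch only: once the Flink carbon sink has committed stage files under the
    // table path, the INSERT ... STAGE command loads them into the CarbonData table.
    object InsertStageSketch {
      def main(args: Array[String]): Unit = {
        // Assumes CarbonData SQL extensions are already configured on this session.
        val spark = SparkSession.builder().appName("insert-stage-sketch").getOrCreate()
        spark.sql(
          "CREATE TABLE IF NOT EXISTS sink_table (id INT, name STRING) " +
            "PARTITIONED BY (dt STRING) STORED AS carbondata")
        spark.sql("INSERT INTO sink_table STAGE")
      }
    }
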
---
 .../carbondata/core/statusmanager/StageInput.java  |  58 ++
 .../apache/carbondata/core/util/DataTypeUtil.java  |  23 ++-
 .../org/apache/carbon/flink/ProxyFileWriter.java   |   4 +-
 .../carbon/flink/ProxyFileWriterFactory.java   |   2 +-
 .../org/apache/carbon/flink/ProxyRecoverable.java  |  18 +-
 .../carbon/flink/ProxyRecoverableOutputStream.java |   6 +-
 .../carbon/flink/ProxyRecoverableSerializer.java   |   8 +-
 .../apache/carbon/flink/CarbonLocalProperty.java   |   2 +
 .../org/apache/carbon/flink/CarbonLocalWriter.java | 150 +++---
 .../carbon/flink/CarbonLocalWriterFactory.java |  64 +-
 .../org/apache/carbon/flink/CarbonS3Property.java  |   2 +
 .../org/apache/carbon/flink/CarbonS3Writer.java| 124 ++--
 .../apache/carbon/flink/CarbonS3WriterFactory.java |  71 +--
 .../java/org/apache/carbon/flink/CarbonWriter.java | 221 -
 .../apache/carbon/flink/CarbonWriterFactory.java   |   8 +-
 ...riter.scala => TestCarbonPartitionWriter.scala} | 123 +---
 .../org/apache/carbon/flink/TestCarbonWriter.scala |  10 +-
 .../scala/org/apache/carbon/flink/TestSource.scala |  16 +-
 .../management/CarbonInsertFromStageCommand.scala  | 137 -
 .../command/management/CarbonLoadDataCommand.scala |  44 ++--
 20 files changed, 760 insertions(+), 331 deletions(-)

diff --git 
a/core/src/main/java/org/apache/carbondata/core/statusmanager/StageInput.java 
b/core/src/main/java/org/apache/carbondata/core/statusmanager/StageInput.java
index b4bf084..10dd51d 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/statusmanager/StageInput.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/statusmanager/StageInput.java
@@ -39,6 +39,12 @@ public class StageInput {
*/
   private Map files;
 
+  /**
+   * this list of partition data information in this StageInput
+   * @see PartitionLocation
+   */
+  private List<PartitionLocation> locations;
+
   public StageInput() {
 
   }
@@ -48,6 +54,11 @@ public class StageInput {
 this.files = files;
   }
 
+  public StageInput(String base, List<PartitionLocation> locations) {
+this.base = base;
+this.locations = locations;
+  }
+
   public String getBase() {
 return base;
   }
@@ -64,6 +75,14 @@ public class StageInput {
 this.files = files;
   }
 
+  public List<PartitionLocation> getLocations() {
+return this.locations;
+  }
+
+  public void setLocations(final List<PartitionLocation> locations) {
+this.locations = locations;
+  }
+
   public List createSplits() {
 return
 files.entrySet().stream().filter(
@@ -75,4 +94,43 @@ public class StageInput {
 ).collect(Collectors.toList());
   }
 
+  public static final class PartitionLocation {
+
+public PartitionLocation() {
+
+}
+
+public PartitionLocation(final Map<String, String> partitions, final Map<String, Long> files) {
+  this.partitions = partitions;
+  this.files = files;
+}
+
+/**
+ * the list of (partitionColumn, partitionValue) of this partition.
+ */
+private Map<String, String> partitions;
+
+/**
+ * the list of (file, length) in this partition.
+ */
+private Map<String, Long> files;
+
+public Map<String, String> getPartitions() {
+  return this.partitions;
+}
+
+public void setPartitions(final Map<String, String> partitions) {
+  this.partitions = partitions;
+}
+
+public Map<String, Long> getFiles() {
+  return this.files;
+}
+
+public void setFiles(final Map<String, Long> files) {
+  this.files = files;
+}
+
+  }
+
 }
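
 A minimal construction sketch for the new partition metadata (an assumption-based
 illustration, not from the commit): it takes the partition map as
 (partitionColumn -> partitionValue) and the file map as (file -> length), as the
 field comments above describe; all literal values and paths are made up.

    import java.util
    import org.apache.carbondata.core.statusmanager.StageInput

    // Sketch only: build one PartitionLocation and wrap it in a StageInput, mirroring
    // the constructors added in this hunk. Values are illustrative.
    object StageInputSketch {
      def main(args: Array[String]): Unit = {
        val partitions = new util.HashMap[String, String]()
        partitions.put("dt", "2019-12-29")                              // partitionColumn -> partitionValue
        val files = new util.HashMap[String, java.lang.Long]()
        files.put("part-0-0.carbondata", java.lang.Long.valueOf(1024L)) // file -> length in bytes
        val location = new StageInput.PartitionLocation(partitions, files)
        val stageInput = new StageInput("/warehouse/db/sink_table", util.Arrays.asList(location))
        println(stageInput.getLocations.size())
      }
    }
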
diff --git 
a/core/src/main/java/org/apache/carbondata/core/util/DataTypeUtil.java 
b/core/src/main/java/org/apache/carbondata/core/util/DataTypeUtil.java
index a33f2d4..c07f08b 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/DataTypeUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/DataTypeUtil.java
@@ -75,7 +75,7 @@ public final class DataTypeUtil {
   /**
* DataType converter for different computing engines
*/
-  private static DataTypeConverter converter;
+  private static final ThreadLocal<DataTypeConverter> converter = new ThreadLocal<>();
 
   /**
* This method will convert a given value to its specific type
@@ -105,7 +105,7 @@ public final class DataTypeUtil {
   new BigDecimal(msrValue).setScale(scale, RoundingMode.HALF_UP

[carbondata] branch master updated: [CARBONDATA-3641] Refactory data loading for partition table

2019-12-29 Thread jackylk
This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 45e84e5  [CARBONDATA-3641] Refactory data loading for partition table
45e84e5 is described below

commit 45e84e58bf6235393653c8e2c3d85a3c27c7872c
Author: QiangCai 
AuthorDate: Fri Dec 27 19:56:27 2019 +0800

[CARBONDATA-3641] Refactory data loading for partition table

[Background]

Currently, CarbonData implements only Hadoop commit algorithm version 1, which 
generates too many segment files during loading and produces too many small 
data files and index files.

[Modification]
 1. implement a carbon commit algorithm that avoids moving data files and index 
files
 2. generate the final segment file directly
 3. optimize global_sort to avoid the small-files issue
 4. support complex data types in partition tables (non-partition columns)

This closes #3535
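
 For background (not part of this change): the "hadoop commit algorithm version"
 mentioned above is the standard FileOutputCommitter setting. Version 1 promotes
 task output through a job-level temporary directory at job commit, while version 2
 moves task output to its final location at task commit. A minimal Spark-side
 sketch of flipping that setting (app name and output path are illustrative):

    import org.apache.spark.sql.SparkSession

    // Sketch only: select FileOutputCommitter algorithm version 2 for a Spark job.
    // This is the generic Hadoop switch the commit message refers to; it is not the
    // carbon commit algorithm introduced by this change.
    object CommitterVersionSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("committer-version-sketch")
          .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
          .getOrCreate()
        spark.range(10).write.mode("overwrite").parquet("/tmp/committer-version-sketch")
      }
    }
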
---
 .../carbondata/core/metadata/SegmentFileStore.java |  18 +-
 .../apache/carbondata/core/util/CarbonUtil.java|  59 ++-
 .../core/util/OutputFilesInfoHolder.java   |  78 
 .../BigDecimalSerializableComparator.java} |  41 +-
 .../comparator/BooleanSerializableComparator.java} |  45 +--
 .../ByteArraySerializableComparator.java}  |  44 +--
 .../core/util/comparator/Comparator.java   | 135 ---
 .../comparator/DoubleSerializableComparator.java}  |  39 +-
 .../comparator/FloatSerializableComparator.java}   |  39 +-
 .../comparator/IntSerializableComparator.java} |  45 +--
 .../comparator/LongSerializableComparator.java}|  45 +--
 .../comparator/ShortSerializableComparator.java}   |  45 +--
 .../comparator/StringSerializableComparator.java}  |  41 +-
 .../core/writer/CarbonIndexFileMergeWriter.java|  70 +++-
 .../apache/carbondata/events/OperationContext.java |   2 +-
 dev/findbugs-exclude.xml   |   4 +
 .../hadoop/api/CarbonOutputCommitter.java  | 139 +--
 .../hadoop/api/CarbonTableOutputFormat.java|  35 ++
 .../StandardPartitionTableLoadingTestCase.scala|   2 +-
 .../StandardPartitionTableQueryTestCase.scala  |  30 +-
 .../spark/load/DecimalSerializableComparator.java  |  43 +--
 .../carbondata/spark/load/GlobalSortHelper.scala   | 161 
 .../org/apache/spark/rdd/CarbonMergeFilesRDD.scala | 166 ++--
 .../spark/sql/events/MergeIndexEventListener.scala |  35 +-
 .../command/management/CarbonLoadDataCommand.scala |  99 +++--
 .../datasources/SparkCarbonTableFormat.scala   | 417 +++--
 .../loading/CarbonDataLoadConfiguration.java   |  11 +
 .../processing/loading/DataLoadProcessBuilder.java |   1 +
 .../processing/loading/model/CarbonLoadModel.java  |  13 +
 .../InputProcessorStepWithNoConverterImpl.java |   8 +-
 .../store/CarbonFactDataHandlerModel.java  |  14 +
 .../store/writer/AbstractFactDataWriter.java   |  10 +-
 .../processing/util/CarbonLoaderUtil.java  |  40 ++
 33 files changed, 1313 insertions(+), 661 deletions(-)

diff --git 
a/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java 
b/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
index 87b68c0..a4d3a29 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
@@ -111,7 +111,7 @@ public class SegmentFileStore {
 if (!carbonFile.exists()) {
   carbonFile.mkdirs();
 }
-CarbonFile tempFolder = null;
+CarbonFile tempFolder;
 if (isMergeIndexFlow) {
   tempFolder = FileFactory.getCarbonFile(location);
 } else {
@@ -1228,12 +1228,12 @@ public class SegmentFileStore {
   locationMap = new HashMap<>();
 }
 
-SegmentFile merge(SegmentFile mapper) {
-  if (this == mapper) {
+public SegmentFile merge(SegmentFile segmentFile) {
+  if (this == segmentFile) {
 return this;
   }
-  if (locationMap != null && mapper.locationMap != null) {
-for (Map.Entry entry : 
mapper.locationMap.entrySet()) {
+  if (locationMap != null && segmentFile.locationMap != null) {
+for (Map.Entry entry : 
segmentFile.locationMap.entrySet()) {
   FolderDetails folderDetails = locationMap.get(entry.getKey());
   if (folderDetails != null) {
 folderDetails.merge(entry.getValue());
@@ -1243,7 +1243,7 @@ public class SegmentFileStore {
 }
   }
   if (locationMap == null) {
-locationMap = mapper.locationMap;
+locationMap = segmentFile.locationMap;
   }
   return this;
 }
@@ -1268,6 +1268,12 @@ public class SegmentFileStore {
 }
   }
 
+  public static SegmentFile createSegmentFile(String partitionPath, 
FolderDetails folderDetails) {
+SegmentFile se

Jenkins build is back to stable : carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3937

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3937

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #2159

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #2159

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3938

2019-12-29 Thread Apache Jenkins Server
See 




Jenkins build is unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3938

2019-12-29 Thread Apache Jenkins Server
See 




[carbondata] branch master updated: [HOTFIX] Modify pull request template

2019-12-29 Thread jackylk
This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 7374a89  [HOTFIX] Modify pull request template
7374a89 is described below

commit 7374a894d6a53ac32734d59a5f320a3635efb905
Author: Jacky Li 
AuthorDate: Mon Dec 30 15:33:51 2019 +0800

[HOTFIX] Modify pull request template
---
 .github/PULL_REQUEST_TEMPLATE.md | 25 +++--
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index d80ff46..5c3dceb 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,18 +1,15 @@
-Be sure to do all of the following checklist to help us incorporate 
-your contribution quickly and easily:
-
- - [ ] Any interfaces changed?
+ ### Why is this PR needed?
  
- - [ ] Any backward compatibility impacted?
  
- - [ ] Document update required?
+ ### What changes were proposed in this PR?
+
+
+ ### Does this PR introduce any user interface change?
+ - No
+ - Yes. (please explain the change and update document)
 
- - [ ] Testing done
-Please provide details on 
-- Whether new unit test cases have been added or why no new tests are 
required?
-- How it is tested? Please attach test report.
-- Is it a performance related change? Please attach the performance 
test report.
-- Any additional information to help reviewers in testing this change.
-   
- - [ ] For large changes, please consider breaking it into sub-tasks under an 
umbrella JIRA. 
+ ### Is any new testcase added?
+ - No
+ - Yes
 
+