See <https://builds.apache.org/job/Tajo-master-build/780/changes>

Changes:

[hyunsik] TAJO-1718: Refine code for Parquet 1.8.1.

------------------------------------------
[...truncated 660052 lines...]
2015-07-29 11:37:21,029 INFO: org.apache.tajo.worker.TaskImpl (initPlan(179)) - 
==================================
2015-07-29 11:37:21,029 INFO: org.apache.tajo.worker.TaskAttemptContext 
(setState(137)) - Query status of ta_1438168766776_1910_000001_000000_00 is 
changed to TA_RUNNING
2015-07-29 11:37:21,043 INFO: BlockStateChange (logAddStoredBlock(2473)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46614 is added to 
blk_1073747950_7126{blockUCState=COMMITTED, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-25139a9a-064e-4199-8cc7-fe4efd4e0414:NORMAL:127.0.0.1:46614|RBW]]}
 size 18
2015-07-29 11:37:21,444 INFO: org.apache.tajo.worker.TaskAttemptContext 
(setState(137)) - Query status of ta_1438168766776_1910_000001_000000_00 is 
changed to TA_SUCCEEDED
2015-07-29 11:37:21,445 INFO: org.apache.tajo.worker.TaskImpl (run(457)) - 
ta_1438168766776_1910_000001_000000_00 completed. Worker's task counter - 
total:1, succeeded: 1, killed: 0, failed: 0
2015-07-29 11:37:21,445 INFO: org.apache.tajo.querymaster.Stage 
(transition(1282)) - Stage - eb_1438168766776_1910_000001 finalize NONE_SHUFFLE 
(total=1, success=1, killed=0)
2015-07-29 11:37:21,449 INFO: org.apache.tajo.querymaster.DefaultTaskScheduler 
(stop(158)) - Task Scheduler stopped
2015-07-29 11:37:21,449 INFO: org.apache.tajo.querymaster.DefaultTaskScheduler 
(run(139)) - TaskScheduler schedulingThread stopped
2015-07-29 11:37:21,450 INFO: org.apache.tajo.querymaster.Stage 
(transition(1340)) - Stage completed - eb_1438168766776_1910_000001 (total=1, 
success=1, killed=0)
2015-07-29 11:37:21,450 INFO: org.apache.tajo.querymaster.Query (handle(774)) - 
Processing q_1438168766776_1910 of type STAGE_COMPLETED
2015-07-29 11:37:21,450 INFO: 
org.apache.tajo.engine.planner.global.ParallelExecutionQueue (next(95)) - Next 
executable block eb_1438168766776_1910_000002
2015-07-29 11:37:21,450 INFO: org.apache.tajo.querymaster.Query 
(transition(721)) - Complete Stage[eb_1438168766776_1910_000001], State: 
SUCCEEDED, 1/1. 
2015-07-29 11:37:21,450 INFO: org.apache.tajo.querymaster.Query (handle(774)) - 
Processing q_1438168766776_1910 of type QUERY_COMPLETED
2015-07-29 11:37:21,450 INFO: org.apache.tajo.worker.TaskManager 
(stopExecutionBlock(162)) - Stopped execution block:eb_1438168766776_1910_000001
2015-07-29 11:37:21,452 INFO: org.apache.tajo.querymaster.Query (handle(792)) - 
q_1438168766776_1910 Query Transitioned from QUERY_RUNNING to QUERY_SUCCEEDED
2015-07-29 11:37:21,452 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(handle(294)) - Query completion notified from q_1438168766776_1910 final 
state: QUERY_SUCCEEDED
2015-07-29 11:37:21,453 INFO: org.apache.tajo.master.QueryInProgress 
(heartbeat(252)) - Received QueryMaster 
heartbeat:q_1438168766776_1910,state=QUERY_SUCCEEDED,progress=1.0, 
queryMaster=asf900.gq1.ygridcore.net
2015-07-29 11:37:21,453 INFO: org.apache.tajo.master.QueryManager 
(stopQuery(279)) - Stop QueryInProgress:q_1438168766776_1910
2015-07-29 11:37:21,453 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(serviceStop(171)) - Stopping QueryMasterTask:q_1438168766776_1910
2015-07-29 11:37:21,453 INFO: org.apache.tajo.master.QueryInProgress 
(stopProgress(117)) - =========================================================
2015-07-29 11:37:21,454 INFO: org.apache.tajo.master.QueryInProgress 
(stopProgress(118)) - Stop query:q_1438168766776_1910
2015-07-29 11:37:21,454 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(cleanupQuery(463)) - Cleanup resources of all workers. Query: 
q_1438168766776_1910, workers: 1
2015-07-29 11:37:21,454 INFO: org.apache.tajo.querymaster.QueryMasterTask 
(serviceStop(187)) - Stopped QueryMasterTask:q_1438168766776_1910
2015-07-29 11:37:21,599 INFO: org.apache.tajo.util.history.HistoryWriter 
(writeQueryHistory(358)) - Saving query summary: 
hdfs://localhost:44682/tmp/tajo-jenkins/staging/history/20150729/query-detail/q_1438168766776_1910/query.hist
2015-07-29 11:37:21,607 INFO: BlockStateChange (logAddStoredBlock(2473)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46614 is added to 
blk_1073747951_7127{blockUCState=COMMITTED, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-25139a9a-064e-4199-8cc7-fe4efd4e0414:NORMAL:127.0.0.1:46614|RBW]]}
 size 6067
2015-07-29 11:37:21,718 INFO: org.apache.tajo.master.TajoMasterClientService 
(getQueryResultData(575)) - Send result to client for 
33f84bd6-4768-49f3-9645-3e9602806635,q_1438168766776_1910, 2 rows
2015-07-29 11:37:21,719 INFO: org.apache.tajo.master.TajoMasterClientService 
(getQueryResultData(575)) - Send result to client for 
33f84bd6-4768-49f3-9645-3e9602806635,q_1438168766776_1910, 0 rows
2015-07-29 11:37:21,721 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 33f84bd6-4768-49f3-9645-3e9602806635 is removed.
2015-07-29 11:37:21,722 INFO: org.apache.tajo.master.GlobalEngine 
(updateQuery(234)) - SQL: DROP TABLE IF EXISTS "TestTajoJdbc".table1
2015-07-29 11:37:21,723 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(277)) - Non Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-07-29 11:37:21,723 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(279)) - =============================================
2015-07-29 11:37:21,723 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(280)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-07-29 11:37:21,723 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(281)) - =============================================
2015-07-29 11:37:21,724 INFO: org.apache.tajo.master.exec.DDLExecutor 
(dropTable(320)) - relation "TestTajoJdbc.table1" is already exists.
2015-07-29 11:37:21,725 INFO: org.apache.tajo.master.GlobalEngine 
(updateQuery(234)) - SQL: DROP TABLE IF EXISTS testaltertablepartition
2015-07-29 11:37:21,725 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(277)) - Non Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-07-29 11:37:21,725 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(279)) - =============================================
2015-07-29 11:37:21,725 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(280)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2015-07-29 11:37:21,725 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(281)) - =============================================
2015-07-29 11:37:21,726 INFO: org.apache.tajo.catalog.CatalogServer 
(dropTable(697)) - relation "TestTajoJdbc.testaltertablepartition" is deleted 
from the catalog (127.0.0.1:47816)
2015-07-29 11:37:21,726 INFO: org.apache.tajo.master.exec.DDLExecutor 
(dropTable(338)) - relation "TestTajoJdbc.testaltertablepartition" is  dropped.
2015-07-29 11:37:21,727 INFO: org.apache.tajo.master.exec.DDLExecutor 
(dropDatabase(191)) - database TestTajoJdbc is dropped.
2015-07-29 11:37:21,728 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session ae4e1293-7043-46c8-9061-20454bc0f0ad is removed.
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.197 sec - in 
org.apache.tajo.jdbc.TestTajoJdbc
2015-07-29 11:37:21,739 INFO: org.apache.tajo.worker.TajoWorker (run(565)) - 
============================================
2015-07-29 11:37:21,740 INFO: org.apache.tajo.worker.TajoWorker (run(566)) - 
TajoWorker received SIGINT Signal
2015-07-29 11:37:21,740 INFO: org.apache.tajo.worker.TajoWorker (run(567)) - 
============================================
2015-07-29 11:37:21,745 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session ed9ef584-f94f-4116-b568-145454f6efda is removed.
2015-07-29 11:37:21,747 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 5f9d03f5-830d-407b-9e74-e69072b175c3 is removed.
hadoop.InternalParquetRecordReader: RecordReader initialized will read a total 
of 2 records.
Jul 29, 2015 11:22:29 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 29, 2015 11:22:29 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 2
Jul 29, 2015 11:22:44 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 29, 2015 11:22:44 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:44 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[RLE, PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:44 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 29, 2015 11:22:44 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:44 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[RLE, PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[RLE, PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[RLE, PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[RLE, PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 29, 2015 11:22:45 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 29, 2015 11:22:45 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 29, 2015 11:22:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 29, 2015 11:22:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 29, 2015 11:22:46 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:46 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 29, 2015 11:22:46 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 29, 2015 11:22:46 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 29, 2015 11:22:46 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 29, 2015 11:22:46 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 29, 2015 11:22:46 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 29, 2015 11:22:49 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 212
Jul 29, 2015 11:22:49 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 3 entries, 12B raw, 3B comp}
Jul 29, 2015 11:22:49 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: [RLE, 
PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:49 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate_function] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[RLE, PLAIN, BIT_PACKED]
Jul 29, 2015 11:22:50 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:50 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 29, 2015 11:22:50 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 29, 2015 11:22:50 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 5 records.
Jul 29, 2015 11:22:50 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 29, 2015 11:22:50 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
2 ms. row count = 5
2015-07-29 11:37:21,748 INFO: org.apache.tajo.master.TajoMaster (run(567)) - 
============================================
2015-07-29 11:37:21,751 INFO: org.apache.tajo.master.TajoMaster (run(568)) - 
TajoMaster received SIGINT Signal
2015-07-29 11:37:21,751 INFO: org.apache.tajo.master.TajoMaster (run(569)) - 
============================================
2015-07-29 11:37:21,753 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (Tajo-REST) listened on 0:0:0:0:0:0:0:0:47819) shutdown
2015-07-29 11:37:21,754 WARN: org.apache.hadoop.hdfs.DFSClient 
(completeFile(2275)) - Caught exception 
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2269)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2234)
        at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
        at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
        at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
        at 
org.apache.tajo.util.history.HistoryWriter$WriterThread.writeQueryHistory(HistoryWriter.java:362)
        at 
org.apache.tajo.util.history.HistoryWriter$WriterThread.writeHistory(HistoryWriter.java:311)
        at 
org.apache.tajo.util.history.HistoryWriter$WriterThread.run(HistoryWriter.java:237)
2015-07-29 11:37:21,754 INFO: org.apache.tajo.ws.rs.TajoRestService 
(serviceStop(129)) - Tajo Rest Service stopped.
2015-07-29 11:37:21,754 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-07-29 11:37:21,757 INFO: org.apache.tajo.catalog.CatalogServer 
(serviceStop(191)) - Catalog Server (127.0.0.1:47816) shutdown
2015-07-29 11:37:21,758 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (CatalogProtocol) listened on 127.0.0.1:47816) shutdown
2015-07-29 11:37:21,759 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_127.0.0.1_47818 stopped.
2015-07-29 11:37:21,763 INFO: org.apache.tajo.util.history.HistoryWriter 
(writeQueryHistory(372)) - Saving query unit: 
hdfs://localhost:44682/tmp/tajo-jenkins/staging/history/20150729/query-detail/q_1438168766776_1910/eb_1438168766776_1910_000001.hist
2015-07-29 11:37:21,763 INFO: BlockStateChange (logAddStoredBlock(2473)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46614 is added to 
blk_1073741834_1010{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-f01d96cd-499a-4f9a-9e3f-981dbe813808:NORMAL:127.0.0.1:46614|RBW]]}
 size 704
2015-07-29 11:37:21,764 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-07-29 11:37:21,764 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryCoordinatorProtocol) listened on 127.0.0.1:47818) 
shutdown
2015-07-29 11:37:21,766 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoMasterClientProtocol) listened on 127.0.0.1:47817) 
shutdown
2015-07-29 11:37:21,768 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoResourceTrackerProtocol) listened on 
127.0.0.1:47815) shutdown
2015-07-29 11:37:21,769 INFO: org.apache.tajo.master.TajoMaster 
(serviceStop(401)) - Tajo Master main thread exiting
2015-07-29 11:37:21,770 INFO: BlockStateChange (logAddStoredBlock(2473)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:46614 is added to 
blk_1073747952_7128{blockUCState=COMMITTED, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-25139a9a-064e-4199-8cc7-fe4efd4e0414:NORMAL:127.0.0.1:46614|RBW]]}
 size 518
2015-07-29 11:37:22,172 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_asf900.gq1.ygridcore.net_47820 stopped.
2015-07-29 11:37:22,184 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(serviceStop(111)) - NodeStatusUpdater stopped.
2015-07-29 11:37:22,184 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(run(262)) - Heartbeat Thread stopped.
2015-07-29 11:37:22,185 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:47822) 
shutdown
2015-07-29 11:37:22,185 INFO: 
org.apache.tajo.querymaster.QueryMasterManagerService (serviceStop(106)) - 
QueryMasterManagerService stopped
2015-07-29 11:37:22,186 INFO: org.apache.tajo.querymaster.QueryMaster 
(run(417)) - QueryMaster heartbeat thread stopped
2015-07-29 11:37:22,188 INFO: org.apache.tajo.querymaster.QueryMaster 
(serviceStop(168)) - QueryMaster stopped
2015-07-29 11:37:22,188 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(99)) - TajoWorkerClientService stopping
2015-07-29 11:37:22,189 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterClientProtocol) listened on 
0:0:0:0:0:0:0:0:47821) shutdown
2015-07-29 11:37:22,189 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(103)) - TajoWorkerClientService stopped
2015-07-29 11:37:22,189 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:47820) 
shutdown
2015-07-29 11:37:22,190 INFO: org.apache.tajo.worker.TajoWorkerManagerService 
(serviceStop(93)) - TajoWorkerManagerService stopped
2015-07-29 11:37:22,190 INFO: org.apache.tajo.worker.TajoWorker 
(serviceStop(375)) - TajoWorker main thread exiting

Results :

Tests in error: 
  
TestGroupByQuery.testDistinctAggregation3:269->QueryTestCaseBase.executeQuery:397->QueryTestCaseBase.executeFile:591
 » SQL

Tests run: 1561, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  1.528 s]
[INFO] Tajo Project POM .................................. SUCCESS [  1.158 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  2.661 s]
[INFO] Tajo Common ....................................... SUCCESS [ 29.559 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  1.413 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  4.925 s]
[INFO] Tajo Plan ......................................... SUCCESS [  4.865 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  0.414 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [ 47.161 s]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.336 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [  9.661 s]
[INFO] Tajo Storage Common ............................... SUCCESS [ 10.487 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [ 55.233 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [  4.331 s]
[INFO] Tajo PullServer ................................... SUCCESS [  1.045 s]
[INFO] Tajo Client ....................................... SUCCESS [  1.836 s]
[INFO] Tajo CLI tools .................................... SUCCESS [  1.108 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [  3.148 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [  0.739 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [  3.325 s]
[INFO] Tajo Metrics ...................................... SUCCESS [  0.951 s]
[INFO] Tajo Core ......................................... FAILURE [18:27 min]
[INFO] Tajo RPC .......................................... SKIPPED
[INFO] Tajo Catalog Drivers Hive ......................... SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Storage ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 21:34 min
[INFO] Finished at: 2015-07-29T11:37:22+00:00
[INFO] Final Memory: 67M/479M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project tajo-core: There are test failures.
[ERROR] 
[ERROR] Please refer to 
<https://builds.apache.org/job/Tajo-master-build/ws/tajo-core/target/surefire-reports>
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core
Build step 'Execute shell' marked build as failure
Updating TAJO-1718
