See <https://builds.apache.org/job/Tajo-master-build/221/changes>

Changes:

[hyunsik] TAJO-832: NPE occurs when Exception's message is null in Task. 
(Hyoungjun Kim via hyunsik)

[hyunsik] TAJO-827: SUM() overflow in the case of INT4. (Hyoungjun Kim via 
hyunsik)

[jinossy] TAJO-808: Fix pre-commit build failure. (jinho)

[jhjung] TAJO-819: KillQuery does not work for running query on TajoWorker. 
(jaehwa)

[jihoonson] TAJO-825: Datetime type refactoring. (Hyoungjun Kim via jihoon)

[jihoonson] TAJO-825: Datetime type refactoring. (fixed missing changes)
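The TAJO-827 entry above (SUM() overflow in the case of INT4) is the classic fixed-width accumulator problem: summing 32-bit values into a 32-bit accumulator wraps around, while widening the accumulator to 64 bits does not. A minimal illustrative sketch (class and method names here are hypothetical, not Tajo's actual implementation):

```java
public class SumOverflowDemo {
    // Summing INT4 (32-bit) values into an int accumulator silently wraps
    // around once the running total exceeds Integer.MAX_VALUE.
    static int sumInt(int[] values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    // Widening the accumulator to long (INT8) keeps the exact total.
    static long sumLong(int[] values) {
        long sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        int[] values = {2_000_000_000, 2_000_000_000};
        System.out.println(sumInt(values));  // wraps to -294967296
        System.out.println(sumLong(values)); // 4000000000
    }
}
```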

------------------------------------------
[...truncated 92791 lines...]
2014-05-23 19:52:48,711 INFO: org.apache.tajo.catalog.CatalogServer 
(createTable(523)) - relation "Jdbc_Test4.table3" is added to the catalog 
(127.0.0.1:14903)
2014-05-23 19:52:48,712 INFO: org.apache.tajo.master.GlobalEngine 
(createTableOnPath(686)) - Table Jdbc_Test4.table3 is created (0)
2014-05-23 19:52:48,714 INFO: org.apache.tajo.master.GlobalEngine 
(executeQuery(121)) - Query: CREATE TABLE "Jdbc_Test4".table4 (age int)
2014-05-23 19:52:48,714 INFO: org.apache.tajo.master.GlobalEngine 
(buildExpressionFromSql(175)) - hive.query.mode:false
2014-05-23 19:52:48,714 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(482)) - =============================================
2014-05-23 19:52:48,715 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(483)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2014-05-23 19:52:48,715 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(484)) - =============================================
2014-05-23 19:52:48,718 INFO: org.apache.tajo.catalog.CatalogServer 
(createTable(523)) - relation "Jdbc_Test4.table4" is added to the catalog 
(127.0.0.1:14903)
2014-05-23 19:52:48,719 INFO: org.apache.tajo.master.GlobalEngine 
(createTableOnPath(686)) - Table Jdbc_Test4.table4 is created (0)
2014-05-23 19:52:48,724 INFO: org.apache.tajo.master.session.SessionManager 
(removeSession(80)) - Session 94b20d65-b39f-4db6-b15b-ce5511f2ccc3 is removed.
2014-05-23 19:52:48,747 INFO: org.apache.tajo.master.session.SessionManager 
(createSession(73)) - Session 9d9f555c-6504-414c-b4ec-90a32bd7b8c1 is created.
2014-05-23 19:52:48,748 INFO: org.apache.tajo.master.session.SessionManager 
(removeSession(80)) - Session 9d9f555c-6504-414c-b4ec-90a32bd7b8c1 is removed.
2014-05-23 19:52:48,752 INFO: org.apache.tajo.master.GlobalEngine 
(executeQuery(121)) - Query: DROP TABLE jdbc_test3.table1
2014-05-23 19:52:48,752 INFO: org.apache.tajo.master.GlobalEngine 
(buildExpressionFromSql(175)) - hive.query.mode:false
2014-05-23 19:52:48,753 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(482)) - =============================================
2014-05-23 19:52:48,753 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(483)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2014-05-23 19:52:48,753 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(484)) - =============================================
2014-05-23 19:52:48,753 INFO: org.apache.tajo.catalog.CatalogServer 
(dropTable(554)) - relation "jdbc_test3.table1" is deleted from the catalog 
(127.0.0.1:14903)
2014-05-23 19:52:48,753 INFO: org.apache.tajo.master.GlobalEngine 
(dropTable(790)) - relation "jdbc_test3.table1" is  dropped.
2014-05-23 19:52:48,754 INFO: org.apache.tajo.master.GlobalEngine 
(executeQuery(121)) - Query: DROP TABLE jdbc_test3.table2
2014-05-23 19:52:48,755 INFO: org.apache.tajo.master.GlobalEngine 
(buildExpressionFromSql(175)) - hive.query.mode:false
2014-05-23 19:52:48,755 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(482)) - =============================================
2014-05-23 19:52:48,755 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(483)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2014-05-23 19:52:48,755 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(484)) - =============================================
2014-05-23 19:52:48,755 INFO: org.apache.tajo.catalog.CatalogServer 
(dropTable(554)) - relation "jdbc_test3.table2" is deleted from the catalog 
(127.0.0.1:14903)
2014-05-23 19:52:48,756 INFO: org.apache.tajo.master.GlobalEngine 
(dropTable(790)) - relation "jdbc_test3.table2" is  dropped.
2014-05-23 19:52:48,757 INFO: org.apache.tajo.master.GlobalEngine 
(executeQuery(121)) - Query: DROP DATABASE jdbc_test3
2014-05-23 19:52:48,757 INFO: org.apache.tajo.master.GlobalEngine 
(buildExpressionFromSql(175)) - hive.query.mode:false
2014-05-23 19:52:48,757 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(482)) - =============================================
2014-05-23 19:52:48,757 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(483)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------

DROP_DATABASE(1) jdbc_test3

2014-05-23 19:52:48,758 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(484)) - =============================================
2014-05-23 19:52:48,758 INFO: org.apache.tajo.master.GlobalEngine 
(dropDatabase(743)) - database jdbc_test3 is dropped.
2014-05-23 19:52:48,780 INFO: org.apache.tajo.master.session.SessionManager 
(createSession(73)) - Session 2bfbf7b8-1577-4718-8088-6d12608e4ccf is created.
2014-05-23 19:52:48,782 INFO: org.apache.tajo.master.session.SessionManager 
(removeSession(80)) - Session 2bfbf7b8-1577-4718-8088-6d12608e4ccf is removed.
2014-05-23 19:52:48,786 INFO: org.apache.tajo.master.GlobalEngine 
(executeQuery(121)) - Query: DROP TABLE "Jdbc_Test4".table3
2014-05-23 19:52:48,786 INFO: org.apache.tajo.master.GlobalEngine 
(buildExpressionFromSql(175)) - hive.query.mode:false
2014-05-23 19:52:48,786 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(482)) - =============================================
2014-05-23 19:52:48,787 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(483)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2014-05-23 19:52:48,787 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(484)) - =============================================
2014-05-23 19:52:48,787 INFO: org.apache.tajo.catalog.CatalogServer 
(dropTable(554)) - relation "Jdbc_Test4.table3" is deleted from the catalog 
(127.0.0.1:14903)
2014-05-23 19:52:48,787 INFO: org.apache.tajo.master.GlobalEngine 
(dropTable(790)) - relation "Jdbc_Test4.table3" is  dropped.
2014-05-23 19:52:48,788 INFO: org.apache.tajo.master.GlobalEngine 
(executeQuery(121)) - Query: DROP TABLE "Jdbc_Test4".table4
2014-05-23 19:52:48,788 INFO: org.apache.tajo.master.GlobalEngine 
(buildExpressionFromSql(175)) - hive.query.mode:false
2014-05-23 19:52:48,788 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(482)) - =============================================
2014-05-23 19:52:48,789 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(483)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------


2014-05-23 19:52:48,789 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(484)) - =============================================
2014-05-23 19:52:48,789 INFO: org.apache.tajo.catalog.CatalogServer 
(dropTable(554)) - relation "Jdbc_Test4.table4" is deleted from the catalog 
(127.0.0.1:14903)
2014-05-23 19:52:48,789 INFO: org.apache.tajo.master.GlobalEngine 
(dropTable(790)) - relation "Jdbc_Test4.table4" is  dropped.
2014-05-23 19:52:48,790 INFO: org.apache.tajo.master.GlobalEngine 
(executeQuery(121)) - Query: DROP DATABASE "Jdbc_Test4"
2014-05-23 19:52:48,790 INFO: org.apache.tajo.master.GlobalEngine 
(buildExpressionFromSql(175)) - hive.query.mode:false
2014-05-23 19:52:48,791 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(482)) - =============================================
2014-05-23 19:52:48,791 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(483)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------

DROP_DATABASE(1) Jdbc_Test4

2014-05-23 19:52:48,791 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(484)) - =============================================
2014-05-23 19:52:48,792 INFO: org.apache.tajo.master.GlobalEngine 
(dropDatabase(743)) - database Jdbc_Test4 is dropped.
2014-05-23 19:52:48,794 INFO: org.apache.tajo.master.GlobalEngine 
(dropDatabase(743)) - database TestTajoDatabaseMetaData is dropped.
2014-05-23 19:52:48,794 INFO: org.apache.tajo.master.session.SessionManager 
(removeSession(80)) - Session 50cb4a84-8076-415c-b532-3ecbb45218be is removed.
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.086 sec
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 63,754,392
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [l_orderkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 3 entries, 12B raw, 3B comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [l_partkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 3 entries, 12B raw, 3B comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
43B for [l_suppkey] INT32: 5 values, 26B raw, 26B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [l_linenumber] INT32: 5 values, 9B raw, 9B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 2 entries, 8B raw, 2B comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
63B for [l_quantity] DOUBLE: 5 values, 46B raw, 46B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
63B for [l_extendedprice] DOUBLE: 5 values, 46B raw, 46B comp, 1 pages, 
encodings: [PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
63B for [l_discount] DOUBLE: 5 values, 46B raw, 46B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [l_tax] DOUBLE: 5 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 4 entries, 32B raw, 4B comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [l_returnflag] BINARY: 5 values, 9B raw, 9B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 2 entries, 10B raw, 2B comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
26B for [l_linestatus] BINARY: 5 values, 9B raw, 9B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 2 entries, 10B raw, 2B comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
95B for [l_shipdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
95B for [l_commitdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
95B for [l_receiptdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, 
encodings: [PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [l_shipinstruct] BINARY: 5 values, 10B raw, 10B comp, 1 pages, 
encodings: [RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 3 entries, 49B raw, 3B 
comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
27B for [l_shipmode] BINARY: 5 values, 10B raw, 10B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 4 entries, 32B raw, 4B comp}
May 23, 2014 7:51:01 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
190B for [l_comment] BINARY: 5 values, 171B raw, 171B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.InternalParquetRecordWriter: 
Flushing mem store to file. allocated memory: 63,753,806
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
25B for [l_orderkey] INT32: 2 values, 8B raw, 8B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [l_partkey] INT32: 2 values, 14B raw, 14B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [l_suppkey] INT32: 2 values, 14B raw, 14B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
31B for [l_linenumber] INT32: 2 values, 14B raw, 14B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
39B for [l_quantity] DOUBLE: 2 values, 22B raw, 22B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
39B for [l_extendedprice] DOUBLE: 2 values, 22B raw, 22B comp, 1 pages, 
encodings: [PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
39B for [l_discount] DOUBLE: 2 values, 22B raw, 22B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
25B for [l_tax] DOUBLE: 2 values, 8B raw, 8B comp, 1 pages, encodings: [RLE, 
BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
25B for [l_returnflag] BINARY: 2 values, 8B raw, 8B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 5B raw, 1B comp}
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
25B for [l_linestatus] BINARY: 2 values, 8B raw, 8B comp, 1 pages, encodings: 
[RLE, BIT_PACKED, PLAIN_DICTIONARY], dic { 1 entries, 5B raw, 1B comp}
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
51B for [l_shipdate] BINARY: 2 values, 34B raw, 34B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
51B for [l_commitdate] BINARY: 2 values, 34B raw, 34B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
51B for [l_receiptdate] BINARY: 2 values, 34B raw, 34B comp, 1 pages, 
encodings: [PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
51B for [l_shipinstruct] BINARY: 2 values, 34B raw, 34B comp, 1 pages, 
encodings: [PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
38B for [l_shipmode] BINARY: 2 values, 21B raw, 21B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 
90B for [l_comment] BINARY: 2 values, 71B raw, 71B comp, 1 pages, encodings: 
[PLAIN, RLE, BIT_PACKED]
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 2 records.
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 8 ms. row count = 2
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.ParquetFileReader: reading another 
1 footers
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.InternalParquetRecordReader: 
RecordReader initialized will read a total of 2 records.
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.InternalParquetRecordReader: at 
row 0. reading next block
May 23, 2014 7:51:02 PM INFO: parquet.hadoop.InternalParquetRecordReader: block 
read in memory in 1 ms. row count = 2
2014-05-23 19:52:48,803 ERROR: org.apache.tajo.rpc.RpcProtos 
(exceptionCaught(225)) - RPC Exception:Worker has already been shutdown
2014-05-23 19:52:48,807 ERROR: org.apache.tajo.rpc.RpcProtos 
(exceptionCaught(225)) - RPC Exception:Worker has already been shutdown
2014-05-23 19:52:48,809 INFO: org.apache.tajo.master.session.SessionManager 
(removeSession(80)) - Session 91efa4cc-64c2-4618-8371-ef29f188152b is removed.
2014-05-23 19:52:48,806 ERROR: org.apache.tajo.rpc.RpcProtos 
(exceptionCaught(225)) - RPC Exception:Worker has already been shutdown
2014-05-23 19:52:48,809 ERROR: org.apache.tajo.client.TajoClient (close(140)) - 
java.io.IOException: Connect error to localhost/127.0.0.1:14904 cause 
java.util.concurrent.RejectedExecutionException: Worker has already been 
shutdown
2014-05-23 19:52:48,805 ERROR: org.apache.tajo.rpc.RpcProtos 
(exceptionCaught(225)) - RPC Exception:Worker has already been shutdown
2014-05-23 19:52:48,811 ERROR: org.apache.tajo.client.TajoClient (close(140)) - 
java.io.IOException: Connect error to localhost/127.0.0.1:14904 cause 
java.util.concurrent.RejectedExecutionException: Worker has already been 
shutdown
2014-05-23 19:52:48,810 INFO: org.apache.tajo.master.session.SessionManager 
(removeSession(80)) - Session 78091b59-5841-42a6-8fd3-12d8d53358cf is removed.
2014-05-23 19:52:48,810 INFO: org.apache.tajo.worker.TajoWorker (run(492)) - 
============================================
2014-05-23 19:52:48,814 INFO: org.apache.tajo.worker.TajoWorker (run(493)) - 
TajoWorker received SIGINT Signal
2014-05-23 19:52:48,814 INFO: org.apache.tajo.worker.TajoWorker (run(494)) - 
============================================
2014-05-23 19:52:48,809 ERROR: org.apache.tajo.client.TajoClient (close(140)) - 
java.io.IOException: Connect error to localhost/127.0.0.1:14904 cause 
java.util.concurrent.RejectedExecutionException: Worker has already been 
shutdown
2014-05-23 19:52:48,808 ERROR: org.apache.tajo.client.TajoClient (close(140)) - 
java.io.IOException: Connect error to localhost/127.0.0.1:14904 cause 
java.util.concurrent.RejectedExecutionException: Worker has already been 
shutdown
2014-05-23 19:52:48,820 INFO: org.apache.tajo.worker.WorkerHeartbeatService 
(run(260)) - Worker Resource Heartbeat Thread stopped.
2014-05-23 19:52:48,842 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(128)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:14908) 
shutdown
2014-05-23 19:52:48,843 INFO: org.apache.tajo.worker.TajoWorkerManagerService 
(stop(95)) - TajoWorkerManagerService stopped
2014-05-23 19:52:48,846 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(128)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:14907) 
shutdown
2014-05-23 19:52:48,846 INFO: 
org.apache.tajo.master.querymaster.QueryMasterManagerService (stop(111)) - 
QueryMasterManagerService stopped
2014-05-23 19:52:48,846 INFO: org.apache.tajo.master.querymaster.QueryMaster 
(run(434)) - QueryMaster heartbeat thread stopped
2014-05-23 19:52:48,849 INFO: org.apache.tajo.master.TajoAsyncDispatcher 
(stop(122)) - AsyncDispatcher stopped:querymaster_1400874462979
2014-05-23 19:52:48,850 INFO: org.apache.tajo.master.querymaster.QueryMaster 
(stop(160)) - QueryMaster stop
2014-05-23 19:52:48,850 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(109)) - TajoWorkerClientService stopping
2014-05-23 19:52:48,860 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(128)) - Rpc (QueryMasterClientProtocol) listened on 
0:0:0:0:0:0:0:0:14906) shutdown
2014-05-23 19:52:48,860 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(113)) - TajoWorkerClientService stopped
2014-05-23 19:52:48,860 INFO: org.apache.tajo.worker.TajoWorker (stop(334)) - 
TajoWorker main thread exiting

Results :

Failed tests:   
testCastFromTable(org.apache.tajo.engine.eval.TestSQLExpression): select 
col1::timestamp as t1, col2::float from table1 where t1 = '1980-04-01 
01:50:01'::timestamp expected:<1980-0[3-31 16]:50:01> but was:<1980-0[4-01 
01]:50:01>
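The gap between the expected value (1980-03-31 16:50:01) and the actual value (1980-04-01 01:50:01) in the failure above is exactly nine hours, which matches the Asia/Seoul (UTC+9) offset, suggesting the timestamp cast is sensitive to the build host's default time zone. A hedged sketch of the offset arithmetic (assuming Asia/Seoul, which was UTC+9 in 1980; this is not Tajo's cast code, just the shift the test observed):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class TimezoneShiftDemo {
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Interpret a bare timestamp literal in KST, then render the same
    // instant in UTC: it lands nine hours earlier, on the previous day.
    static String kstLiteralAsUtc(String literal) {
        return LocalDateTime.parse(literal, FMT)
                .atZone(ZoneId.of("Asia/Seoul"))
                .withZoneSameInstant(ZoneId.of("UTC"))
                .format(FMT);
    }

    public static void main(String[] args) {
        // prints 1980-03-31 16:50:01 -- the value the test expected
        System.out.println(kstLiteralAsUtc("1980-04-01 01:50:01"));
    }
}
```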

Tests run: 747, Failures: 1, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [36.634s]
[INFO] Tajo Project POM .................................. SUCCESS [0.846s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [3.024s]
[INFO] Tajo Common ....................................... SUCCESS [56.582s]
[INFO] Tajo Algebra ...................................... SUCCESS [1.668s]
[INFO] Tajo Catalog Common ............................... SUCCESS [6.044s]
[INFO] Tajo Rpc .......................................... SUCCESS [21.690s]
[INFO] Tajo Catalog Client ............................... SUCCESS [1.181s]
[INFO] Tajo Catalog Server ............................... SUCCESS [7.653s]
[INFO] Tajo Storage ...................................... SUCCESS [58.924s]
[INFO] Tajo Core PullServer .............................. SUCCESS [1.258s]
[INFO] Tajo Client ....................................... SUCCESS [4.480s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [1.022s]
[INFO] Tajo Core ......................................... FAILURE [7:27.777s]
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:50.071s
[INFO] Finished at: Fri May 23 19:54:44 UTC 2014
[INFO] Final Memory: 52M/394M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test (default-test) on 
project tajo-core: There are test failures.
[ERROR] 
[ERROR] Please refer to 
<https://builds.apache.org/job/Tajo-master-build/ws/tajo-core/target/surefire-reports>
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core
Build step 'Execute shell' marked build as failure
Updating TAJO-832
Updating TAJO-808
Updating TAJO-819
Updating TAJO-825
Updating TAJO-827
