See <https://builds.apache.org/job/Tajo-master-nightly/1020/changes>

Changes:

[hyunsik] TAJO-2145: Error codes based on errno.h need prefix.

------------------------------------------
[...truncated 739168 lines...]
2016-05-10 11:11:45,894 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(284)) - Optimized Query: 

-----------------------------
Query Block Graph
-----------------------------
|-#ROOT
-----------------------------
Optimization Log:
-----------------------------

CREATE_DATABASE(0) IF NOT EXISTS TestTajoCliNegatives

2016-05-10 11:11:45,894 INFO: org.apache.tajo.master.GlobalEngine 
(createLogicalPlan(285)) - =============================================
2016-05-10 11:11:45,894 INFO: org.apache.tajo.master.exec.DDLExecutor 
(createDatabase(245)) - database "TestTajoCliNegatives" is already exists.
Used heap: 481.6 MiB/889.0 MiB, direct:33.3 MiB/33.3 MiB, mapped:0 B/0 B, 
Active Threads: 372, Run: TestTajoCliNegatives.testDescTable
2016-05-10 11:11:45,897 INFO: org.apache.tajo.session.SessionManager 
(removeSession(85)) - Session cd063c4f-cacd-47e9-baaf-e7764ded1b9d is removed.
2016-05-10 11:11:45,900 INFO: org.apache.tajo.session.SessionManager 
(removeSession(85)) - Session d1ae9fdc-b3cc-4646-835a-0600463020cc is removed.
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.196 sec - in 
org.apache.tajo.cli.tsql.TestTajoCliNegatives
Running org.apache.tajo.cli.tsql.TestDefaultCliOutputFormatter
2016-05-10 11:11:45,912 INFO: org.apache.tajo.session.SessionManager 
(createSession(79)) - Session 6597909b-e12b-403c-9224-9f06d43e397b is created.
2016-05-10 11:11:45,922 INFO: org.apache.tajo.session.SessionManager 
(removeSession(85)) - Session 6597909b-e12b-403c-9224-9f06d43e397b is removed.
2016-05-10 11:11:45,926 INFO: org.apache.tajo.session.SessionManager 
(createSession(79)) - Session e0be3ddc-b0ed-4a7a-9165-e2cac7313e5a is created.
2016-05-10 11:11:45,934 INFO: org.apache.tajo.session.SessionManager 
(removeSession(85)) - Session e0be3ddc-b0ed-4a7a-9165-e2cac7313e5a is removed.
2016-05-10 11:11:45,938 INFO: org.apache.tajo.session.SessionManager 
(createSession(79)) - Session debfda69-800f-4fa8-a0aa-acad9892452a is created.
2016-05-10 11:11:45,955 INFO: org.apache.tajo.session.SessionManager 
(removeSession(85)) - Session debfda69-800f-4fa8-a0aa-acad9892452a is removed.
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec - in 
org.apache.tajo.cli.tsql.TestDefaultCliOutputFormatter
Running org.apache.tajo.cli.tsql.TestSimpleParser
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - in 
org.apache.tajo.cli.tsql.TestSimpleParser
Running org.apache.tajo.cli.tsql.commands.TestHdfsCommand
2016-05-10 11:11:45,963 INFO: org.apache.tajo.session.SessionManager 
(createSession(79)) - Session ca256675-9943-465c-936c-2d54bfedae68 is created.
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.056 sec - in 
org.apache.tajo.cli.tsql.commands.TestHdfsCommand
Running org.apache.tajo.cli.tsql.commands.TestExecExternalShellCommand
2016-05-10 11:11:46,021 INFO: org.apache.tajo.session.SessionManager 
(createSession(79)) - Session b1cb1280-c4e1-4aeb-9d27-210d2fe4ae85 is created.
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.03 sec - in 
org.apache.tajo.cli.tsql.commands.TestExecExternalShellCommand
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for 
[l_receiptdate] BINARY: 2 values, 34B raw, 34B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:15 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 79B for 
[l_shipinstruct] BINARY: 2 values, 34B raw, 34B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:15 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for 
[l_shipmode] BINARY: 2 values, 21B raw, 21B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:15 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 155B for 
[l_comment] BINARY: 2 values, 71B raw, 71B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:15 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:15 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:15 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:15 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:15 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:15 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:36 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 0
May 10, 2016 10:52:36 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 27B for 
[l_orderkey] INT32: 3 values, 6B raw, 6B comp, 1 pages, encodings: [BIT_PACKED, 
RLE, PLAIN]
May 10, 2016 10:52:36 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 27B for 
[l_shipdate_function] BINARY: 3 values, 6B raw, 6B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:37 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:38 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 404
May 10, 2016 10:52:40 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 8 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 3 entries, 12B raw, 3B comp}
May 10, 2016 10:52:40 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate] BINARY: 8 values, 76B raw, 76B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:40 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate_function] BINARY: 8 values, 76B raw, 76B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
May 10, 2016 10:52:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
May 10, 2016 10:52:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
May 10, 2016 10:52:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
2016-05-10 11:11:46,058 INFO: org.mortbay.log (info(67)) - Shutdown hook 
executing
2016-05-10 11:11:46,058 INFO: org.mortbay.log (info(67)) - Shutdown hook 
complete
2016-05-10 11:11:46,065 INFO: org.apache.tajo.session.SessionManager 
(removeSession(85)) - Session ca256675-9943-465c-936c-2d54bfedae68 is removed.
2016-05-10 11:11:46,068 INFO: org.apache.tajo.session.SessionManager 
(removeSession(85)) - Session b1cb1280-c4e1-4aeb-9d27-210d2fe4ae85 is removed.
2016-05-10 11:11:46,071 INFO: org.apache.tajo.worker.TajoWorker (run(505)) - 
============================================
2016-05-10 11:11:46,071 INFO: org.apache.tajo.worker.TajoWorker (run(506)) - 
TajoWorker received SIGINT Signal
2016-05-10 11:11:46,071 INFO: org.apache.tajo.worker.TajoWorker (run(507)) - 
============================================
2016-05-10 11:11:46,075 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(288)) - HistoryWriter_localhost_48724 stopped.
2016-05-10 11:11:46,076 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(serviceStop(113)) - NodeStatusUpdater stopped.
2016-05-10 11:11:46,076 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(run(261)) - Heartbeat Thread stopped.
2016-05-10 11:11:46,076 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2016-05-10 11:11:46,080 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (QueryMasterProtocol) listened on 127.0.0.1:48726) 
shutdown
2016-05-10 11:11:46,081 INFO: 
org.apache.tajo.querymaster.QueryMasterManagerService (serviceStop(98)) - 
QueryMasterManagerService stopped
2016-05-10 11:11:46,082 INFO: org.apache.tajo.querymaster.QueryMaster 
(run(432)) - QueryMaster heartbeat thread stopped
2016-05-10 11:11:46,082 INFO: org.apache.tajo.querymaster.QueryMaster 
(serviceStop(164)) - QueryMaster stopped
2016-05-10 11:11:46,082 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(serviceStop(85)) - TajoWorkerClientService stopping
2016-05-10 11:11:46,083 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (QueryMasterClientProtocol) listened on 127.0.0.1:48725) 
shutdown
2016-05-10 11:11:46,083 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(serviceStop(89)) - TajoWorkerClientService stopped
2016-05-10 11:11:46,083 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (TajoWorkerProtocol) listened on 127.0.0.1:48724) shutdown
2016-05-10 11:11:46,083 INFO: org.apache.tajo.worker.TajoWorkerManagerService 
(serviceStop(90)) - TajoWorkerManagerService stopped
2016-05-10 11:11:46,083 INFO: org.apache.tajo.worker.TajoWorker 
(serviceStop(319)) - TajoWorker main thread exiting
2016-05-10 11:11:48,663 INFO: BlockStateChange (invalidateWorkForOneNode(3482)) 
- BLOCK* BlockManager: ask 127.0.0.1:58046 to delete [blk_1073749168_8344, 
blk_1073749169_8345, blk_1073749170_8346]
2016-05-10 11:11:50,390 INFO: org.apache.tajo.master.TajoMaster (run(567)) - 
============================================
2016-05-10 11:11:50,390 INFO: org.apache.tajo.master.TajoMaster (run(568)) - 
TajoMaster received SIGINT Signal
2016-05-10 11:11:50,390 INFO: org.apache.tajo.master.TajoMaster (run(569)) - 
============================================
2016-05-10 11:11:50,391 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (Tajo-REST) listened on 127.0.0.1:48723) shutdown
2016-05-10 11:11:50,392 INFO: org.apache.tajo.ws.rs.TajoRestService 
(serviceStop(90)) - Tajo Rest Service stopped.
2016-05-10 11:11:50,394 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(288)) - HistoryWriter_127.0.0.1_48722 stopped.
2016-05-10 11:11:50,395 INFO: BlockStateChange (logAddStoredBlock(2621)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:58046 is added to 
blk_1073741835_1011{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-d42120e1-dd40-4014-b7bb-1d6a4f9c192d:NORMAL:127.0.0.1:58046|RBW]]}
 size 704
2016-05-10 11:11:50,396 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (QueryCoordinatorProtocol) listened on 127.0.0.1:48722) 
shutdown
2016-05-10 11:11:50,396 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2016-05-10 11:11:52,600 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (TajoMasterClientProtocol) listened on 127.0.0.1:48721) 
shutdown
2016-05-10 11:11:52,600 INFO: org.apache.tajo.catalog.CatalogServer 
(serviceStop(179)) - Catalog Server (localhost/127.0.0.1:48720) shutdown
2016-05-10 11:11:52,601 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (CatalogProtocol) listened on 127.0.0.1:48720) shutdown
2016-05-10 11:11:52,602 INFO: org.apache.tajo.catalog.store.DerbyStore 
(close(2979)) - Close database 
(jdbc:derby:memory:<https://builds.apache.org/job/Tajo-master-nightly/ws/tajo-core-tests/target/test-data/95901550-ea33-4599-869b-c91bac01fe02/db;create=true>)
2016-05-10 11:11:52,604 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(172)) - Rpc (TajoResourceTrackerProtocol) listened on 
127.0.0.1:48719) shutdown
2016-05-10 11:11:52,604 INFO: org.apache.tajo.master.TajoMaster 
(serviceStop(423)) - Tajo Master main thread exiting
2016-05-10 11:11:52,655 INFO: org.apache.tajo.catalog.store.DerbyStore 
(shutdown(68)) - Derby shutdown complete normally.
2016-05-10 11:11:52,655 INFO: org.apache.tajo.catalog.store.DerbyStore 
(shutdown(75)) - Shutdown database

Results :

Failed tests: 
  
TestSelectQuery.testSimpleQueryWithLimitPartitionedTable:102->QueryTestCaseBase.assertResultSet:735->QueryTestCaseBase.assertResultSet:759->QueryTestCaseBase.verifyResultText:894
 Result Verification expected:<...-------------------
[3,Customer#000000003,MG9kdTD2WBHm,11-719-748-3364,7498.12,AUTOMOBILE, deposits 
eat slyly ironic, even instructions. express foxes detect slyly. blithely even 
accounts abov,1
3,Customer#000000003,MG9kdTD2WBHm,11-719-748-3364,7498.12,AUTOMOBILE, deposits 
eat slyly ironic, even instructions. express foxes detect slyly. blithely even 
accounts abov,1
3,Customer#000000003,MG9kdTD2WBHm,11-719-748-3364,7498.12,AUTOMOBILE, deposits 
eat slyly ironic, even instructions. express foxes detect slyly. blithely even 
accounts abov,1
3,Customer#000000003,MG9kdTD2WBHm,11-719-748-3364,7498.12,AUTOMOBILE, deposits 
eat slyly ironic, even instructions. express foxes detect slyly. blithely even 
accounts abov,1
3,Customer#000000003,MG9kdTD2WBHm,11-719-748-3364,7498.12,AUTOMOBILE, deposits 
eat slyly ironic, even instructions. express foxes detect slyly. blithely even 
accounts abov,1
2,Customer#000000002,XSTf4,NCwDVaWNe6tEgvwfmRchLXak,23-768-687-3665,121.65,AUTOMOBILE,l
 accounts. blithely ironic theodolites integrate boldly: caref,13
2,Customer#000000002,XSTf4,NCwDVaWNe6tEgvwfmRchLXak,23-768-687-3665,121.65,AUTOMOBILE,l
 accounts. blithely ironic theodolites integrate boldly: caref,13
2,Customer#000000002,XSTf4,NCwDVaWNe6tEgvwfmRchLXak,23-768-687-3665,121.65,AUTOMOBILE,l
 accounts. blithely ironic theodolites integrate boldly: caref,13
2,Customer#000000002,XSTf4,NCwDVaWNe6tEgvwfmRchLXak,23-768-687-3665,121.65,AUTOMOBILE,l
 accounts. blithely ironic theodolites integrate boldly: caref,13
2,Customer#000000002,XSTf4,NCwDVaWNe6tEgvwfmRchLXak,23-768-687-3665,121.65,AUTOMOBILE,l
 accounts. blithely ironic theodolites integrate boldly: caref,13]> but 
was:<...-------------------
[null,null,null,null,null,null,for null test,null
null,null,null,null,null,null,for null test2,null
null,null,null,null,null,null,for null test3,null
null,null,null,null,null,null,for null test,null
null,null,null,null,null,null,for null test2,null
null,null,null,null,null,null,for null test3,null
null,null,null,null,null,null,for null test,null
null,null,null,null,null,null,for null test2,null
null,null,null,null,null,null,for null test3,null
null,null,null,null,null,null,for null test,null]>
  TestTablePartitions.testPartitionWithInOperator:1895 
expected:<...-------------------
[N,1,1,17.0
N,1,1,36.0
N,2,2,38.0
R,3,2,45.0
R,3,3,49].0
null,null,null,nu...> but was:<...-------------------
[R,3,3,49.0
N,1,1,17.0
N,2,2,38.0
R,3,2,45.0
N,1,1,36].0
null,null,null,nu...>
  
TestTablePartitions.testQueryCasesOnColumnPartitionedTable:312->QueryTestCaseBase.assertResultSet:746->QueryTestCaseBase.assertResultSet:759->QueryTestCaseBase.verifyResultText:894
 Result Verification expected:<...------------------
2[89.0
1296.0
1444.0
2025.0
2401.0
null
null
null]> but was:<...------------------
2[401.0
289.0
1444.0
2025.0
null
null
null
1296.0]>
Tests in error: 
  
TestTablePartitions.testPartitionWithInOperator:1876->QueryTestCaseBase.executeString:394
 » DuplicateTable
  
TestTablePartitions.testQueryCasesOnColumnPartitionedTable:290->QueryTestCaseBase.executeString:394
 » DuplicateTable

Tests run: 1808, Failures: 3, Errors: 2, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  1.856 s]
[INFO] Tajo Project POM .................................. SUCCESS [  3.173 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  4.773 s]
[INFO] Tajo Common ....................................... SUCCESS [ 35.063 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  2.677 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  5.989 s]
[INFO] Tajo Plan ......................................... SUCCESS [  8.410 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  1.321 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [02:25 min]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.586 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [ 12.505 s]
[INFO] Tajo Storage Common ............................... SUCCESS [  3.597 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [01:21 min]
[INFO] Tajo PullServer ................................... SUCCESS [  1.251 s]
[INFO] Tajo Client ....................................... SUCCESS [  2.713 s]
[INFO] Tajo SQL Parser ................................... SUCCESS [  3.716 s]
[INFO] Tajo CLI tools .................................... SUCCESS [  1.997 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [  1.988 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [  3.645 s]
[INFO] Tajo Metrics ...................................... SUCCESS [  1.750 s]
[INFO] Tajo Core ......................................... SUCCESS [  7.325 s]
[INFO] Tajo RPC .......................................... SUCCESS [  0.915 s]
[INFO] Tajo Catalog Drivers Hive ......................... SUCCESS [ 10.454 s]
[INFO] Tajo Catalog Drivers .............................. SUCCESS [  0.040 s]
[INFO] Tajo Catalog ...................................... SUCCESS [  1.114 s]
[INFO] Tajo Client Example ............................... SUCCESS [  1.104 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [  3.941 s]
[INFO] Tajo Cluster Tests ................................ SUCCESS [  2.755 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [ 37.080 s]
[INFO] Tajo JDBC storage common .......................... SUCCESS [  0.840 s]
[INFO] Tajo PostgreSQL JDBC storage ...................... SUCCESS [  0.902 s]
[INFO] Tajo S3 storage ................................... SUCCESS [  0.187 s]
[INFO] Tajo Storage ...................................... SUCCESS [  0.961 s]
[INFO] Tajo Yarn ......................................... SUCCESS [  1.855 s]
[INFO] Tajo Core Tests ................................... FAILURE [23:07 min]
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 29:42 min
[INFO] Finished at: 2016-05-10T11:11:53+00:00
[INFO] Final Memory: 162M/2071M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.19.1:test (default-test) on 
project tajo-core-tests: There are test failures.
[ERROR] 
[ERROR] Please refer to 
<https://builds.apache.org/job/Tajo-master-nightly/ws/tajo-core-tests/target/surefire-reports>
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core-tests
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results