See <https://builds.apache.org/job/Tajo-master-build/781/changes>

Changes:

[jihoonson] TAJO-1713: Change the type of edge cache in JoinGraphContext from HashMap to LRUMap.
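The change above swaps an unbounded HashMap for an LRUMap so the edge cache cannot grow without limit. This is a minimal sketch of the same LRU idea using only the JDK's LinkedHashMap in access-order mode; it is an illustration of the eviction behavior, not Tajo's actual JoinGraphContext code, and the class name and capacity here are made up for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical bounded LRU cache built on LinkedHashMap's access-order mode. */
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCacheSketch(int maxEntries) {
        // accessOrder = true: iteration order tracks recency of access,
        // so the eldest entry is always the least recently used one.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put; returning true evicts the LRU entry.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        LruCacheSketch<String, Integer> cache = new LruCacheSketch<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a", making "b" the least recently used
        cache.put("c", 3); // exceeds the cap of 2, so "b" is evicted
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.keySet());         // [a, c]
    }
}
```

The point of the fix is the `removeEldestEntry` hook: a plain HashMap retains every join edge ever cached, while an LRU-bounded map caps memory at a fixed number of entries.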

------------------------------------------
[...truncated 660054 lines...]
INFO: 1 * Client response received on thread main
1 < 400
1 < Content-Type: application/json

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Sending client request on thread main
1 > DELETE http://127.0.0.1:25668/rest/databases/TestDropDatabaseNotFound

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 58 * Server has received a request on thread Tajo-REST-5 Server Worker #1
58 > DELETE http://127.0.0.1:25668/rest/databases/TestDropDatabaseNotFound
58 > Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
58 > Connection: keep-alive
58 > Content-Length: 0
58 > Host: 127.0.0.1:25668
58 > User-Agent: Jersey/2.6 (HttpUrlConnection 1.7.0_25)

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 58 * Server responded with a response on thread Tajo-REST-5 Server Worker 
#1
58 < 404

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Client response received on thread main
1 < 404

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Sending client request on thread main
1 > GET http://127.0.0.1:25668/rest/databases/testGetDatabaseNotFound

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 59 * Server has received a request on thread Tajo-REST-5 Server Worker #0
59 > GET http://127.0.0.1:25668/rest/databases/testGetDatabaseNotFound
59 > Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
59 > Connection: keep-alive
59 > Content-Length: 0
59 > Host: 127.0.0.1:25668
59 > User-Agent: Jersey/2.6 (HttpUrlConnection 1.7.0_25)

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 59 * Server responded with a response on thread Tajo-REST-5 Server Worker 
#0
59 < 404

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Client response received on thread main
1 < 404

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Sending client request on thread main
1 > POST http://127.0.0.1:25668/rest/databases
1 > Content-Type: application/json

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 60 * Server has received a request on thread Tajo-REST-5 Server Worker #1
60 > POST http://127.0.0.1:25668/rest/databases
60 > Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
60 > Connection: keep-alive
60 > Content-Length: 40
60 > Content-Type: application/json
60 > Host: 127.0.0.1:25668
60 > User-Agent: Jersey/2.6 (HttpUrlConnection 1.7.0_25)

2015-07-30 02:22:12,861 INFO: org.apache.tajo.catalog.CatalogServer 
(createDatabase(393)) - database "TestDatabasesResource" is created
Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 60 * Server responded with a response on thread Tajo-REST-5 Server Worker 
#1
60 < 201
60 < Location: http://127.0.0.1:25668/rest/databases/TestDatabasesResource

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Client response received on thread main
1 < 201
1 < Location: http://127.0.0.1:25668/rest/databases/TestDatabasesResource

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 2 * Sending client request on thread main
2 > GET http://127.0.0.1:25668/rest/databases

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 61 * Server has received a request on thread Tajo-REST-5 Server Worker #0
61 > GET http://127.0.0.1:25668/rest/databases
61 > Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
61 > Connection: keep-alive
61 > Content-Length: 0
61 > Host: 127.0.0.1:25668
61 > User-Agent: Jersey/2.6 (HttpUrlConnection 1.7.0_25)

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 61 * Server responded with a response on thread Tajo-REST-5 Server Worker 
#0
61 < 200
61 < Content-Type: application/json

Jul 30, 2015 2:22:12 AM org.glassfish.jersey.filter.LoggingFilter log
INFO: 2 * Client response received on thread main
2 < 200
2 < Content-Type: application/json

2015-07-30 02:22:12,866 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 4a77969b-d41a-42fd-9476-872625ce1ab3 is removed.
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.188 sec - in org.apache.tajo.ws.rs.resources.TestDatabasesResource
…RecordReader initialized will read a total of 2 records.
Jul 30, 2015 2:14:20 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 2:14:20 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 2
Jul 30, 2015 2:14:38 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 30, 2015 2:14:38 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:38 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:38 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 30, 2015 2:14:38 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:38 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 26
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:39 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for 
[l_shipdate_function] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
3 ms. row count = 1
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 2:14:40 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 1 records.
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 2:14:40 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
1 ms. row count = 1
Jul 30, 2015 2:14:43 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore 
to file. allocated memory: 212
Jul 30, 2015 2:14:43 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for 
[l_orderkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings: 
[BIT_PACKED, PLAIN_DICTIONARY, RLE], dic { 3 entries, 12B raw, 3B comp}
Jul 30, 2015 2:14:43 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:43 AM INFO: 
org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for 
[l_shipdate_function] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: 
[BIT_PACKED, RLE, PLAIN]
Jul 30, 2015 2:14:43 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:43 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
reading another 1 footers
Jul 30, 2015 2:14:43 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: 
Initiating action with parallelism: 5
Jul 30, 2015 2:14:43 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized 
will read a total of 5 records.
Jul 30, 2015 2:14:43 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next 
block
Jul 30, 2015 2:14:43 AM INFO: 
org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 
0 ms. row count = 5
2015-07-30 02:22:12,876 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 9ccfb47b-4690-44de-aff0-602abdb6b530 is removed.
2015-07-30 02:22:12,876 INFO: org.apache.tajo.master.TajoMaster (run(567)) - 
============================================
2015-07-30 02:22:12,876 INFO: org.apache.tajo.master.TajoMaster (run(568)) - 
TajoMaster received SIGINT Signal
2015-07-30 02:22:12,876 INFO: org.apache.tajo.master.TajoMaster (run(569)) - 
============================================
2015-07-30 02:22:12,877 INFO: org.apache.tajo.worker.TajoWorker (run(565)) - 
============================================
2015-07-30 02:22:12,877 INFO: org.apache.tajo.worker.TajoWorker (run(566)) - 
TajoWorker received SIGINT Signal
2015-07-30 02:22:12,877 INFO: org.apache.tajo.worker.TajoWorker (run(567)) - 
============================================
2015-07-30 02:22:12,880 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_asf904.gq1.ygridcore.net_25669 stopped.
2015-07-30 02:22:12,880 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-07-30 02:22:12,881 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(serviceStop(111)) - NodeStatusUpdater stopped.
2015-07-30 02:22:12,881 INFO: org.apache.tajo.worker.NodeStatusUpdater 
(run(262)) - Heartbeat Thread stopped.
2015-07-30 02:22:12,881 INFO: org.apache.tajo.session.SessionManager 
(removeSession(86)) - Session 876a7faf-e823-4718-b54a-814b4ef5d0e6 is removed.
2015-07-30 02:22:12,881 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (Tajo-REST) listened on 0:0:0:0:0:0:0:0:25668) shutdown
2015-07-30 02:22:12,881 INFO: org.apache.tajo.ws.rs.TajoRestService 
(serviceStop(129)) - Tajo Rest Service stopped.
2015-07-30 02:22:12,882 INFO: org.apache.tajo.catalog.CatalogServer 
(serviceStop(191)) - Catalog Server (127.0.0.1:25665) shutdown
2015-07-30 02:22:12,882 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (CatalogProtocol) listened on 127.0.0.1:25665) shutdown
2015-07-30 02:22:12,883 INFO: org.apache.tajo.util.history.HistoryWriter 
(run(268)) - HistoryWriter_127.0.0.1_25667 stopped.
2015-07-30 02:22:12,885 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:25671) 
shutdown
2015-07-30 02:22:12,886 INFO: 
org.apache.tajo.querymaster.QueryMasterManagerService (serviceStop(106)) - 
QueryMasterManagerService stopped
2015-07-30 02:22:12,886 INFO: org.apache.tajo.querymaster.QueryMaster 
(run(417)) - QueryMaster heartbeat thread stopped
2015-07-30 02:22:12,889 INFO: org.apache.tajo.querymaster.QueryMaster 
(serviceStop(168)) - QueryMaster stopped
2015-07-30 02:22:12,889 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(99)) - TajoWorkerClientService stopping
2015-07-30 02:22:12,890 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryMasterClientProtocol) listened on 
0:0:0:0:0:0:0:0:25670) shutdown
2015-07-30 02:22:12,890 INFO: org.apache.tajo.worker.TajoWorkerClientService 
(stop(103)) - TajoWorkerClientService stopped
2015-07-30 02:22:12,890 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:25669) 
shutdown
2015-07-30 02:22:12,890 INFO: BlockStateChange (logAddStoredBlock(2473)) - 
BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:49483 is added to 
blk_1073741834_1010{blockUCState=COMMITTED, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-b1976170-27fa-4b99-bdce-ddd2a235cf70:NORMAL:127.0.0.1:49483|RBW]]}
 size 3013017
2015-07-30 02:22:12,890 INFO: org.apache.tajo.worker.TajoWorkerManagerService 
(serviceStop(93)) - TajoWorkerManagerService stopped
2015-07-30 02:22:12,891 INFO: org.apache.tajo.worker.TajoWorker 
(serviceStop(375)) - TajoWorker main thread exiting
2015-07-30 02:22:13,291 INFO: org.apache.tajo.util.history.HistoryCleaner 
(run(136)) - History cleaner stopped
2015-07-30 02:22:13,292 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (QueryCoordinatorProtocol) listened on 127.0.0.1:25667) 
shutdown
2015-07-30 02:22:13,293 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoMasterClientProtocol) listened on 127.0.0.1:25666) 
shutdown
2015-07-30 02:22:13,296 INFO: org.apache.tajo.rpc.NettyServerBase 
(shutdown(173)) - Rpc (TajoResourceTrackerProtocol) listened on 
127.0.0.1:25664) shutdown
2015-07-30 02:22:13,296 INFO: org.apache.tajo.master.TajoMaster 
(serviceStop(401)) - Tajo Master main thread exiting

Results :

Tests in error: 
  TestOuterJoinQuery.testLeftOuterWithEmptyTable:347->QueryTestCaseBase.runSimpleTests:527 » SQL

Tests run: 1561, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  1.857 s]
[INFO] Tajo Project POM .................................. SUCCESS [  2.167 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  2.813 s]
[INFO] Tajo Common ....................................... SUCCESS [ 32.622 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  1.696 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  5.680 s]
[INFO] Tajo Plan ......................................... SUCCESS [  5.479 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  0.467 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [ 49.332 s]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.543 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [ 11.208 s]
[INFO] Tajo Storage Common ............................... SUCCESS [ 10.752 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [ 56.671 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [  4.603 s]
[INFO] Tajo PullServer ................................... SUCCESS [  1.238 s]
[INFO] Tajo Client ....................................... SUCCESS [  2.110 s]
[INFO] Tajo CLI tools .................................... SUCCESS [  0.958 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [  3.458 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [  1.357 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [  3.594 s]
[INFO] Tajo Metrics ...................................... SUCCESS [  1.082 s]
[INFO] Tajo Core ......................................... FAILURE [19:39 min]
[INFO] Tajo RPC .......................................... SKIPPED
[INFO] Tajo Catalog Drivers Hive ......................... SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Storage ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 23:01 min
[INFO] Finished at: 2015-07-30T02:22:13+00:00
[INFO] Final Memory: 67M/492M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project tajo-core: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Tajo-master-build/ws/tajo-core/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core
Build step 'Execute shell' marked build as failure
Updating TAJO-1713
