See <https://builds.apache.org/job/Tajo-master-build/962/changes>

Changes:

[hyunsik] TAJO-1941: PermGen elimination in JDK 8.

------------------------------------------
[...truncated 174921 lines...]
  Blocked count: 2
  Waited count: 0
  Blocked on org.apache.hadoop.hbase.ChoreService@2927f90c
  Blocked by 11495 (asf906.gq1.ygridcore.net,53180,1446576820892_ChoreService_1)
  Stack:
    org.apache.hadoop.hbase.ChoreService.onChoreMissedStartTime(ChoreService.java:277)
    org.apache.hadoop.hbase.ScheduledChore.onChoreMissedStartTime(ScheduledChore.java:212)
2015-11-03 18:59:33,746 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 29570ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2015-11-03 18:59:47,944 WARN: 
org.apache.zookeeper.server.persistence.FileTxnLog (commit(334)) - fsync-ing 
the write ahead log in SyncThread:0 took 12216ms which will adversely effect 
operation latency. See the ZooKeeper troubleshooting guide
2015-11-03 18:59:35,726 INFO: org.apache.zookeeper.server.PrepRequestProcessor 
(pRequest2Txn(494)) - Processed session termination for sessionid: 
0x150ceb284b60003
    org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:174)
2015-11-03 19:00:15,475 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 18125ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "org.apache.tajo.util.JvmPauseMonitor$Monitor@213835b6"
    java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
2015-11-03 19:00:33,291 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 15343ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
    java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)
2015-11-03 19:00:54,239 INFO: org.apache.zookeeper.server.NIOServerCnxn 
(closeSock(1007)) - Closed socket connection for client /0:0:0:0:0:0:0:1:37332 
which had sessionid 0x150ceb284b60002
Thread 11510 (sync.4):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@ca3f715
  Stack:
    sun.misc.Unsafe.park(Native Method)
2015-11-03 19:00:58,294 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 13407ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2015-11-03 19:01:01,170 INFO: org.apache.zookeeper.server.PrepRequestProcessor 
(pRequest2Txn(494)) - Processed session termination for sessionid: 
0x150ceb284b60004
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "org.apache.hadoop.util.JvmPauseMonitor$Monitor@64e92d61"
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
2015-11-03 19:01:26,899 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 20220ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2015-11-03 19:01:50,226 INFO: org.apache.zookeeper.server.NIOServerCnxn 
(closeSock(1007)) - Closed socket connection for client /0:0:0:0:0:0:0:1:37333 
which had sessionid 0x150ceb284b60003
2015-11-03 19:01:46,949 INFO: org.apache.zookeeper.server.PrepRequestProcessor 
(pRequest2Txn(494)) - Processed session termination for sessionid: 
0x150ceb284b60001
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1323)
    java.lang.Thread.run(Thread.java:745)
Thread 11509 (sync.3):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@2287acc3
  Stack:
    sun.misc.Unsafe.park(Native Method)
2015-11-03 19:02:23,743 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 33516ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1323)
    java.lang.Thread.run(Thread.java:745)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "org.apache.hadoop.util.JvmPauseMonitor$Monitor@42435b98"
Thread 11508 (sync.2):
  State: WAITING
  Blocked count: 0
2015-11-03 19:03:00,344 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 18667ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@724b23e7
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1323)
    java.lang.Thread.run(Thread.java:745)
2015-11-03 19:03:37,052 INFO: org.apache.zookeeper.server.NIOServerCnxn 
(closeSock(1007)) - Closed socket connection for client /127.0.0.1:59438 which 
had sessionid 0x150ceb284b60004
2015-11-03 19:03:46,109 INFO: org.apache.zookeeper.server.PrepRequestProcessor 
(pRequest2Txn(494)) - Processed session termination for sessionid: 
0x150ceb284b60000
2015-11-03 19:03:56,624 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 19571ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "asf906:59127.activeMasterManager-SendThread(localhost:56817)"
Thread 11507 (sync.1):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@64b24b3b
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "main-SendThread(localhost:56817)"
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
2015-11-03 19:04:36,842 WARN: org.apache.hadoop.security.Groups 
(fetchGroupList(244)) - Potential performance problem: getGroups(user=jenkins) 
took 20788 milliseconds.
    org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1323)
    java.lang.Thread.run(Thread.java:745)
Thread 11506 (sync.0):
2015-11-03 19:04:51,742 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 19698ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "RS:0;asf906:53180-SendThread(localhost:56817)"
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@773a13e7
  Stack:
    sun.misc.Unsafe.park(Native Method)
2015-11-03 19:06:09,599 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 24261ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
2015-11-03 19:06:20,601 INFO: org.apache.zookeeper.server.NIOServerCnxn 
(closeSock(1007)) - Closed socket connection for client /0:0:0:0:0:0:0:1:37331 
which had sessionid 0x150ceb284b60001
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.regionserver.wal.FSHLog$SyncRunner.run(FSHLog.java:1323)
    java.lang.Thread.run(Thread.java:745)
2015-11-03 19:06:31,483 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 13618ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2015-11-03 19:06:41,516 INFO: org.apache.zookeeper.server.NIOServerCnxn 
(closeSock(1007)) - Closed socket connection for client /127.0.0.1:59324 which 
had sessionid 0x150ceb284b60000
Thread 11511 (RS_OPEN_META-asf906:53180-0.append-pool6-t1):
  State: WAITING
  Blocked count: 0
  Waited count: 2
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@68444877
  Stack:
    sun.misc.Unsafe.park(Native Method)
2015-11-03 19:07:13,582 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 17314ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "M:0;asf906:59127-SendThread(localhost:56817)"
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    com.lmax.disruptor.BlockingWaitStrategy.waitFor(BlockingWaitStrategy.java:45)
    com.lmax.disruptor.ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:55)
    com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:123)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)
Thread 11505 (ResponseProcessor for block BP-331901906-67.195.81.150-1446576335956:blk_1073743442_2618):
  State: BLOCKED
  Blocked count: 2
  Waited count: 0
  Blocked on java.lang.Object@43bb1a83
  Blocked by 222 (NodeStatusUpdater)
  Stack:
    java.lang.ClassLoader.loadClass(ClassLoader.java:404)
    sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
2015-11-03 19:08:09,479 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 22044ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
    java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    org.apache.log4j.spi.LoggingEvent.<init>(LoggingEvent.java:165)
    org.apache.log4j.Category.forcedLog(Category.java:391)
    org.apache.log4j.Category.log(Category.java:856)
    org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:575)
    org.apache.zookeeper.server.NIOServerCnxnFactory$1.uncaughtException(NIOServerCnxnFactory.java:44)
2015-11-03 19:08:57,079 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 30510ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2015-11-03 19:09:15,593 ERROR: org.apache.hadoop.hbase.master.HMaster 
(run(222)) - Master failed to complete initialization after 900000ms. Please 
consider submitting a bug report including a thread dump of this process.
    java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1057)
2015-11-03 19:09:24,644 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 22487ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
    java.lang.ThreadGroup.uncaughtException(ThreadGroup.java:1052)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "NodeStatusUpdater"
    java.lang.Thread.dispatchUncaughtException(Thread.java:1952)
2015-11-03 19:09:50,855 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 20423ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler 
in thread "NIOServerCxn.Factory:0.0.0.0/0.0.0.0:56817"
Thread 11503 (DataXceiver for client DFSClient_NONMAPREDUCE_1125027474_1 at /127.0.0.1:34739 [Receiving block BP-331901906-67.195.81.150-1446576335956:blk_1073743442_2618]):
  State: BLOCKED
  Blocked count: 3
  Waited count: 1
  Blocked on java.lang.Object@43bb1a83
  Blocked by 11243 (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:56817)
  Stack:
    java.lang.ClassLoader.loadClass(ClassLoader.java:404)
    sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    org.apache.log4j.spi.LoggingEvent.<init>(LoggingEvent.java:165)
    org.apache.log4j.Category.forcedLog(Category.java:391)
    org.apache.log4j.Category.log(Category.java:856)
    org.apache.commons.logging.impl.Log4JLogger.error(Log4JLogger.java:257)
    org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:278)
    java.lang.Thread.run(Thread.java:745)
Thread 11502 (DataStreamer for file /user/jenkins/hbase/WALs/asf906.gq1.ygridcore.net,53180,1446576820892/asf906.gq1.ygridcore.net%2C53180%2C1446576820892..meta.1446576824122.meta block BP-331901906-67.195.81.150-1446576335956:blk_1073743442_2618):
  State: WAITING
  Blocked count: 4
  Waited count: 9
  Waiting on org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor@35266a5c
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Thread.join(Thread.java:1245)
2015-11-03 19:10:57,884 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 22267ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
    java.lang.Thread.join(Thread.java:1319)
    org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:390)
2015-11-03 19:11:33,172 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 23892ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
Thread 11500 (RS_OPEN_META-asf906:53180-0-MetaLogRoller):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 51
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.regionserver.LogRoller.run(LogRoller.java:116)
2015-11-03 19:11:59,038 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 14436ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
    java.lang.Thread.run(Thread.java:745)
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ResponseProcessor for block BP-331901906-67.195.81.150-1446576335956:blk_1073741834_1010"
2015-11-03 19:12:00,284 WARN: org.apache.hadoop.security.Groups 
(fetchGroupList(244)) - Potential performance problem: getGroups(user=jenkins) 
took 11126 milliseconds.
2015-11-03 19:12:14,685 WARN: org.apache.hadoop.hbase.util.Sleeper (sleep(97)) 
- We slept 15647ms instead of 3000ms, this is likely due to a long garbage 
collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ResponseProcessor for block BP-331901906-67.195.81.150-1446576335956:blk_1073743441_2617"

Results :

Tests run: 559, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  3.344 s]
[INFO] Tajo Project POM .................................. SUCCESS [  4.154 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  5.182 s]
[INFO] Tajo Common ....................................... SUCCESS [ 38.555 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  3.537 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  6.652 s]
[INFO] Tajo Plan ......................................... SUCCESS [  9.249 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  1.623 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [ 51.327 s]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.738 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [ 15.799 s]
[INFO] Tajo Storage Common ............................... SUCCESS [  4.041 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [01:06 min]
[INFO] Tajo PullServer ................................... SUCCESS [  1.448 s]
[INFO] Tajo Client ....................................... SUCCESS [  3.462 s]
[INFO] Tajo CLI tools .................................... SUCCESS [  2.569 s]
[INFO] Tajo SQL Parser ................................... SUCCESS [  6.019 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [  2.528 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [  5.734 s]
[INFO] Tajo Metrics ...................................... SUCCESS [  1.741 s]
[INFO] Tajo Core ......................................... SUCCESS [ 11.268 s]
[INFO] Tajo RPC .......................................... SUCCESS [  0.994 s]
[INFO] Tajo Catalog Drivers Hive ......................... SUCCESS [ 14.213 s]
[INFO] Tajo Catalog Drivers .............................. SUCCESS [  0.139 s]
[INFO] Tajo Catalog ...................................... SUCCESS [  1.037 s]
[INFO] Tajo Client Example ............................... SUCCESS [  1.193 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [  5.582 s]
[INFO] Tajo Cluster Tests ................................ SUCCESS [  3.544 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [ 45.212 s]
[INFO] Tajo JDBC storage common .......................... SUCCESS [  0.914 s]
[INFO] Tajo PostgreSQL JDBC storage ...................... SUCCESS [  1.753 s]
[INFO] Tajo Storage ...................................... SUCCESS [  1.085 s]
[INFO] Tajo Distribution ................................. SUCCESS [  7.232 s]
[INFO] Tajo Core Tests ................................... FAILURE [27:01 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 32:31 min
[INFO] Finished at: 2015-11-03T19:12:22+00:00
[INFO] Final Memory: 176M/2505M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project tajo-core-tests: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test failed: The forked VM 
terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd <https://builds.apache.org/job/Tajo-master-build/ws/tajo-core-tests> && /home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45/jre/bin/java -Xms512m -Xmx1024m -XX:MaxMetaspaceSize=152m -Dfile.encoding=UTF-8 -Dderby.storage.pageSize=1024 -Dderby.stream.error.file=/dev/null -jar <https://builds.apache.org/job/Tajo-master-build/ws/tajo-core-tests/target/surefire/surefirebooter988559700895037715.jar> <https://builds.apache.org/job/Tajo-master-build/ws/tajo-core-tests/target/surefire/surefire816633455731764012tmp> <https://builds.apache.org/job/Tajo-master-build/ws/tajo-core-tests/target/surefire/surefire_141051122314741810101tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core-tests
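The forked VM died amid java.lang.OutOfMemoryError lines while running with -Xmx1024m and -XX:MaxMetaspaceSize=152m (visible in the failing command above). One conventional mitigation, offered here only as an assumption about a possible fix and not something this build attempted, is to raise the fork's memory limits via the surefire argLine:

```xml
<!-- Hypothetical pom.xml fragment: raises the surefire fork's heap and
     Metaspace caps above the -Xmx1024m / -XX:MaxMetaspaceSize=152m used
     by the failing command. The values shown are illustrative only. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.17</version>
  <configuration>
    <argLine>-Xms512m -Xmx2048m -XX:MaxMetaspaceSize=512m</argLine>
  </configuration>
</plugin>
```

Relevant here because TAJO-1941 (the change under test) concerns PermGen elimination in JDK 8, where class metadata moved from PermGen to Metaspace.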
Build step 'Execute shell' marked build as failure
Updating TAJO-1941
