See <https://builds.apache.org/job/HBase-1.1-JDK7/1802/changes>

Changes:

[zhangduo] HBASE-16870 Add the metrics of replication sources which were

------------------------------------------
[...truncated 2588 lines...]
        at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4508)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4438)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.disableTable(HBaseAdmin.java:1326)
        at 
org.apache.hadoop.hbase.HBaseTestingUtility.deleteTable(HBaseTestingUtility.java:1877)
        at 
org.apache.hadoop.hbase.HBaseTestingUtility.deleteTableIfAny(HBaseTestingUtility.java:1890)
        at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationIgnoresDisabledTables(TestRegionReplicaReplicationEndpoint.java:348)
        at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationIgnoresDroppedTables(TestRegionReplicaReplicationEndpoint.java:335)

testRegionReplicaReplicationPeerIsCreatedForModifyTable(org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint)
  Time elapsed: 4.752 sec  <<< ERROR!
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=5, exceptions:
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:02 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at 
org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.convertResult(HBaseAdmin.java:4546)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4504)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4438)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:672)
        at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:602)
        at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationPeerIsCreatedForModifyTable(TestRegionReplicaReplicationEndpoint.java:151)
Caused by: org.apache.hadoop.ipc.RemoteException: Failed after attempts=5, 
exceptions:
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:00 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167
Sat Oct 22 16:49:02 UTC 2016, RpcRetryingCaller{globalStartTime=1477154940300, 
pause=100, retries=5}, org.apache.hadoop.hbase.ipc.FailedServerException: This 
server is in the failed servers list: priapus.apache.org/67.195.81.188:44167

        at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:157)
        at 
org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 763.969 sec - 
in 
org.apache.hadoop.hbase.replication.regionserver.TestReplicationWALReaderManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.867 sec - in 
org.apache.hadoop.hbase.io.hfile.TestSeekBeforeWithInlineBlocks
Running org.apache.hadoop.hbase.io.hfile.TestHFileBlock
Running org.apache.hadoop.hbase.TestMetaTableAccessorNoCluster
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.613 sec - in 
org.apache.hadoop.hbase.TestMetaTableAccessorNoCluster
Running org.apache.hadoop.hbase.zookeeper.TestZKMulti
Running org.apache.hadoop.hbase.TestMetaMigrationConvertingToPB
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.88 sec - in 
org.apache.hadoop.hbase.TestMetaMigrationConvertingToPB
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.834 sec - 
in org.apache.hadoop.hbase.zookeeper.TestZKMulti
Running org.apache.hadoop.hbase.zookeeper.TestZooKeeperNodeTracker
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 184.175 sec - 
in org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks
Running org.apache.hadoop.hbase.zookeeper.TestZKTableStateManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.94 sec - in 
org.apache.hadoop.hbase.zookeeper.TestZooKeeperNodeTracker
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.159 sec - in 
org.apache.hadoop.hbase.zookeeper.TestZKTableStateManager
Running org.apache.hadoop.hbase.zookeeper.TestZKLeaderManager
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.304 sec - in 
org.apache.hadoop.hbase.zookeeper.TestZKLeaderManager
Running org.apache.hadoop.hbase.zookeeper.TestRecoverableZooKeeper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.644 sec - in 
org.apache.hadoop.hbase.zookeeper.TestRecoverableZooKeeper
Running org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock
Running org.apache.hadoop.hbase.zookeeper.TestHQuorumPeer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.716 sec - in 
org.apache.hadoop.hbase.zookeeper.TestHQuorumPeer

Results :

Failed tests: 
org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures.testTruncateWithFailover(org.apache.hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures)
  Run 1: 
TestMasterFailoverWithProcedures.testTruncateWithFailover:312->testTruncateWithFailoverAtStep:351
 {ENCODED => e65512b0497ba9a41c8ba4d1ea6f8c7a, NAME => 
'testTruncateWithFailoverAtStep4,,1477149045002.e65512b0497ba9a41c8ba4d1ea6f8c7a.',
 STARTKEY => '', ENDKEY => 'a'} region dir does not exist
  Run 2: 
TestMasterFailoverWithProcedures.testTruncateWithFailover:312->testTruncateWithFailoverAtStep:351
 {ENCODED => 8e35eaaccd7387faf373928361330be9, NAME => 
'testTruncateWithFailoverAtStep4,,1477149156218.8e35eaaccd7387faf373928361330be9.',
 STARTKEY => '', ENDKEY => 'a'} region dir does not exist
  Run 3: 
TestMasterFailoverWithProcedures.testTruncateWithFailover:312->testTruncateWithFailoverAtStep:351
 {ENCODED => 3920b5f7afeb9bd3d5bc58acb8991d46, NAME => 
'testTruncateWithFailoverAtStep4,,1477149189757.3920b5f7afeb9bd3d5bc58acb8991d46.',
 STARTKEY => '', ENDKEY => 'a'} region dir does not exist

Tests in error: 
  TestHRegionPartitioner.beforeClass:37 » OutOfMemory unable to create new 
nativ...
  
TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:184->testRegionCrossingHFileSplit:206->runTest:242->runTest:248->runTest:270
 » TestTimedOut
  
TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowBloom:193->testRegionCrossingHFileSplit:206->runTest:242->runTest:248->runTest:301
 » TestTimedOut
  
TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowColBloom:202->testRegionCrossingHFileSplit:206->runTest:242->runTest:248->runTest:264->Object.wait:503->Object.wait:-2
 » TestTimedOut
  TestLoadIncrementalHFiles.testRegionCrossingRowBloom » Remote 
java.lang.OutOfM...
  
TestLoadIncrementalHFiles.testRegionCrossingRowColBloom:153->runTest:228->runTest:238->runTest:248->runTest:287
 » TableNotFound
  
TestLoadIncrementalHFiles.testSimpleHFileSplit:166->runTest:242->runTest:248->runTest:287
 » TestTimedOut
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint.testRegionCrossingHFileSplit(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint)
  Run 1: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:184->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:238->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut
  Run 2: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:184->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:238->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:287->Object.wait:461->Object.wait:-2
 » TestTimedOut

org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint.testRegionCrossingHFileSplitRowBloom(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint)
  Run 1: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowBloom:193->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut
  Run 2: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowBloom:193->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:281
 » TestTimedOut

org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint.testRegionCrossingHFileSplitRowColBloom(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint)
  Run 1: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowColBloom:202->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut
  Run 2: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowColBloom:202->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:281
 » TestTimedOut

org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint.testSimpleHFileSplit(org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint)
  Run 1: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testSimpleHFileSplit:166->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:281
 » TestTimedOut
  Run 2: 
TestLoadIncrementalHFilesUseSecurityEndPoint>TestLoadIncrementalHFiles.testSimpleHFileSplit:166->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:270
 » TestTimedOut

org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles.testRegionCrossingHFileSplit(org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles)
  Run 1: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:184->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut
  Run 2: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:184->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:238->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut

org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles.testRegionCrossingHFileSplitRowBloom(org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles)
  Run 1: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowBloom:193->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut
  Run 2: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowBloom:193->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut

org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles.testRegionCrossingHFileSplitRowColBloom(org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles)
  Run 1: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowColBloom:202->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut
  Run 2: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testRegionCrossingHFileSplitRowColBloom:202->TestLoadIncrementalHFiles.testRegionCrossingHFileSplit:206->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:301
 » TestTimedOut

org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles.testSimpleHFileSplit(org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles)
  Run 1: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testSimpleHFileSplit:166->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:282->Object.wait:461->Object.wait:-2
 » TestTimedOut
  Run 2: 
TestSecureLoadIncrementalHFiles>TestLoadIncrementalHFiles.testSimpleHFileSplit:166->TestLoadIncrementalHFiles.runTest:242->TestLoadIncrementalHFiles.runTest:248->TestLoadIncrementalHFiles.runTest:270
 » TestTimedOut

  TestTableInputFormatScan2>TestTableInputFormatScanBase.setUpBeforeClass:85 » 
IO
  
TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationIgnoresDroppedTables:335->testRegionReplicaReplicationIgnoresDisabledTables:348
 » TestTimedOut
  
TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationPeerIsCreated:121
 » RetriesExhausted
  
TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationPeerIsCreatedForModifyTable:151
 » RetriesExhausted
  
TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationWith10Replicas:259->testRegionReplicaReplication:181
 » TestTimedOut
  
TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationWith2Replicas:249->testRegionReplicaReplication:178
 » RetriesExhausted
  
TestRegionReplicaReplicationEndpoint.testRegionReplicaWithoutMemstoreReplication:269
 » RetriesExhausted
Flaked tests: 
org.apache.hadoop.hbase.mapreduce.TestCopyTable.testCopyTable(org.apache.hadoop.hbase.mapreduce.TestCopyTable)
  Run 1: TestCopyTable.testCopyTable:123->doCopyTableTest:88 » Runtime 
java.lang.OutOfM...
  Run 2: PASS

org.apache.hadoop.hbase.mapreduce.TestCopyTable.testCopyTableWithBulkload(org.apache.hadoop.hbase.mapreduce.TestCopyTable)
  Run 1: TestCopyTable.testCopyTableWithBulkload:131->doCopyTableTest:81 » 
TableExists ...
  Run 2: PASS


Tests run: 801, Failures: 1, Errors: 22, Skipped: 5, Flakes: 2

Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 159.172 sec - 
in org.apache.hadoop.hbase.io.hfile.TestHFileBlock
[WARNING] Could not delete temp directory 
<https://builds.apache.org/job/HBase-1.1-JDK7/ws/hbase-server/target/surefire> 
because Directory 
<https://builds.apache.org/job/HBase-1.1-JDK7/ws/hbase-server/target/surefire> 
unable to be deleted.
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache HBase ....................................... SUCCESS [  6.391 s]
[INFO] Apache HBase - Checkstyle .......................... SUCCESS [  1.307 s]
[INFO] Apache HBase - Resource Bundle ..................... SUCCESS [  0.543 s]
[INFO] Apache HBase - Annotations ......................... SUCCESS [  2.252 s]
[INFO] Apache HBase - Protocol ............................ SUCCESS [ 30.609 s]
[INFO] Apache HBase - Common .............................. SUCCESS [02:37 min]
[INFO] Apache HBase - Procedure ........................... SUCCESS [04:04 min]
[INFO] Apache HBase - Client .............................. SUCCESS [01:58 min]
[INFO] Apache HBase - Hadoop Compatibility ................ SUCCESS [ 11.388 s]
[INFO] Apache HBase - Hadoop Two Compatibility ............ SUCCESS [ 15.450 s]
[INFO] Apache HBase - Prefix Tree ......................... SUCCESS [ 21.523 s]
[INFO] Apache HBase - Server .............................. FAILURE [  02:39 h]
[INFO] Apache HBase - Testing Util ........................ SKIPPED
[INFO] Apache HBase - Thrift .............................. SKIPPED
[INFO] Apache HBase - Rest ................................ SKIPPED
[INFO] Apache HBase - Shell ............................... SKIPPED
[INFO] Apache HBase - Integration Tests ................... SKIPPED
[INFO] Apache HBase - Examples ............................ SKIPPED
[INFO] Apache HBase - Assembly ............................ SKIPPED
[INFO] Apache HBase - Shaded .............................. SKIPPED
[INFO] Apache HBase - Shaded - Client ..................... SKIPPED
[INFO] Apache HBase - Shaded - Server ..................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:49 h
[INFO] Finished at: 2016-10-22T16:52:42+00:00
[INFO] Final Memory: 259M/458M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test 
(secondPartTestsExecution) on project hbase-server: ExecutionException: 
java.lang.RuntimeException: The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
<https://builds.apache.org/job/HBase-1.1-JDK7/ws/hbase-server> && 
/usr/local/asfpackages/java/jdk1.7.0_80/jre/bin/java -enableassertions 
-XX:MaxDirectMemorySize=1G -Xmx2800m -XX:MaxPermSize=256m 
-Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -jar 
<https://builds.apache.org/job/HBase-1.1-JDK7/ws/hbase-server/target/surefire/surefirebooter6859185112014052902.jar>
 
<https://builds.apache.org/job/HBase-1.1-JDK7/ws/hbase-server/target/surefire/surefire1206215538926192903tmp>
 
<https://builds.apache.org/job/HBase-1.1-JDK7/ws/hbase-server/target/surefire/surefire_8981228221050865757284tmp>
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hbase-server
Build step 'Invoke top-level Maven targets' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script  : # Post-build task script. TODO: Check this in and have all builds reference check-in.
pwd && ls
# NOTE!!!! The below code has been copied and pasted from ./dev-tools/run-test.sh
# Do not change here without syncing there and vice-versa.
ZOMBIE_TESTS_COUNT=`jps -v | grep surefirebooter | grep -e '-Dhbase.test' | wc -l`
if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
 echo "Suspicious java process found - waiting 30s to see if there are just slow to stop"
 sleep 30
 ZOMBIE_TESTS_COUNT=`jps -v | grep surefirebooter | grep -e '-Dhbase.test' | wc -l`
 if [[ $ZOMBIE_TESTS_COUNT != 0 ]] ; then
   echo " {color:red}There appear to be $ZOMBIE_TESTS_COUNT zombie tests{color}, they should have been killed by surefire but survived"
   jps -v | grep surefirebooter | grep -e '-Dhbase.test'
   jps -v | grep surefirebooter | grep -e '-Dhbase.test' | cut -d ' ' -f 1 | xargs -n 1 jstack
   # Exit with error
   exit 1
 else
   echo "We're ok: there is no zombie test, but some tests took some time to stop"
 fi
else
  echo "We're ok: there is no zombie test"
fi
[HBase-1.1-JDK7] $ /bin/bash -xe /tmp/hudson2930274800375554978.sh
+ pwd
<https://builds.apache.org/job/HBase-1.1-JDK7/ws/>
+ ls
bin
CHANGES.txt
conf
dev-support
hbase-annotations
hbase-assembly
hbase-checkstyle
hbase-client
hbase-common
hbase-examples
hbase-hadoop2-compat
hbase-hadoop-compat
hbase-it
hbase-native-client
hbase-prefix-tree
hbase-procedure
hbase-protocol
hbase-resource-bundle
hbase-rest
hbase-server
hbase-shaded
hbase-shell
hbase-testing-util
hbase-thrift
LICENSE.txt
NOTICE.txt
pom.xml
README.txt
src
target
++ grep surefirebooter
++ grep -e -Dhbase.test
++ wc -l
++ jps -v
+ ZOMBIE_TESTS_COUNT=0
+ [[ 0 != 0 ]]
+ echo 'We'\''re ok: there is no zombie test'
We're ok: there is no zombie test
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Archiving artifacts
Recording test results
Updating HBASE-16870