tomscut created HDFS-16109:
------------------------------
Summary: Fix some flaky unit tests since they often time out
Key: HDFS-16109
URL: https://issues.apache.org/jira/browse/HDFS-16109
Project: Hadoop HDFS
Issue Type: Wish
Reporter: tomscut
Assignee: tomscut
Increase the timeouts for TestBootstrapStandby, TestFsVolumeList and
TestDecommissionWithBackoffMonitor since they often time out.
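For reference, a minimal sketch of the kind of change intended, assuming the affected tests set their limits through JUnit 4's @Test(timeout = ...) attribute or a Timeout rule (the FailOnTimeout frames in the traces below point to that mechanism). The class name and the raised values here are hypothetical, not the final numbers for the patch:
{code:java}
// Sketch only: illustrates raising a JUnit 4 test timeout.
// Class name and timeout values are hypothetical.
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TimeoutSketch {

  // Option 1: a class-wide rule, useful when several tests in the
  // same class run close to their limits.
  @Rule
  public Timeout classTimeout = Timeout.millis(120000); // illustrative value

  // Option 2: raise the per-test limit on the annotation itself,
  // e.g. testRateThrottling currently trips its 30000 ms limit.
  @Test(timeout = 60000) // illustrative value, not the final one
  public void testRateThrottling() throws Exception {
    // existing test body unchanged
  }
}
{code}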
TestBootstrapStandby:
{code:java}
[ERROR] Tests run: 8, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 159.474 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
[ERROR] testRateThrottling(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby)  Time elapsed: 31.262 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 30000 milliseconds
    at java.io.RandomAccessFile.writeBytes(Native Method)
    at java.io.RandomAccessFile.write(RandomAccessFile.java:512)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:947)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:910)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:699)
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:642)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:387)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:243)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1224)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:673)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:760)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
    at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2261)
    at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2231)
    at org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby.testRateThrottling(TestBootstrapStandby.java:297)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
{code}
TestFsVolumeList:
{code:java}
[ERROR] Tests run: 12, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 190.294 s <<< FAILURE! - in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList
[ERROR] testAddRplicaProcessorForAddingReplicaInMap(org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList)  Time elapsed: 60.028 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 60000 milliseconds
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:429)
    at java.util.concurrent.FutureTask.get(FutureTask.java:191)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList.testAddRplicaProcessorForAddingReplicaInMap(TestFsVolumeList.java:395)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
{code}
TestDecommissionWithBackoffMonitor:
{code:java}
[ERROR] Tests run: 28, Failures: 0, Errors: 2, Skipped: 1, Time elapsed: 676.729 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestDecommissionWithBackoffMonitor
[ERROR] testDecommissionWithCloseFileAndListOpenFiles(org.apache.hadoop.hdfs.TestDecommissionWithBackoffMonitor)  Time elapsed: 180.686 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 180000 milliseconds
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.AdminStatesBaseTest.waitNodeState(AdminStatesBaseTest.java:346)
    at org.apache.hadoop.hdfs.AdminStatesBaseTest.waitNodeState(AdminStatesBaseTest.java:333)
    at org.apache.hadoop.hdfs.TestDecommission.testDecommissionWithCloseFileAndListOpenFiles(TestDecommission.java:912)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:748)
{code}