See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1092/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 17101 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:19 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  04:10 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.094 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:15 h
[INFO] Finished at: 2016-04-12T23:01:50+00:00
[INFO] Final Memory: 73M/668M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter6253569772000488704.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire4009938385999161559tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4586713484252041808232tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
14 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDFSPermission.testPermissionMessageOnNonDirAncestor

Error Message:
No valid image files found

Stack Trace:
java.io.FileNotFoundException: No valid image files found
        at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.getLatestImages(FSImageTransactionalStorageInspector.java:158)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:619)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:960)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:659)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:637)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:699)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:900)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:879)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1596)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
        at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.hdfs.TestDFSPermission.setUp(TestDFSPermission.java:118)


FAILED:  org.apache.hadoop.hdfs.TestDFSPermission.testPermissionChecking

Error Message:
Could not rename temporary file /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/seen_txid.tmp to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/seen_txid due to failure in native rename. ENOENT: No such file or directory

Stack Trace:
java.io.IOException: Could not rename temporary file /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/seen_txid.tmp to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2/current/seen_txid due to failure in native rename. ENOENT: No such file or directory
        at org.apache.hadoop.hdfs.util.AtomicFileOutputStream.close(AtomicFileOutputStream.java:86)
        at org.apache.hadoop.hdfs.util.PersistentLongFile.writeFile(PersistentLongFile.java:82)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.writeTransactionIdFile(NNStorage.java:447)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:575)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:594)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:157)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1104)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:386)
        at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:228)
        at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1005)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.hdfs.TestDFSPermission.setUp(TestDFSPermission.java:118)


FAILED:  org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050.test6

Error Message:
failed, dn=1, length=1179648java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-1495764293-67.195.81.149-1460500585888:blk_-9223372036854775792_1002 received exception java.io.IOException: BlockId -9223372036854775792 is not valid., for OP_READ_BLOCK, self=/127.0.0.1:59964, remote=/127.0.0.1:56381, for file /127.0.0.1:56381:-9223372036854775792, for pool BP-1495764293-67.195.81.149-1460500585888 block -9223372036854775792_1002
 at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:121)
 at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:443)
 at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:411)
 at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:846)
 at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:735)
 at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:377)
 at org.apache.hadoop.hdfs.BlockReaderTestUtil.getBlockReader(BlockReaderTestUtil.java:216)
 at org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:440)
 at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:436)
 at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:322)
 at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:527)
 at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:536)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)


Stack Trace:
java.lang.AssertionError: failed, dn=1, length=1179648java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-1495764293-67.195.81.149-1460500585888:blk_-9223372036854775792_1002 received exception java.io.IOException: BlockId -9223372036854775792 is not valid., for OP_READ_BLOCK, self=/127.0.0.1:59964, remote=/127.0.0.1:56381, for file /127.0.0.1:56381:-9223372036854775792, for pool BP-1495764293-67.195.81.149-1460500585888 block -9223372036854775792_1002
        at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:121)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:443)
        at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:411)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:846)
        at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:735)
        at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:377)
        at org.apache.hadoop.hdfs.BlockReaderTestUtil.getBlockReader(BlockReaderTestUtil.java:216)
        at org.apache.hadoop.hdfs.StripedFileTestUtil.checkData(StripedFileTestUtil.java:440)
        at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:436)
        at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:322)
        at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:527)
        at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:536)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

        at org.junit.Assert.fail(Assert.java:88)
        at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:327)
        at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:527)
        at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:536)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)


FAILED:  org.apache.hadoop.hdfs.TestDataTransferProtocol.testDataTransferProtocol

Error Message:
Directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2 is in an inconsistent state: namespaceID is incompatible with others.

Stack Trace:
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-2 is in an inconsistent state: namespaceID is incompatible with others.
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setNamespaceID(StorageInfo.java:213)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:156)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:635)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:664)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:335)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:211)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:960)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:659)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:637)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:699)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:900)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:879)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1596)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
        at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.hdfs.TestDataTransferProtocol.testDataTransferProtocol(TestDataTransferProtocol.java:343)


FAILED:  org.apache.hadoop.hdfs.TestFileCreation.testConcurrentFileCreation

Error Message:
Missing directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-1

Stack Trace:
java.io.IOException: Missing directory /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name-0-1
        at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:162)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1029)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:764)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:715)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:900)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:879)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1596)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
        at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.hdfs.TestFileCreation.testConcurrentFileCreation(TestFileCreation.java:892)


FAILED:  org.apache.hadoop.hdfs.TestFileCreation.testFileCreationWithOverwrite

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
        at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1985)
        at org.apache.hadoop.hdfs.TestFileCreation.testFileCreationWithOverwrite(TestFileCreation.java:1271)


FAILED:  org.apache.hadoop.hdfs.TestMiniDFSCluster.testClusterNoStorageTypeSetForDatanodes

Error Message:
No valid image files found

Stack Trace:
java.io.FileNotFoundException: No valid image files found
        at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.getLatestImages(FSImageTransactionalStorageInspector.java:158)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:619)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:960)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:659)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:637)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:699)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:900)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:879)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1596)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
        at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
        at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.hdfs.TestMiniDFSCluster.testClusterNoStorageTypeSetForDatanodes(TestMiniDFSCluster.java:181)


FAILED:  org.apache.hadoop.hdfs.TestModTime.testModTimePersistsAfterRestart

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
        at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:848)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
        at org.apache.hadoop.hdfs.TestModTime.testModTimePersistsAfterRestart(TestModTime.java:193)


FAILED:  org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
        at org.junit.Assert.fail(Assert.java:86)
        at org.junit.Assert.assertTrue(Assert.java:41)
        at org.junit.Assert.assertTrue(Assert.java:52)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testCleanShutdownOfVolume(TestFsDatasetImpl.java:683)


FAILED:  org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithRename

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
        at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2024)
        at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1994)
        at org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithRename(TestOpenFilesWithSnapshot.java:210)


FAILED:  org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks

Error Message:
test timed out after 300000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 300000 milliseconds
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hdfs.StripedFileTestUtil.readAll(StripedFileTestUtil.java:91)
        at org.apache.hadoop.hdfs.StripedFileTestUtil.assertSeekAndRead(StripedFileTestUtil.java:218)
        at org.apache.hadoop.hdfs.StripedFileTestUtil.verifySeek(StripedFileTestUtil.java:176)
        at org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.readFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:115)
        at org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks.testReadFileWithMissingBlocks(TestReadStripedFileWithMissingBlocks.java:78)


FAILED:  org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testSmallFileLocalRead

Error Message:
Requested more bytes than destination buffer size

Stack Trace:
java.lang.IndexOutOfBoundsException: Requested more bytes than destination buffer size
        at org.apache.hadoop.fs.FSInputStream.validatePositionedReadArgs(FSInputStream.java:107)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:975)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.checkFileContent(TestShortCircuitLocalRead.java:157)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitReadImpl(TestShortCircuitLocalRead.java:286)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitRead(TestShortCircuitLocalRead.java:241)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testSmallFileLocalRead(TestShortCircuitLocalRead.java:308)


FAILED:  org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testLocalReadLegacy

Error Message:
Requested more bytes than destination buffer size

Stack Trace:
java.lang.IndexOutOfBoundsException: Requested more bytes than destination buffer size
        at org.apache.hadoop.fs.FSInputStream.validatePositionedReadArgs(FSInputStream.java:107)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:975)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.checkFileContent(TestShortCircuitLocalRead.java:157)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitReadImpl(TestShortCircuitLocalRead.java:286)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitReadLegacy(TestShortCircuitLocalRead.java:235)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testLocalReadLegacy(TestShortCircuitLocalRead.java:316)


FAILED:  org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testLocalReadFallback

Error Message:
Requested more bytes than destination buffer size

Stack Trace:
java.lang.IndexOutOfBoundsException: Requested more bytes than destination buffer size
        at org.apache.hadoop.fs.FSInputStream.validatePositionedReadArgs(FSInputStream.java:107)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:975)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.checkFileContent(TestShortCircuitLocalRead.java:157)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitReadImpl(TestShortCircuitLocalRead.java:286)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.doTestShortCircuitReadLegacy(TestShortCircuitLocalRead.java:235)
        at org.apache.hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead.testLocalReadFallback(TestShortCircuitLocalRead.java:327)

