[ https://issues.apache.org/jira/browse/HDFS-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15014454#comment-15014454 ]
Xiao Chen commented on HDFS-9429:
---------------------------------
Another recurrence that looks to have the same cause:
*Error Message*
{noformat}
Unable to check if JNs are ready for formatting. 1 exceptions thrown:
127.0.0.1:47894: End of File Exception between local host is: "172.26.21.235"; destination host is: "localhost":47894; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
{noformat}
*Stacktrace*
{noformat}
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
127.0.0.1:47894: End of File Exception between local host is: "172.26.21.235"; destination host is: "localhost":47894; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1021)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:347)
    at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:974)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:812)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:472)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:431)
    at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
    at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
    at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
    at org.apache.hadoop.hdfs.TestRollingUpgrade.testQuery(TestRollingUpgrade.java:453)
{noformat}
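For reference, the trace shows {{TestRollingUpgrade#testQuery}} failing while {{MiniQJMHACluster$Builder#build}} formats the NameNode against the freshly started mini JournalNodes. Below is a minimal sketch of that construction path; the class name, the empty Configuration, and the cleanup handling are my assumptions, not code copied from the test:
{noformat}
// Sketch only, not the actual TestRollingUpgrade code: class/method names and
// the empty Configuration are illustrative assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;
import org.junit.Test;

public class QjmHaClusterSketch {
  @Test
  public void buildsQjmBackedHaCluster() throws Exception {
    Configuration conf = new Configuration();
    MiniQJMHACluster qjmhaCluster = null;
    try {
      // build() starts a MiniJournalCluster (3 JournalNodes) and then formats the
      // NameNode; the format step polls each JN via QuorumJournalManager#hasSomeData,
      // which is where the EOFException in the trace above surfaces.
      qjmhaCluster = new MiniQJMHACluster.Builder(conf).build();
      MiniDFSCluster dfsCluster = qjmhaCluster.getDfsCluster();
      dfsCluster.waitActive();
      // ... exercise the cluster ...
    } finally {
      if (qjmhaCluster != null) {
        qjmhaCluster.shutdown();
      }
    }
  }
}
{noformat}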
*Standard Output*
{noformat}
2015-11-16 15:21:19,177 INFO qjournal.MiniJournalCluster
(MiniJournalCluster.java:<init>(87)) - Starting MiniJournalCluster with 3
journal nodes
2015-11-16 15:21:19,194 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-11-16 15:21:19,195 INFO hdfs.DFSUtil
(DFSUtil.java:httpServerTemplateForNNAndJN(1697)) - Starting Web-server for
journal at: http://localhost:0
2015-11-16 15:21:19,196 INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2015-11-16 15:21:19,197 INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.journal is not defined
2015-11-16 15:21:19,198 INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(752)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-11-16 15:21:19,198 INFO http.HttpServer2
(HttpServer2.java:addFilter(730)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context journal
2015-11-16 15:21:19,199 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2015-11-16 15:21:19,199 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2015-11-16 15:21:19,200 INFO http.HttpServer2
(HttpServer2.java:openListeners(940)) - Jetty bound to port 54447
2015-11-16 15:21:19,219 INFO mortbay.log (Slf4jLog.java:info(67)) - Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:54447
2015-11-16 15:21:19,220 INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(56)) - Using callQueue class
java.util.concurrent.LinkedBlockingQueue
2015-11-16 15:21:19,220 INFO ipc.Server (Server.java:run(616)) - Starting
Socket Reader #1 for port 49879
2015-11-16 15:21:19,224 INFO ipc.Server (Server.java:run(839)) - IPC Server
Responder: starting
2015-11-16 15:21:19,224 INFO ipc.Server (Server.java:run(686)) - IPC Server
listener on 49879: starting
2015-11-16 15:21:19,239 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-11-16 15:21:19,241 INFO hdfs.DFSUtil
(DFSUtil.java:httpServerTemplateForNNAndJN(1697)) - Starting Web-server for
journal at: http://localhost:0
2015-11-16 15:21:19,242 INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2015-11-16 15:21:19,243 INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.journal is not defined
2015-11-16 15:21:19,243 INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(752)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-11-16 15:21:19,244 INFO http.HttpServer2
(HttpServer2.java:addFilter(730)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context journal
2015-11-16 15:21:19,244 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2015-11-16 15:21:19,244 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2015-11-16 15:21:19,245 INFO http.HttpServer2
(HttpServer2.java:openListeners(940)) - Jetty bound to port 41815
2015-11-16 15:21:19,262 INFO mortbay.log (Slf4jLog.java:info(67)) - Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41815
2015-11-16 15:21:19,262 INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(56)) - Using callQueue class
java.util.concurrent.LinkedBlockingQueue
2015-11-16 15:21:19,263 INFO ipc.Server (Server.java:run(616)) - Starting
Socket Reader #1 for port 47894
2015-11-16 15:21:19,267 INFO ipc.Server (Server.java:run(839)) - IPC Server
Responder: starting
2015-11-16 15:21:19,267 INFO ipc.Server (Server.java:run(686)) - IPC Server
listener on 47894: starting
2015-11-16 15:21:19,288 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-11-16 15:21:19,290 INFO hdfs.DFSUtil
(DFSUtil.java:httpServerTemplateForNNAndJN(1697)) - Starting Web-server for
journal at: http://localhost:0
2015-11-16 15:21:19,291 INFO server.AuthenticationFilter
(AuthenticationFilter.java:constructSecretProvider(282)) - Unable to initialize
FileSignerSecretProvider, falling back to use random secrets.
2015-11-16 15:21:19,291 INFO http.HttpRequestLog
(HttpRequestLog.java:getRequestLog(80)) - Http request log for
http.requests.journal is not defined
2015-11-16 15:21:19,292 INFO http.HttpServer2
(HttpServer2.java:addGlobalFilter(752)) - Added global filter 'safety'
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-11-16 15:21:19,293 INFO http.HttpServer2
(HttpServer2.java:addFilter(730)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context journal
2015-11-16 15:21:19,293 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context static
2015-11-16 15:21:19,293 INFO http.HttpServer2
(HttpServer2.java:addFilter(737)) - Added filter static_user_filter
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to
context logs
2015-11-16 15:21:19,294 INFO http.HttpServer2
(HttpServer2.java:openListeners(940)) - Jetty bound to port 53577
2015-11-16 15:21:19,309 INFO mortbay.log (Slf4jLog.java:info(67)) - Started
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:53577
2015-11-16 15:21:19,310 INFO ipc.CallQueueManager
(CallQueueManager.java:<init>(56)) - Using callQueue class
java.util.concurrent.LinkedBlockingQueue
2015-11-16 15:21:19,311 INFO ipc.Server (Server.java:run(616)) - Starting
Socket Reader #1 for port 40415
2015-11-16 15:21:19,314 INFO ipc.Server (Server.java:run(839)) - IPC Server
Responder: starting
2015-11-16 15:21:19,314 INFO ipc.Server (Server.java:run(686)) - IPC Server
listener on 40415: starting
2015-11-16 15:21:19,327 INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:<init>(447)) - starting cluster: numNameNodes=2,
numDataNodes=0
Formatting using clusterid: testClusterID
2015-11-16 15:21:19,332 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(800)) - No KeyProvider found.
2015-11-16 15:21:19,333 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(812)) - fsLock is fair:true
2015-11-16 15:21:19,333 INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(237)) - dfs.block.invalidate.limit=1000
2015-11-16 15:21:19,334 INFO blockmanagement.DatanodeManager
(DatanodeManager.java:<init>(243)) -
dfs.namenode.datanode.registration.ip-hostname-check=true
2015-11-16 15:21:19,334 INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(71)) -
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-11-16 15:21:19,334 INFO blockmanagement.BlockManager
(InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will
start around 2015 Nov 16 15:21:19
2015-11-16 15:21:19,335 INFO util.GSet
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map
BlocksMap
2015-11-16 15:21:19,335 INFO util.GSet
(LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2015-11-16 15:21:19,335 INFO util.GSet
(LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 3.6 GB = 72.8 MB
2015-11-16 15:21:19,335 INFO util.GSet
(LightWeightGSet.java:computeCapacity(361)) - capacity = 2^23 = 8388608
entries
2015-11-16 15:21:19,368 INFO blockmanagement.BlockManager
(BlockManager.java:createBlockTokenSecretManager(369)) -
dfs.block.access.token.enable=false
2015-11-16 15:21:19,368 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(354)) - defaultReplication = 0
2015-11-16 15:21:19,368 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(355)) - maxReplication = 512
2015-11-16 15:21:19,368 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(356)) - minReplication = 1
2015-11-16 15:21:19,369 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(357)) - maxReplicationStreams = 2
2015-11-16 15:21:19,369 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(358)) - shouldCheckForEnoughRacks = false
2015-11-16 15:21:19,369 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(359)) - replicationRecheckInterval = 3000
2015-11-16 15:21:19,369 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(360)) - encryptDataTransfer = false
2015-11-16 15:21:19,369 INFO blockmanagement.BlockManager
(BlockManager.java:<init>(361)) - maxNumBlocksToLog = 1000
2015-11-16 15:21:19,370 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(837)) - fsOwner = jenkins (auth:SIMPLE)
2015-11-16 15:21:19,370 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(838)) - supergroup = supergroup
2015-11-16 15:21:19,370 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(839)) - isPermissionEnabled = true
2015-11-16 15:21:19,370 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(848)) - Determined nameservice ID: ns1
2015-11-16 15:21:19,371 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(850)) - HA Enabled: true
2015-11-16 15:21:19,371 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(887)) - Append Enabled: true
2015-11-16 15:21:19,371 INFO util.GSet
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map
INodeMap
2015-11-16 15:21:19,371 INFO util.GSet
(LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2015-11-16 15:21:19,372 INFO util.GSet
(LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 3.6 GB = 36.4 MB
2015-11-16 15:21:19,372 INFO util.GSet
(LightWeightGSet.java:computeCapacity(361)) - capacity = 2^22 = 4194304
entries
2015-11-16 15:21:19,388 INFO namenode.NameNode (FSDirectory.java:<init>(238))
- Caching file names occuring more than 10 times
2015-11-16 15:21:19,389 INFO util.GSet
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map
cachedBlocks
2015-11-16 15:21:19,389 INFO util.GSet
(LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2015-11-16 15:21:19,389 INFO util.GSet
(LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 3.6 GB = 9.1 MB
2015-11-16 15:21:19,389 INFO util.GSet
(LightWeightGSet.java:computeCapacity(361)) - capacity = 2^20 = 1048576
entries
2015-11-16 15:21:19,394 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(5776)) - dfs.namenode.safemode.threshold-pct =
0.9990000128746033
2015-11-16 15:21:19,394 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(5777)) - dfs.namenode.safemode.min.datanodes = 0
2015-11-16 15:21:19,394 INFO namenode.FSNamesystem
(FSNamesystem.java:<init>(5778)) - dfs.namenode.safemode.extension = 0
2015-11-16 15:21:19,394 INFO metrics.TopMetrics (TopMetrics.java:logConf(65))
- NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-11-16 15:21:19,395 INFO metrics.TopMetrics (TopMetrics.java:logConf(67))
- NNTop conf: dfs.namenode.top.num.users = 10
2015-11-16 15:21:19,395 INFO metrics.TopMetrics (TopMetrics.java:logConf(69))
- NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-11-16 15:21:19,395 INFO namenode.FSNamesystem
(FSNamesystem.java:initRetryCache(991)) - Retry cache on namenode is enabled
2015-11-16 15:21:19,395 INFO namenode.FSNamesystem
(FSNamesystem.java:initRetryCache(999)) - Retry cache will use 0.03 of total
heap and retry cache entry expiry time is 600000 millis
2015-11-16 15:21:19,395 INFO util.GSet
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map
NameNodeRetryCache
2015-11-16 15:21:19,396 INFO util.GSet
(LightWeightGSet.java:computeCapacity(355)) - VM type = 64-bit
2015-11-16 15:21:19,396 INFO util.GSet
(LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory
3.6 GB = 1.1 MB
2015-11-16 15:21:19,396 INFO util.GSet
(LightWeightGSet.java:computeCapacity(361)) - capacity = 2^17 = 131072
entries
2015-11-16 15:21:19,398 INFO namenode.NNConf (NNConf.java:<init>(62)) - ACLs
enabled? false
2015-11-16 15:21:19,398 INFO namenode.NNConf (NNConf.java:<init>(66)) - XAttrs
enabled? true
2015-11-16 15:21:19,399 INFO namenode.NNConf (NNConf.java:<init>(74)) -
Maximum size of an xattr: 16384
2015-11-16 15:21:19,821 WARN namenode.NameNode (NameNode.java:format(1027)) -
Encountered exception during format:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 1 exceptions thrown:
127.0.0.1:47894: End of File Exception between local host is: "172.26.21.235"; destination host is: "localhost":47894; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:900)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1021)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:347)
    at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:974)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:812)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:472)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:431)
    at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
    at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
    at org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
    at org.apache.hadoop.hdfs.TestRollingUpgrade.testQuery(TestRollingUpgrade.java:453)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
2015-11-16 15:21:19,822 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/ns1
2015-11-16 15:21:19,823 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-0/ns1
does not exist
2015-11-16 15:21:19,823 INFO server.JournalNode
(JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory
/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1
2015-11-16 15:21:19,823 ERROR hdfs.MiniDFSCluster
(MiniDFSCluster.java:initMiniDFSCluster(817)) - IOE creating namenodes.
Permissions dump:
path
'/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data':
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data
permissions: ----
path
'/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs':
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
permissions: drwx
path
'/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data':
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data
permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test':
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test
permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target':
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target
permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs':
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs
permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project':
absolute:/data/jenkins/workspace/hadoop-hdfs-project
permissions: drwx
path '/data/jenkins/workspace':
absolute:/data/jenkins/workspace
permissions: drwx
path '/data/jenkins':
absolute:/data/jenkins
permissions: drwx
path '/data':
absolute:/data
permissions: dr-x
path '/':
absolute:/
permissions: dr-x
2015-11-16 15:21:19,823 INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:shutdown(1713)) - Shutting down the Mini HDFS Cluster
2015-11-16 15:21:19,823 WARN common.Storage (Storage.java:analyzeStorage(477))
- Storage directory
/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1
does not exist
{noformat}
> Tests in TestDFSAdminWithHA intermittently fail with EOFException
> -----------------------------------------------------------------
>
> Key: HDFS-9429
> URL: https://issues.apache.org/jira/browse/HDFS-9429
> Project: Hadoop HDFS
> Issue Type: Test
> Components: HDFS
> Reporter: Xiao Chen
> Assignee: Xiao Chen
>
> I have seen this fail a handful of times for {{testMetaSave}}, but from my
> understanding the failure comes from {{setUpHaCluster}}, so in theory it could
> fail for any case in the class.
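Because {{setUpHaCluster}} builds the QJM-backed HA mini cluster before each test body runs, an EOFException from a JournalNode during that setup aborts whichever test happens to hit it. A hypothetical shape of such a per-test setup (names and structure are assumptions, not the actual TestDFSAdminWithHA code):
{noformat}
// Hypothetical sketch only; not the actual TestDFSAdminWithHA code.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster;
import org.junit.After;

public class HaAdminTestSketch {
  private MiniQJMHACluster cluster;

  // Every test case calls this first, so an EOFException thrown while the
  // JournalNodes are still coming up can fail any test in the class, not
  // just testMetaSave.
  private void setUpHaCluster() throws Exception {
    cluster = new MiniQJMHACluster.Builder(new Configuration()).build();
  }

  @After
  public void tearDown() throws Exception {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
{noformat}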