[ https://issues.apache.org/jira/browse/HDFS-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15005071#comment-15005071 ]

Xiao Chen commented on HDFS-9429:
---------------------------------

A sample failure is shown below:
*Error Message*
{noformat}
Unable to check if JNs are ready for formatting. 1 exceptions thrown:
127.0.0.1:42901: End of File Exception between local host is: "172.26.21.176"; 
destination host is: "localhost":42901; : java.io.EOFException; For more 
details see:  http://wiki.apache.org/hadoop/EOFException
{noformat}
*Stacktrace*
{noformat}
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs 
are ready for formatting. 1 exceptions thrown:
127.0.0.1:42901: End of File Exception between local host is: "172.26.21.176"; 
destination host is: "localhost":42901; : java.io.EOFException; For more 
details see:  http://wiki.apache.org/hadoop/EOFException
        at 
org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at 
org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at 
org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:899)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:986)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
        at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:969)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:807)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:467)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:426)
        at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
        at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
        at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
        at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:84)
        at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testMetaSave(TestDFSAdminWithHA.java:197)
{noformat}
*Standard Output*
{noformat}
2015-11-09 19:26:41,365 INFO  qjournal.MiniJournalCluster 
(MiniJournalCluster.java:<init>(87)) - Starting MiniJournalCluster with 3 
journal nodes
2015-11-09 19:26:41,366 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-11-09 19:26:41,367 INFO  hdfs.DFSUtil 
(DFSUtil.java:httpServerTemplateForNNAndJN(1687)) - Starting Web-server for 
journal at: http://localhost:0
2015-11-09 19:26:41,368 INFO  http.HttpRequestLog 
(HttpRequestLog.java:getRequestLog(80)) - Http request log for 
http.requests.journal is not defined
2015-11-09 19:26:41,368 INFO  http.HttpServer2 
(HttpServer2.java:addGlobalFilter(700)) - Added global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-11-09 19:26:41,369 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(678)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context journal
2015-11-09 19:26:41,369 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(685)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
2015-11-09 19:26:41,369 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(685)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2015-11-09 19:26:41,369 INFO  http.HttpServer2 
(HttpServer2.java:openListeners(888)) - Jetty bound to port 53757
2015-11-09 19:26:41,380 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:53757
2015-11-09 19:26:41,380 INFO  ipc.CallQueueManager 
(CallQueueManager.java:<init>(53)) - Using callQueue class 
java.util.concurrent.LinkedBlockingQueue
2015-11-09 19:26:41,381 INFO  ipc.Server (Server.java:run(605)) - Starting 
Socket Reader #1 for port 42901
2015-11-09 19:26:41,383 INFO  ipc.Server (Server.java:run(827)) - IPC Server 
Responder: starting
2015-11-09 19:26:41,383 INFO  ipc.Server (Server.java:run(674)) - IPC Server 
listener on 42901: starting
2015-11-09 19:26:41,384 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-11-09 19:26:41,385 INFO  hdfs.DFSUtil 
(DFSUtil.java:httpServerTemplateForNNAndJN(1687)) - Starting Web-server for 
journal at: http://localhost:0
2015-11-09 19:26:41,386 INFO  http.HttpRequestLog 
(HttpRequestLog.java:getRequestLog(80)) - Http request log for 
http.requests.journal is not defined
2015-11-09 19:26:41,386 INFO  http.HttpServer2 
(HttpServer2.java:addGlobalFilter(700)) - Added global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-11-09 19:26:41,387 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(678)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context journal
2015-11-09 19:26:41,387 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(685)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2015-11-09 19:26:41,387 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(685)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
2015-11-09 19:26:41,387 INFO  http.HttpServer2 
(HttpServer2.java:openListeners(888)) - Jetty bound to port 45615
2015-11-09 19:26:41,398 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45615
2015-11-09 19:26:41,398 INFO  ipc.CallQueueManager 
(CallQueueManager.java:<init>(53)) - Using callQueue class 
java.util.concurrent.LinkedBlockingQueue
2015-11-09 19:26:41,399 INFO  ipc.Server (Server.java:run(605)) - Starting 
Socket Reader #1 for port 60192
2015-11-09 19:26:41,401 INFO  ipc.Server (Server.java:run(827)) - IPC Server 
Responder: starting
2015-11-09 19:26:41,401 INFO  ipc.Server (Server.java:run(674)) - IPC Server 
listener on 60192: starting
2015-11-09 19:26:41,402 INFO  impl.MetricsSystemImpl 
(MetricsSystemImpl.java:init(158)) - JournalNode metrics system started (again)
2015-11-09 19:26:41,404 INFO  hdfs.DFSUtil 
(DFSUtil.java:httpServerTemplateForNNAndJN(1687)) - Starting Web-server for 
journal at: http://localhost:0
2015-11-09 19:26:41,404 INFO  http.HttpRequestLog 
(HttpRequestLog.java:getRequestLog(80)) - Http request log for 
http.requests.journal is not defined
2015-11-09 19:26:41,404 INFO  http.HttpServer2 
(HttpServer2.java:addGlobalFilter(700)) - Added global filter 'safety' 
(class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-11-09 19:26:41,405 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(678)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context journal
2015-11-09 19:26:41,405 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(685)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context static
2015-11-09 19:26:41,405 INFO  http.HttpServer2 
(HttpServer2.java:addFilter(685)) - Added filter static_user_filter 
(class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs
2015-11-09 19:26:41,405 INFO  http.HttpServer2 
(HttpServer2.java:openListeners(888)) - Jetty bound to port 43021
2015-11-09 19:26:41,417 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43021
2015-11-09 19:26:41,418 INFO  ipc.CallQueueManager 
(CallQueueManager.java:<init>(53)) - Using callQueue class 
java.util.concurrent.LinkedBlockingQueue
2015-11-09 19:26:41,418 INFO  ipc.Server (Server.java:run(605)) - Starting 
Socket Reader #1 for port 43930
2015-11-09 19:26:41,420 INFO  ipc.Server (Server.java:run(827)) - IPC Server 
Responder: starting
2015-11-09 19:26:41,420 INFO  ipc.Server (Server.java:run(674)) - IPC Server 
listener on 43930: starting
2015-11-09 19:26:41,422 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:<init>(442)) - starting cluster: numNameNodes=2, 
numDataNodes=0
Formatting using clusterid: testClusterID
2015-11-09 19:26:41,424 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(800)) - No KeyProvider found.
2015-11-09 19:26:41,425 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(812)) - fsLock is fair:true
2015-11-09 19:26:41,425 INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:<init>(237)) - dfs.block.invalidate.limit=1000
2015-11-09 19:26:41,425 INFO  blockmanagement.DatanodeManager 
(DatanodeManager.java:<init>(243)) - 
dfs.namenode.datanode.registration.ip-hostname-check=true
2015-11-09 19:26:41,426 INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(71)) - 
dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-11-09 19:26:41,426 INFO  blockmanagement.BlockManager 
(InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will 
start around 2015 Nov 09 19:26:41
2015-11-09 19:26:41,426 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
BlocksMap
2015-11-09 19:26:41,426 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-11-09 19:26:41,426 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 3.6 GB = 72.8 MB
2015-11-09 19:26:41,427 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^23 = 8388608 
entries
2015-11-09 19:26:41,437 INFO  blockmanagement.BlockManager 
(BlockManager.java:createBlockTokenSecretManager(366)) - 
dfs.block.access.token.enable=false
2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(351)) - defaultReplication         = 0
2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(352)) - maxReplication             = 512
2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(353)) - minReplication             = 1
2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(354)) - maxReplicationStreams      = 2
2015-11-09 19:26:41,438 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(355)) - shouldCheckForEnoughRacks  = false
2015-11-09 19:26:41,439 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(356)) - replicationRecheckInterval = 3000
2015-11-09 19:26:41,439 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(357)) - encryptDataTransfer        = false
2015-11-09 19:26:41,439 INFO  blockmanagement.BlockManager 
(BlockManager.java:<init>(358)) - maxNumBlocksToLog          = 1000
2015-11-09 19:26:41,439 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(837)) - fsOwner             = jenkins (auth:SIMPLE)
2015-11-09 19:26:41,439 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(838)) - supergroup          = supergroup
2015-11-09 19:26:41,439 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(839)) - isPermissionEnabled = true
2015-11-09 19:26:41,440 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(848)) - Determined nameservice ID: ns1
2015-11-09 19:26:41,440 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(850)) - HA Enabled: true
2015-11-09 19:26:41,440 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(887)) - Append Enabled: true
2015-11-09 19:26:41,440 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
INodeMap
2015-11-09 19:26:41,441 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-11-09 19:26:41,441 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 3.6 GB = 36.4 MB
2015-11-09 19:26:41,441 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^22 = 4194304 
entries
2015-11-09 19:26:41,443 INFO  namenode.NameNode (FSDirectory.java:<init>(234)) 
- Caching file names occuring more than 10 times
2015-11-09 19:26:41,443 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
cachedBlocks
2015-11-09 19:26:41,444 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-11-09 19:26:41,444 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 3.6 GB = 9.1 MB
2015-11-09 19:26:41,444 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^20 = 1048576 
entries
2015-11-09 19:26:41,445 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(5771)) - dfs.namenode.safemode.threshold-pct = 
0.9990000128746033
2015-11-09 19:26:41,445 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(5772)) - dfs.namenode.safemode.min.datanodes = 0
2015-11-09 19:26:41,445 INFO  namenode.FSNamesystem 
(FSNamesystem.java:<init>(5773)) - dfs.namenode.safemode.extension     = 0
2015-11-09 19:26:41,445 INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) 
- NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-11-09 19:26:41,445 INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) 
- NNTop conf: dfs.namenode.top.num.users = 10
2015-11-09 19:26:41,446 INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) 
- NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-11-09 19:26:41,446 INFO  namenode.FSNamesystem 
(FSNamesystem.java:initRetryCache(991)) - Retry cache on namenode is enabled
2015-11-09 19:26:41,446 INFO  namenode.FSNamesystem 
(FSNamesystem.java:initRetryCache(999)) - Retry cache will use 0.03 of total 
heap and retry cache entry expiry time is 600000 millis
2015-11-09 19:26:41,446 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map 
NameNodeRetryCache
2015-11-09 19:26:41,446 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2015-11-09 19:26:41,447 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory 
3.6 GB = 1.1 MB
2015-11-09 19:26:41,447 INFO  util.GSet 
(LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^17 = 131072 
entries
2015-11-09 19:26:41,448 INFO  namenode.NNConf (NNConf.java:<init>(62)) - ACLs 
enabled? false
2015-11-09 19:26:41,448 INFO  namenode.NNConf (NNConf.java:<init>(66)) - XAttrs 
enabled? true
2015-11-09 19:26:41,448 INFO  namenode.NNConf (NNConf.java:<init>(74)) - 
Maximum size of an xattr: 16384
2015-11-09 19:26:41,466 WARN  namenode.NameNode (NameNode.java:format(992)) - 
Encountered exception during format: 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs 
are ready for formatting. 1 exceptions thrown:
127.0.0.1:42901: End of File Exception between local host is: "172.26.21.176"; 
destination host is: "localhost":42901; : java.io.EOFException; For more 
details see:  http://wiki.apache.org/hadoop/EOFException
        at 
org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at 
org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
        at 
org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:899)
        at 
org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:986)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
        at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:173)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:969)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:807)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:467)
        at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:426)
        at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:104)
        at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.<init>(MiniQJMHACluster.java:40)
        at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster$Builder.build(MiniQJMHACluster.java:69)
        at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.setUpHaCluster(TestDFSAdminWithHA.java:84)
        at 
org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA.testMetaSave(TestDFSAdminWithHA.java:197)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
        at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
        at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
2015-11-09 19:26:41,467 INFO  server.JournalNode 
(JournalNode.java:getOrCreateJournal(89)) - Initializing journal in directory 
/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1
2015-11-09 19:26:41,467 INFO  server.JournalNode 
(JournalNode.java:getOrCreateJournal(89)) - Initializing journal in directory 
/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/ns1
2015-11-09 19:26:41,467 WARN  common.Storage (Storage.java:analyzeStorage(477)) 
- Storage directory 
/data/jenkins/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-2/ns1
 does not exist
2015-11-09 19:26:41,468 WARN  common.Storage (Storage.java:analyzeStorage(477)) 
- Storage directory 
/data/jenkins/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/journalnode-1/ns1
 does not exist
2015-11-09 19:26:41,468 ERROR hdfs.MiniDFSCluster 
(MiniDFSCluster.java:initMiniDFSCluster(812)) - IOE creating namenodes. 
Permissions dump:
path 
'/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data':
 
        
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data
        permissions: ----
path 
'/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs': 
        
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs
        permissions: drwx
path 
'/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data': 
        
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test/data
        permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test': 
        
absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target/test
        permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target': 
        absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs/target
        permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs': 
        absolute:/data/jenkins/workspace/hadoop-hdfs-project/hadoop-hdfs
        permissions: drwx
path '/data/jenkins/workspace/hadoop-hdfs-project': 
        absolute:/data/jenkins/workspace/hadoop-hdfs-project
        permissions: drwx
path '/data/jenkins/workspace': 
        absolute:/data/jenkins/workspace
        permissions: drwx
path '/data/jenkins/workspace': 
        absolute:/data/jenkins/workspace
        permissions: drwx
path '/data/jenkins': 
        absolute:/data/jenkins
        permissions: drwx
path '/data': 
        absolute:/data
        permissions: dr-x
path '/': 
        absolute:/
        permissions: dr-x

2015-11-09 19:26:41,468 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1708)) - Shutting down the Mini HDFS Cluster
{noformat}

> Tests in TestDFSAdminWithHA intermittently fail with EOFException
> -----------------------------------------------------------------
>
>                 Key: HDFS-9429
>                 URL: https://issues.apache.org/jira/browse/HDFS-9429
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: HDFS
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>
> I have seen this fail a handful of times for {{testMetaSave}}, but from my 
> understanding the failure originates in {{setUpHaCluster}}, so theoretically it 
> could fail for any case in the class.
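
The {{EOFException}} above is the generic Hadoop IPC symptom of the remote side (here a JournalNode) closing the connection before replying. As a minimal, self-contained illustration with no Hadoop involved (class name {{EofDemo}} is hypothetical, plain JDK only), a peer that closes the socket without writing anything surfaces on the reader as {{EOFException}}:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.net.ServerSocket;
import java.net.Socket;

public class EofDemo {
    // Returns true if reading from a peer-closed socket raises EOFException.
    public static boolean hitsEof() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // "JournalNode" side: accept the connection and close it
            // immediately without writing any response bytes.
            Thread peer = new Thread(() -> {
                try (Socket s = server.accept()) {
                    // close without writing anything
                } catch (Exception ignored) {
                }
            });
            peer.start();
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                DataInputStream in = new DataInputStream(client.getInputStream());
                in.readInt(); // peer sent no bytes; stream is already at EOF
                return false;
            } catch (EOFException expected) {
                return true;
            } finally {
                peer.join();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("EOFException raised: " + hitsEof());
    }
}
```

This is only a sketch of the failure mode, not the actual QJM RPC path; in the test, the race is presumably between the NameNode format RPC and the JournalNode's RPC server being fully ready.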



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
