[ https://issues.apache.org/jira/browse/HDFS-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14980975#comment-14980975 ]

Masatake Iwasaki commented on HDFS-9333:
----------------------------------------

Thanks for reporting this, [~drankye].

TestBlockTokenWithDFSStriped already uses MiniDFSCluster with random port 
settings, and 49333 is the randomly chosen port. It could be a race with another 
process using an ephemeral port.
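
To illustrate the kind of race I mean (a minimal JDK-only sketch, not code from 
the test or from MiniDFSCluster): the first bind picks an ephemeral port, and 
rebinding that same number later, as the restart in the stack trace below does 
with 49333, fails if anything else has grabbed it in the meantime.

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortReuseRaceSketch {
  public static void main(String[] args) throws IOException {
    // First bind: port 0 lets the OS hand out a currently free ephemeral port,
    // which is what the random port settings rely on.
    int chosenPort;
    try (ServerSocket first = new ServerSocket()) {
      first.bind(new InetSocketAddress("localhost", 0));
      chosenPort = first.getLocalPort();
      System.out.println("randomly chosen port = " + chosenPort);
    }

    // Between releasing the port and rebinding it later, any other process
    // on the machine can grab the same number ...
    try (ServerSocket intruder = new ServerSocket()) {
      intruder.bind(new InetSocketAddress("localhost", chosenPort));

      // ... and the rebind then fails with a BindException ("Port in use"),
      // as in the report.
      try (ServerSocket rebind = new ServerSocket()) {
        rebind.bind(new InetSocketAddress("localhost", chosenPort));
      } catch (IOException e) {
        System.out.println("rebind failed: " + e.getMessage());
      }
    }
  }
}
{code}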

TestDFSZKFailoverController seems to use fixed port settings because the zkfc 
part does not allow random ports. Let me look into this if you have not started 
the work yet.
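
For reference, the fixed-port setup I mean looks roughly like the sketch below. 
It only follows the usual MiniDFSNNTopology pattern and is not copied from the 
test source; the nameservice/NameNode ids are illustrative, and the 10021/10022 
ports come from the stack trace quoted below.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;

public class FixedPortTopologySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Pin the NameNode IPC ports up front; as noted above, the zkfc part does
    // not work with randomly assigned ports. Any other process already holding
    // 10021 or 10022 makes cluster setup fail with "Address already in use".
    MiniDFSNNTopology topology = new MiniDFSNNTopology()
        .addNameservice(new MiniDFSNNTopology.NSConf("ns1")
            .addNN(new MiniDFSNNTopology.NNConf("nn1").setIpcPort(10021))
            .addNN(new MiniDFSNNTopology.NNConf("nn2").setIpcPort(10022)));

    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(topology)
        .numDataNodes(0)
        .build();
    try {
      cluster.waitActive();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}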


> Some tests using MiniDFSCluster errored complaining port in use
> ---------------------------------------------------------------
>
>                 Key: HDFS-9333
>                 URL: https://issues.apache.org/jira/browse/HDFS-9333
>             Project: Hadoop HDFS
>          Issue Type: Test
>            Reporter: Kai Zheng
>            Priority: Minor
>
> Ref. the following:
> {noformat}
> Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.483 sec <<< FAILURE! - in org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
> testRead(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)  Time elapsed: 11.021 sec  <<< ERROR!
> java.net.BindException: Port in use: localhost:49333
>       at sun.nio.ch.Net.bind0(Native Method)
>       at sun.nio.ch.Net.bind(Net.java:433)
>       at sun.nio.ch.Net.bind(Net.java:425)
>       at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>       at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>       at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>       at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:884)
>       at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:821)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:675)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:883)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:862)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
>       at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2015)
>       at org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1996)
>       at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.doTestRead(TestBlockTokenWithDFS.java:539)
>       at org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testRead(TestBlockTokenWithDFSStriped.java:62)
> {noformat}
> Another one:
> {noformat}
> Tests run: 5, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 9.859 sec <<< FAILURE! - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
> testFailoverAndBackOnNNShutdown(org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController)  Time elapsed: 0.41 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:10021] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
>       at sun.nio.ch.Net.bind0(Native Method)
>       at sun.nio.ch.Net.bind(Net.java:433)
>       at sun.nio.ch.Net.bind(Net.java:425)
>       at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>       at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>       at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>       at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:695)
>       at org.apache.hadoop.ipc.Server.<init>(Server.java:2464)
>       at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>       at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:399)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:742)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:680)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:883)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:862)
>       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
>       at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1245)
>       at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1014)
>       at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:889)
>       at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:821)
>       at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:480)
>       at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:439)
>       at org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController.setup(TestDFSZKFailoverController.java:90)
> testFailoverAndBackOnNNShutdown(org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController)  Time elapsed: 0.41 sec  <<< ERROR!
> java.lang.NullPointerException: null
>       at org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController.shutdown(TestDFSZKFailoverController.java:114)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
