[ https://issues.apache.org/jira/browse/HDFS-16140?focusedWorklogId=627412&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627412 ]
ASF GitHub Bot logged work on HDFS-16140:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 24/Jul/21 16:31
Start Date: 24/Jul/21 16:31
Worklog Time Spent: 10m
Work Description: aajisaka opened a new pull request #3229:
URL: https://github.com/apache/hadoop/pull/3229
Use an available port instead of the default 50200 port in the test.
JIRA: HDFS-16140
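As an illustration of the "available port" idea (a sketch only, not the contents of PR #3229; the findFreePort helper name is made up for this example): bind an ephemeral JDK socket to discover a port that is currently free, then feed that port into the test configuration instead of relying on the hard-coded 50200 default.
{code:java}
import java.io.IOException;
import java.net.ServerSocket;

public final class FreePortSketch {

  /**
   * Ask the OS for an ephemeral port, then release it so the test can
   * bind it shortly afterwards. There is a small race window (another
   * process could grab the port in between), but it avoids always
   * colliding on a single well-known port such as 50200.
   */
  static int findFreePort() throws IOException {
    try (ServerSocket socket = new ServerSocket(0)) {
      return socket.getLocalPort();
    }
  }

  private FreePortSketch() {
  }
}
{code}
A test would then put 127.0.0.1:<freePort> into the aliasmap RPC address setting before starting the cluster; a sketch of that ordering follows the stack trace below.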
Issue Time Tracking
-------------------
Worklog Id: (was: 627412)
Remaining Estimate: 0h
Time Spent: 10m
> TestBootstrapAliasmap fails with BindException
> ----------------------------------------------
>
> Key: HDFS-16140
> URL: https://issues.apache.org/jira/browse/HDFS-16140
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Reporter: Akira Ajisaka
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> TestBootstrapAliasmap fails if port 50200 is already in use.
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3227/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] testAliasmapBootstrap(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap)  Time elapsed: 0.472 s <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:50200] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:914)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:810)
> at org.apache.hadoop.ipc.Server.bind(Server.java:642)
> at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1301)
> at org.apache.hadoop.ipc.Server.<init>(Server.java:3199)
> at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1062)
> at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:464)
> at org.apache.hadoop.ipc.ProtobufRpcEngine2.getServer(ProtobufRpcEngine2.java:371)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:853)
> at org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer.start(InMemoryLevelDBAliasMapServer.java:98)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.startAliasMapServerIfNecessary(NameNode.java:801)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
> at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1378)
> at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1147)
> at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1020)
> at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:952)
> at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:576)
> at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:518)
> at org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap.setup(TestBootstrapAliasmap.java:56)
> {quote}
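The trace shows the bind happening in InMemoryLevelDBAliasMapServer.start() while MiniDFSCluster.Builder#build() creates the NameNode, so any port override has to be in the Configuration before the cluster is built. Below is a minimal, hypothetical sketch of that ordering, not the actual TestBootstrapAliasmap code: it omits the aliasmap-enabling settings the real test applies, and it assumes the dfs.provided.aliasmap.inmemory.dnrpc-address key and NetUtils.getFreeSocketPort() as found in Hadoop trunk.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.net.NetUtils;

public class AliasmapClusterSetupSketch {
  public static MiniDFSCluster startCluster() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Override the aliasmap RPC address BEFORE Builder#build(): the
    // NameNode binds this address in startAliasMapServerIfNecessary()
    // during cluster construction, as the stack trace above shows.
    conf.set("dfs.provided.aliasmap.inmemory.dnrpc-address",
        "127.0.0.1:" + NetUtils.getFreeSocketPort());
    // Hypothetical minimal cluster; the real test also configures the
    // in-memory aliasmap itself before building.
    return new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
  }
}
{code}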