WangYuanben created HDDS-9512:
---------------------------------

             Summary: Ozone Datanode shares the same port as the HDFS Datanode.
                 Key: HDDS-9512
                 URL: https://issues.apache.org/jira/browse/HDDS-9512
             Project: Apache Ozone
          Issue Type: Bug
          Components: Ozone Datanode
            Reporter: WangYuanben


Currently, on the master branch, we have the following config:
{code:xml}
<property>
  <name>hdds.datanode.client.port</name>
  <value>9864</value>
  <tag>OZONE, HDDS, MANAGEMENT</tag>
  <description>
    The port number of the Ozone Datanode client service.
  </description>
</property> {code}
while HDFS has this config:
{code:xml}
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:9864</value>
  <description>
    The datanode http server address and port.
  </description>
</property> {code}
Obviously they share the same port, 9864. When starting HddsDatanodeService on a
node where the HDFS datanode is already running, we get the error:
{code:java}
Caused by: java.net.BindException: Problem binding to [0.0.0.0:9864] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:930)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:826)
        at org.apache.hadoop.ipc.Server.bind(Server.java:680)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1288)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:3223)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1195)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:485)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:452)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:375)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:986)
        at 
org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.startRpcServer(HddsDatanodeClientProtocolServer.java:138)
        at 
org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.lambda$getRpcServer$0(HddsDatanodeClientProtocolServer.java:110)
        at 
org.apache.hadoop.hdds.HddsUtils.preserveThreadName(HddsUtils.java:847)
        at 
org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.getRpcServer(HddsDatanodeClientProtocolServer.java:110)
        at 
org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.<init>(HddsDatanodeClientProtocolServer.java:64)
        at 
org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:317)
        ... 13 more
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.apache.hadoop.ipc.Server.bind(Server.java:663)
        ... 26 more {code}
Conversely, we can't start the HDFS datanode when HddsDatanodeService is already running.

 

This breaks Ozone's ability to coexist with HDFS on the same node. Therefore, we
should probably change this default port.
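As an interim workaround until a new default is agreed on, the Ozone port can be overridden in ozone-site.xml on affected nodes. Note that 19864 below is an arbitrary free port chosen for illustration, not a proposed new default:
{code:xml}
<property>
  <name>hdds.datanode.client.port</name>
  <!-- 19864 is an example value; any port unused on the node works -->
  <value>19864</value>
</property> {code}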



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
