[ 
https://issues.apache.org/jira/browse/HDFS-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dncba updated HDFS-17647:
-------------------------
    Description: 
Given a Hadoop cluster with dual networks:
 * 192.168.x.x: for data transfer, 1 Gbps.
 * 10.x.x.x: for management only, 10 Gbps.
 
Hostnames resolve to addresses on the 10.x.x.x network. To make the HDFS client connect to HDFS by hostname, I set [ dfs.client.use.datanode.hostname=true ] in hdfs-site.xml. However, I found that 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB#getBlockLocations
does not use this configuration when resolving the DataNode ipAddr.
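
For reference, enabling this behaviour looks like the following in hdfs-site.xml (the property name is the documented HDFS client setting; the description text here is illustrative):

```xml
<configuration>
  <property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>true</value>
    <description>Whether the HDFS client should connect to DataNodes by
      hostname instead of the IP address reported by the NameNode.</description>
  </property>
</configuration>
```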
 

{code:java}
// Call chain from DFSInputStream down to the protobuf conversion:

org.apache.hadoop.hdfs.DFSInputStream#fetchBlockAt(long, long, boolean)

private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
    throws IOException {
  ...
  dfsClient.getLocatedBlocks(src, offset);
}

org.apache.hadoop.hdfs.DFSClient#getLocatedBlocks(java.lang.String, long, long)

public LocatedBlocks getLocatedBlocks(String src, long start, long length)
    throws IOException {
  ...
  return callGetBlockLocations(namenode, src, start, length);
}

static LocatedBlocks callGetBlockLocations(ClientProtocol namenode,
    String src, long start, long length) throws IOException {
  ...
  return namenode.getBlockLocations(src, start, length);
}

org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB#getBlockLocations

public LocatedBlocks getBlockLocations(String src, long offset, long length)
    throws IOException {
  ...
  return PBHelperClient.convert(resp.getLocations());
}

org.apache.hadoop.hdfs.protocolPB.PBHelperClient#convert(DatanodeIDProto)

public static DatanodeID convert(DatanodeIDProto dn) {
  ...
  // When dfs.client.use.datanode.hostname=true, the ipAddr here should be
  // resolved from the hostname instead of taken from the proto as-is.
  dn.getIpAddr();
  dn.getHostName();
  ...
}
{code}
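
A possible client-side fix is to re-resolve the address from the hostname when the flag is enabled. The sketch below is a minimal, hypothetical helper (not Hadoop's actual code; the class and method names are invented for illustration) showing the intended behaviour:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical helper: chooses the address an HDFS client should dial.
// When useDatanodeHostname is true, the hostname is re-resolved through
// the client's own DNS, so the client-side view of the network wins over
// the ipAddr the NameNode reported in DatanodeIDProto.
public class DatanodeAddrResolver {

  public static String resolveAddr(String ipAddr, String hostName,
                                   boolean useDatanodeHostname) {
    if (!useDatanodeHostname) {
      return ipAddr; // default behaviour: use the NameNode-reported IP
    }
    try {
      return InetAddress.getByName(hostName).getHostAddress();
    } catch (UnknownHostException e) {
      return ipAddr; // fall back to the reported IP if DNS lookup fails
    }
  }

  public static void main(String[] args) {
    // Flag off: the reported IP is returned unchanged.
    System.out.println(resolveAddr("192.168.1.10", "localhost", false));
    // Flag on: "localhost" is resolved through the client's DNS.
    System.out.println(resolveAddr("192.168.1.10", "localhost", true));
  }
}
```

With the flag off the helper returns the NameNode-reported IP untouched; with it on, the DNS answer on the client side takes precedence, which is what a dual-network deployment needs.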
 


> Datanodeid attribute  ipAddr not use hostname resolve when hdfs client use 
> hostname
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-17647
>                 URL: https://issues.apache.org/jira/browse/HDFS-17647
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 3.1.0
>            Reporter: dncba
>            Priority: Major
>             Fix For: 3.2.3
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
