[
https://issues.apache.org/jira/browse/HDFS-17877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
YUBI LEE updated HDFS-17877:
----------------------------
Description:
DatanodeID.updateRegInfo() updates hostName but not hostNameBytes.
Since PBHelperClient.convert(DatanodeID) uses getHostNameBytes() for protobuf
serialization, clients end up receiving the stale hostname from before the
re-registration.
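For illustration, a rough sketch of that serialization path, assuming the usual
protobuf-generated Bytes accessors on DatanodeIDProto (only getHostNameBytes()
is named in this report; the other fields are shown for context and the method
is abbreviated):
{code}
// Sketch of PBHelperClient.convert(DatanodeID), abbreviated.
public static DatanodeIDProto convert(DatanodeID dn) {
  return DatanodeIDProto.newBuilder()
      .setIpAddrBytes(dn.getIpAddrBytes())     // refreshed via setIpAndXferPort()
      .setHostNameBytes(dn.getHostNameBytes()) // stale: updateRegInfo() never
                                               // refreshed hostNameBytes
      .setXferPort(dn.getXferPort())
      .setInfoPort(dn.getInfoPort())
      .setIpcPort(dn.getIpcPort())
      .build();
}
{code}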
This becomes a real problem when a DataNode first registers with a partially
qualified domain name (PQDN) and later re-registers with a fully qualified
domain name (FQDN). With dfs.client.use.datanode.hostname=true, the client
tries to connect using the old PQDN and fails with an UnknownHostException.
The fix is to add hostNameBytes = nodeReg.getHostNameBytes() in updateRegInfo(),
mirroring how setIpAndXferPort() already keeps ipAddr and ipAddrBytes in sync.
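For concreteness, a minimal sketch of the proposed change (only the
hostname-related lines are shown; the rest of the method body is elided):
{code}
// DatanodeID.updateRegInfo(), abbreviated to the relevant lines.
public void updateRegInfo(DatanodeID nodeReg) {
  hostName = nodeReg.getHostName();
  hostNameBytes = nodeReg.getHostNameBytes(); // proposed addition: keep the
                                              // cached ByteString in sync
  // ... ipAddr/ipAddrBytes/xferPort via setIpAndXferPort(), other fields elided
}
{code}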
In my environment, I use the following configuration:
{code}
dfs.client.use.datanode.hostname=true
hadoop.security.token.service.use_ip=false
{code}
> DatanodeID.updateRegInfo() does not update hostNameBytes causing stale
> hostname on client
> -----------------------------------------------------------------------------------------
>
> Key: HDFS-17877
> URL: https://issues.apache.org/jira/browse/HDFS-17877
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: YUBI LEE
> Priority: Major
> Labels: pull-request-available
>
> DatanodeID.updateRegInfo() updates hostName but not hostNameBytes.
> Since PBHelperClient.convert(DatanodeID) uses getHostNameBytes() for protobuf
> serialization, clients end up receiving the stale hostname from before the
> re-registration.
> This becomes a real problem when a DataNode first registers with a partially
> qualified domain name (PQDN) and later re-registers with a fully qualified
> domain name (FQDN). With dfs.client.use.datanode.hostname=true, the client
> tries to connect using the old PQDN and fails with an UnknownHostException.
> The fix is to add hostNameBytes = nodeReg.getHostNameBytes() in updateRegInfo(),
> mirroring how setIpAndXferPort() already keeps ipAddr and ipAddrBytes in sync.
> In my environment, I use the following configuration:
> {code}
> dfs.client.use.datanode.hostname=true
> hadoop.security.token.service.use_ip=false
> {code}