[
https://issues.apache.org/jira/browse/HDFS-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867877#comment-16867877
]
Chao Sun commented on HDFS-14579:
---------------------------------
One (not quite related) issue we've seen is that when refreshing nodes, DNS
resolution can fail for different DNs on different NameNodes, causing the
NameNodes to see different subsets of DNs.
This is especially bad for the consistent reads from standby feature
(HDFS-12943), as a DN can be seen as valid by the active NN but blacklisted by
the observer NN due to a DNS resolution failure. Later, when a client tries to
read a block on that DN through the observer NN, the observer will report the
block as not found. HDFS-13924 addresses this, but ideally all NNs should see
the same picture (perhaps by having another implementation of
{{HostConfigManager}}).
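For illustration only, a minimal sketch of one such approach: a resolver cache
that falls back to the last successful lookup, so a transient DNS failure on
one NN does not blacklist a DN that the other NNs still accept. The
{{CachingHostResolver}} class and its method are hypothetical, not part of the
existing {{HostConfigManager}} API.
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: cache the last successful DNS resolution per host
// and fall back to it when a fresh lookup fails.
public class CachingHostResolver {
  private final Map<String, InetAddress> lastGood = new ConcurrentHashMap<>();

  public InetAddress resolve(String host) throws UnknownHostException {
    try {
      InetAddress addr = InetAddress.getByName(host);
      lastGood.put(host, addr);  // remember the successful lookup
      return addr;
    } catch (UnknownHostException e) {
      InetAddress cached = lastGood.get(host);
      if (cached != null) {
        return cached;           // tolerate a transient DNS failure
      }
      throw e;                   // never resolved before: propagate
    }
  }
}
{code}
A {{HostConfigManager}} implementation built on such a cache would give every
NN the same stable view of the DN set, as long as each host resolved
successfully at least once.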
> In refreshNodes, avoid performing a DNS lookup while holding the write lock
> ---------------------------------------------------------------------------
>
> Key: HDFS-14579
> URL: https://issues.apache.org/jira/browse/HDFS-14579
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.3.0
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
> Attachments: HDFS-14579.001.patch
>
>
> When refreshNodes is called on a large cluster, or a cluster where DNS is not
> performing well, it can cause the namenode to hang for a long time. This is
> because the refreshNodes operation holds the global write lock while it is
> running. Most of the refreshNodes code is simple and hence fast, but
> unfortunately it performs a DNS lookup for each host in the cluster while the
> lock is held.
> Right now, it calls:
> {code}
> public void refreshNodes(final Configuration conf) throws IOException {
>   refreshHostsReader(conf);
>   namesystem.writeLock();
>   try {
>     refreshDatanodes();
>     countSoftwareVersions();
>   } finally {
>     namesystem.writeUnlock();
>   }
> }
> {code}
> The line refreshHostsReader(conf); re-reads the include/exclude hosts files
> and does a DNS lookup on each entry; the write lock is not held at this
> point. The main work is then done here:
> {code}
> private void refreshDatanodes() {
>   final Map<String, DatanodeDescriptor> copy;
>   synchronized (this) {
>     copy = new HashMap<>(datanodeMap);
>   }
>   for (DatanodeDescriptor node : copy.values()) {
>     // Check if the node is not included.
>     if (!hostConfigManager.isIncluded(node)) {
>       node.setDisallowed(true);
>     } else {
>       long maintenanceExpireTimeInMS =
>           hostConfigManager.getMaintenanceExpirationTimeInMS(node);
>       if (node.maintenanceNotExpired(maintenanceExpireTimeInMS)) {
>         datanodeAdminManager.startMaintenance(
>             node, maintenanceExpireTimeInMS);
>       } else if (hostConfigManager.isExcluded(node)) {
>         datanodeAdminManager.startDecommission(node);
>       } else {
>         datanodeAdminManager.stopMaintenance(node);
>         datanodeAdminManager.stopDecommission(node);
>       }
>     }
>     node.setUpgradeDomain(hostConfigManager.getUpgradeDomain(node));
>   }
> }
> {code}
> The isIncluded() and isExcluded() methods all call node.getResolvedAddress(),
> which performs the DNS lookup. We could probably change things to perform all
> the DNS lookups outside of the write lock and then take the lock to process
> the nodes, changing or overloading isIncluded() etc. to take the resolved
> address rather than the DatanodeDescriptor, as in the sketch below.
> This would not shorten the overall running time of the operation, but it
> would move the long-running DNS work out of the write lock and avoid blocking
> the namenode for the entire duration.
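> For illustration, a rough sketch of that restructuring. The overloads taking
> a resolved address (isIncluded(addr), etc.) are hypothetical, and the
> maintenance/decommission branch is elided:
> {code}
> public void refreshNodes(final Configuration conf) throws IOException {
>   refreshHostsReader(conf);  // reads the hosts files; no lock held
>   final Map<String, DatanodeDescriptor> copy;
>   synchronized (this) {
>     copy = new HashMap<>(datanodeMap);
>   }
>   // Phase 1: perform every DNS lookup before taking the write lock.
>   final Map<DatanodeDescriptor, InetSocketAddress> resolved = new HashMap<>();
>   for (DatanodeDescriptor node : copy.values()) {
>     resolved.put(node, node.getResolvedAddress());  // DNS lookup happens here
>   }
>   // Phase 2: hold the write lock only for the in-memory processing.
>   namesystem.writeLock();
>   try {
>     for (DatanodeDescriptor node : copy.values()) {
>       InetSocketAddress addr = resolved.get(node);
>       if (!hostConfigManager.isIncluded(addr)) {  // hypothetical overload
>         node.setDisallowed(true);
>       } else {
>         // ... maintenance / decommission handling as above, using addr ...
>       }
>       node.setUpgradeDomain(hostConfigManager.getUpgradeDomain(addr));
>     }
>     countSoftwareVersions();
>   } finally {
>     namesystem.writeUnlock();
>   }
> }
> {code}
> The DNS cost is unchanged, but it is paid before the lock is taken, so other
> namenode operations can proceed while the lookups run.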