[ 
https://issues.apache.org/jira/browse/HDFS-16368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841983#comment-17841983
 ] 

ASF GitHub Bot commented on HDFS-16368:
---------------------------------------

hfutatzhanghb commented on code in PR #3743:
URL: https://github.com/apache/hadoop/pull/3743#discussion_r1583039222


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
##########
@@ -1281,6 +1282,57 @@ nodes with its data cleared (or user can just remove the StorageID
     }
   }
 
+  /**
+   * Refresh the network topology of this cluster based on the mapping_topology.data file.
+   */
+  public void refreshTopology() throws IOException {
+    long start = System.currentTimeMillis();

Review Comment:
   Sir, this has been fixed.





>  DFSAdmin supports refresh topology info without restarting namenode
> --------------------------------------------------------------------
>
>                 Key: HDFS-16368
>                 URL: https://issues.apache.org/jira/browse/HDFS-16368
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: dfsadmin, namenode
>    Affects Versions: 2.7.7, 3.3.1
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: features, pull-request-available
>         Attachments: 0001.patch
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently in HDFS, if we update the rack info for rack awareness, we may need
> to rolling-restart the namenodes for the change to take effect. If the cluster
> is large, a rolling restart of the namenodes takes a very long time. So we
> developed a method to refresh the topology info without rolling-restarting the
> namenodes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
