[
https://issues.apache.org/jira/browse/HDFS-16581?focusedWorklogId=780573&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-780573
]
ASF GitHub Bot logged work on HDFS-16581:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 12/Jun/22 01:37
Start Date: 12/Jun/22 01:37
Worklog Time Spent: 10m
Work Description: virajjasani commented on code in PR #4321:
URL: https://github.com/apache/hadoop/pull/4321#discussion_r895085989
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##########
@@ -1648,40 +1647,46 @@ public int metaSave(String[] argv, int idx) throws IOException {
* @throws IOException If an error while getting datanode report
*/
public int printTopology() throws IOException {
- DistributedFileSystem dfs = getDFS();
- final DatanodeInfo[] report = dfs.getDataNodeStats();
-
- // Build a map of rack -> nodes from the datanode report
- HashMap<String, TreeSet<String> > tree = new HashMap<String, TreeSet<String>>();
- for(DatanodeInfo dni : report) {
- String location = dni.getNetworkLocation();
- String name = dni.getName();
-
- if(!tree.containsKey(location)) {
- tree.put(location, new TreeSet<String>());
- }
+ DistributedFileSystem dfs = getDFS();
+ final DatanodeInfo[] report = dfs.getDataNodeStats();
+
+ // Build a map of rack -> nodes from the datanode report
+ HashMap<String, HashMap<String, String>> tree = new HashMap<String,
+ HashMap<String, String>>();
Review Comment:
nit: could you please replace this with the following?
```
Map<String, HashMap<String, String>> map = new HashMap<>();
```
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##########
@@ -1648,40 +1647,46 @@ public int metaSave(String[] argv, int idx) throws IOException {
* @throws IOException If an error while getting datanode report
*/
public int printTopology() throws IOException {
- DistributedFileSystem dfs = getDFS();
- final DatanodeInfo[] report = dfs.getDataNodeStats();
-
- // Build a map of rack -> nodes from the datanode report
- HashMap<String, TreeSet<String> > tree = new HashMap<String, TreeSet<String>>();
- for(DatanodeInfo dni : report) {
- String location = dni.getNetworkLocation();
- String name = dni.getName();
-
- if(!tree.containsKey(location)) {
- tree.put(location, new TreeSet<String>());
- }
+ DistributedFileSystem dfs = getDFS();
+ final DatanodeInfo[] report = dfs.getDataNodeStats();
+
+ // Build a map of rack -> nodes from the datanode report
+ HashMap<String, HashMap<String, String>> tree = new HashMap<String,
+ HashMap<String, String>>();
+ for(DatanodeInfo dni : report) {
+ String location = dni.getNetworkLocation();
+ String name = dni.getName();
+ String dnState = dni.getAdminState().toString();
- tree.get(location).add(name);
+ if(!tree.containsKey(location)) {
+ tree.put(location, new HashMap<String, String>());
Review Comment:
same here, `new HashMap<>()` is sufficient :)
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##########
@@ -1648,40 +1647,46 @@ public int metaSave(String[] argv, int idx) throws IOException {
* @throws IOException If an error while getting datanode report
*/
public int printTopology() throws IOException {
- DistributedFileSystem dfs = getDFS();
- final DatanodeInfo[] report = dfs.getDataNodeStats();
-
- // Build a map of rack -> nodes from the datanode report
- HashMap<String, TreeSet<String> > tree = new HashMap<String, TreeSet<String>>();
- for(DatanodeInfo dni : report) {
- String location = dni.getNetworkLocation();
- String name = dni.getName();
-
- if(!tree.containsKey(location)) {
- tree.put(location, new TreeSet<String>());
- }
+ DistributedFileSystem dfs = getDFS();
+ final DatanodeInfo[] report = dfs.getDataNodeStats();
+
+ // Build a map of rack -> nodes from the datanode report
+ HashMap<String, HashMap<String, String>> tree = new HashMap<String,
+ HashMap<String, String>>();
+ for(DatanodeInfo dni : report) {
+ String location = dni.getNetworkLocation();
+ String name = dni.getName();
+ String dnState = dni.getAdminState().toString();
- tree.get(location).add(name);
+ if(!tree.containsKey(location)) {
+ tree.put(location, new HashMap<String, String>());
}
+
+ HashMap<String, String> node = tree.get(location);
+ node.put(name, dnState);
+ }
- // Sort the racks (and nodes) alphabetically, display in order
- ArrayList<String> racks = new ArrayList<String>(tree.keySet());
- Collections.sort(racks);
+ // Sort the racks (and nodes) alphabetically, display in order
+ ArrayList<String> racks = new ArrayList<String>(tree.keySet());
Review Comment:
nit: `List<String> racks = new ArrayList<>(tree.keySet())`
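Taken together, the three nits amount to declaring collections against the `Map`/`List` interfaces and letting the diamond operator infer the type arguments. A minimal standalone sketch of the rack -> (node -> state) map-building loop; the rack names, node addresses, and states below are hypothetical stand-ins for the `DatanodeInfo` report, since the real `DFSAdmin.printTopology` needs a live cluster:
```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TopologySketch {
  // Hypothetical (location, name, adminState) triples standing in for the
  // values DFSAdmin reads from each DatanodeInfo in the datanode report.
  static final String[][] REPORT = {
      {"/rack2", "dn3:9866", "NORMAL"},
      {"/rack1", "dn1:9866", "NORMAL"},
      {"/rack1", "dn2:9866", "DECOMMISSIONED"},
  };

  // Build rack -> (node -> state), declared against the Map interface with
  // the diamond operator, as the review comments suggest.
  static Map<String, Map<String, String>> buildTree(String[][] report) {
    Map<String, Map<String, String>> tree = new HashMap<>();
    for (String[] dni : report) {
      // computeIfAbsent replaces the containsKey/put dance in the patch.
      tree.computeIfAbsent(dni[0], k -> new HashMap<>()).put(dni[1], dni[2]);
    }
    return tree;
  }

  public static void main(String[] args) {
    Map<String, Map<String, String>> tree = buildTree(REPORT);

    // Sort the racks alphabetically before printing, as printTopology does.
    List<String> racks = new ArrayList<>(tree.keySet());
    Collections.sort(racks);

    for (String rack : racks) {
      System.out.println("Rack: " + rack);
      tree.get(rack).forEach((name, state) ->
          System.out.println("   " + name + " (" + state + ")"));
    }
  }
}
```
Declaring against the interface keeps callers decoupled from the concrete `HashMap`, and the diamond operator removes the repeated type arguments flagged in all three nits.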
Issue Time Tracking
-------------------
Worklog Id: (was: 780573)
Time Spent: 1.5h (was: 1h 20m)
> Print node status when executing printTopology
> ----------------------------------------------
>
> Key: HDFS-16581
> URL: https://issues.apache.org/jira/browse/HDFS-16581
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: dfsadmin, namenode
> Affects Versions: 3.3.0
> Reporter: JiangHua Zhu
> Assignee: JiangHua Zhu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1.5h
> Remaining Estimate: 0h
>
> We can use the dfsadmin tool to see which DataNodes the cluster has; some of
> these nodes are alive, some DECOMMISSIONED, and some DECOMMISSION_INPROGRESS.
> It would be helpful to get this information promptly, for example when
> troubleshooting cluster failures or tracking node status.
--
This message was sent by Atlassian Jira
(v8.20.7#820007)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]