virajjasani commented on code in PR #4107:
URL: https://github.com/apache/hadoop/pull/4107#discussion_r859427464
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##########
@@ -433,7 +433,7 @@ static int run(DistributedFileSystem dfs, String[] argv, int idx) throws IOExcep
*/
private static final String commonUsageSummary =
"\t[-report [-live] [-dead] [-decommissioning] " +
- "[-enteringmaintenance] [-inmaintenance]]\n" +
+ "[-enteringmaintenance] [-inmaintenance] [-slownodes]]\n" +
Review Comment:
Regarding the command options, I believe the filters can ideally be used for both: 1) the
state of DNs (decommissioning, dead, live, etc.) and 2) the nature of DNs (slow
outliers). Updated the doc, please review.
Thanks
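
For illustration, here is a rough fragment-level sketch of how the `-slownodes` filter could be parsed alongside the existing state filters inside the `report` handling shown in the hunk header above. Only the `-slownodes` option name and the `argv`/`idx` parameters come from this PR; the loop shape and the comment about `listAll` are assumptions, not the actual patch:

```
// Sketch only: parsing the new filter next to the existing ones.
boolean listSlowNodes = false;
for (int j = idx; j < argv.length; j++) {
  if ("-slownodes".equalsIgnoreCase(argv[j])) {
    listSlowNodes = true;
  }
  // ... existing handling of -live, -dead, -decommissioning,
  // -enteringmaintenance and -inmaintenance stays as it is ...
}
// Presumably, listAll stays true only when no filter at all is given,
// mirroring how the other state filters are handled.
```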
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##########
@@ -632,6 +638,20 @@ private static void printDataNodeReports(DistributedFileSystem dfs,
}
}
+ private static void printSlowDataNodeReports(DistributedFileSystem dfs, boolean listNodes,
Review Comment:
> I suspect you would need some kind of header to distinguish from the other data node reports.

This is called only if the condition `listAll || listSlowNodes` is true:
```
if (listAll || listSlowNodes) {
  printSlowDataNodeReports(dfs, listSlowNodes, "Slow");
}
```
Sample output:
<img width="524" alt="Screenshot 2022-03-25 at 9 12 58 PM" src="https://user-images.githubusercontent.com/34790606/165455352-303eb506-0a5f-491d-ac44-bcc243a8f0f6.png">
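
For readers following the thread, a minimal sketch of what a header-carrying helper could look like. Only the `(dfs, listNodes, "Slow")` call shape above comes from the PR; the `getSlowDatanodeStats()` accessor, the `reportName` parameter name, and the exact formatting are assumptions:

```
private static void printSlowDataNodeReports(DistributedFileSystem dfs,
    boolean listNodes, String reportName) throws IOException {
  // Hypothetical accessor for the slow-node report; the real method name
  // is not shown in this thread.
  DatanodeInfo[] nodes = dfs.getSlowDatanodeStats();
  if (nodes.length > 0 || listNodes) {
    // The reportName ("Slow") is what distinguishes this section from the
    // Live/Dead/Decommissioning sections of the regular report.
    System.out.println(reportName + " datanodes (" + nodes.length + "):\n");
  }
  if (listNodes) {
    for (DatanodeInfo dn : nodes) {
      System.out.println(dn.getDatanodeReport() + "\n");
    }
  }
}
```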
##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java:
##########
@@ -1868,4 +1868,16 @@ BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
*/
@AtMostOnce
void satisfyStoragePolicy(String path) throws IOException;
+
+ /**
+ * Get report on all of the slow Datanodes. Slow running datanodes are identified based on
+ * the Outlier detection algorithm, if slow peer tracking is enabled for the DFS cluster.
+ *
+ * @return Datanode report for slow running datanodes.
+ * @throws IOException If an I/O error occurs.
+ */
+ @Idempotent
+ @ReadOnly
+ DatanodeInfo[] getSlowDatanodeReport() throws IOException;
Review Comment:
I thought a List would also be fine, but I kept it as an array to keep the API contract in
line with `getDatanodeReport()`, so that both APIs can use the same underlying
utility methods (e.g. `getDatanodeInfoFromDescriptors()`).
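
As an aside on why the array return type makes sharing helpers easy: `DatanodeDescriptor` extends `DatanodeInfo`, so a single descriptor-to-array conversion serves both report RPCs. An illustrative helper (not the real FSNamesystem code, whose name and extra field handling may differ) would be as simple as:

```
// Illustrative only: a List of descriptors in, a DatanodeInfo[] out,
// usable by both getDatanodeReport() and getSlowDatanodeReport().
static DatanodeInfo[] toDatanodeInfoArray(List<? extends DatanodeInfo> descriptors) {
  return descriptors.toArray(new DatanodeInfo[0]);
}
```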
##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##########
@@ -4914,6 +4914,33 @@ int getNumberOfDatanodes(DatanodeReportType type) {
}
}
+ DatanodeInfo[] slowDataNodesReport() throws IOException {
+ String operationName = "slowDataNodesReport";
+ DatanodeInfo[] datanodeInfos;
+ checkSuperuserPrivilege(operationName);
Review Comment:
Not really, removed, thanks.
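
For completeness, a hedged sketch of how the method might read with the superuser check dropped. The read-lock/audit pattern mirrors other FSNamesystem report methods; `getAllSlowDataNodes()` on `DatanodeManager` is an assumed name, since the actual accessor is not shown in this thread:

```
DatanodeInfo[] slowDataNodesReport() throws IOException {
  final String operationName = "slowDataNodesReport";
  DatanodeInfo[] datanodeInfos;
  checkOperation(OperationCategory.UNCHECKED);
  readLock();
  try {
    checkOperation(OperationCategory.UNCHECKED);
    // Hypothetical accessor; the real DatanodeManager method that returns
    // the slow-node set is not shown here.
    datanodeInfos = getBlockManager().getDatanodeManager().getAllSlowDataNodes();
  } finally {
    readUnlock(operationName);
  }
  logAuditEvent(true, operationName, null);
  return datanodeInfos;
}
```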
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.