[ https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Manoj Govindassamy updated HDFS-11847:
--------------------------------------
Attachment: HDFS-11847.05.patch
Thanks [~xiaochen] for the review. Attached v05 patch to address the following.
Please take a look at the latest patch.
1. HDFS-12969 is tracking the enhancements needed for the {{dfsadmin -listOpenFiles}} command.
2. Restored the old API in the client packages.
3. {{FSN#getFilesBlockingDecom}} now returns a batched list honoring {{maxListOpenFilesResponses}}; see the sketch below.
4. Restored the old reporting format.
5. Surprisingly, I don't see this change in the IDE. I was able to remove the unnecessary change after a fresh pull.
Also updated the test case to cover the batched response when listing open files by type.
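For reference, a minimal sketch (not part of the patch) of how a client consumes the batched listing through the HDFS-10480 {{DistributedFileSystem#listOpenFiles}} API: the NameNode returns at most {{maxListOpenFilesResponses}} entries per RPC, and the iterator fetches the next batch transparently. The class name here is mine.
{code:java}
// Minimal sketch, assuming fs.defaultFS points at an HDFS cluster.
// Each NameNode RPC returns at most maxListOpenFilesResponses entries;
// RemoteIterator pulls the next batch behind hasNext()/next().
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;

public class ListOpenFilesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(conf)) {
      RemoteIterator<OpenFileEntry> it = dfs.listOpenFiles();
      while (it.hasNext()) {
        OpenFileEntry entry = it.next();
        System.out.println(entry.getFilePath() + "\t"
            + entry.getClientName() + "\t" + entry.getClientMachine());
      }
    }
  }
}
{code}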
> Enhance dfsadmin listOpenFiles command to list files blocking datanode
> decommissioning
> --------------------------------------------------------------------------------------
>
> Key: HDFS-11847
> URL: https://issues.apache.org/jira/browse/HDFS-11847
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs
> Affects Versions: 3.0.0-alpha1
> Reporter: Manoj Govindassamy
> Assignee: Manoj Govindassamy
> Attachments: HDFS-11847.01.patch, HDFS-11847.02.patch,
> HDFS-11847.03.patch, HDFS-11847.04.patch, HDFS-11847.05.patch
>
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to
> list all the open files in the system.
> Additionally, it would be very useful to list only the open files that are
> blocking DataNode decommissioning. On clusters with a thousand or more
> nodes, where machines may be added and removed regularly for maintenance,
> any option to monitor and debug decommissioning status is very helpful. The
> proposal here is to add sub-options to {{listOpenFiles}} for this case; a
> sketch of the proposed client call follows below.
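To make the proposal concrete, a hedged sketch of the by-type client call: {{OpenFilesType}} and the {{listOpenFiles(EnumSet)}} overload mirror the attached patches, which are still under review, so these names may change before commit.
{code:java}
// Hedged sketch: OpenFilesType and the EnumSet overload follow the patch
// under review (not yet committed), so the API shape is not final.
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType;

public class ListBlockingDecomSketch {
  public static void main(String[] args) throws Exception {
    try (DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration())) {
      // List only the open files blocking DataNode decommissioning.
      RemoteIterator<OpenFileEntry> it =
          dfs.listOpenFiles(EnumSet.of(OpenFilesType.BLOCKING_DECOMMISSION));
      while (it.hasNext()) {
        System.out.println(it.next().getFilePath());
      }
    }
  }
}
{code}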